[ovs-discuss] ovs w dpdk live lock?

O Mahony, Billy billy.o.mahony at intel.com
Thu Aug 30 16:07:08 UTC 2018


Hi David,

The dpdk-lsc-interrupt option worked for me using an Intel 82599. By 'worked' I mean that there is only a transitory packet loss on the loaded link when the other link is physically removed, and a shorter transitory packet loss when the link is re-inserted.

Without dpdk-lsc-interrupt=true, the packet loss on removal continues until the other link is re-inserted.

I did not see the issue using an X710 NIC, regardless of the dpdk-lsc-interrupt setting.

I’ll see if the driver guys can shed some more light on the issue, and get back to you if they have further information.

Regards,
Billy.

From: ovs-discuss-bounces at openvswitch.org [mailto:ovs-discuss-bounces at openvswitch.org] On Behalf Of O Mahony, Billy
Sent: Thursday, August 30, 2018 10:00 AM
To: davidjoshuaevans at gmail.com
Cc: ovs-discuss at openvswitch.org
Subject: Re: [ovs-discuss] ovs w dpdk live lock?

Hi David,

Can you try adding options:dpdk-lsc-interrupt=true to your ovs-vsctl set interface … configuration?

It may not work for your X553 card but is worth a try.
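
Looking at the rte_eth_link_get_nowait() listing in your backtrace, when intr_conf.lsc is set it reads the cached link status instead of calling into the ixgbe link_update path, so it should avoid the blocking delay. Something along these lines should do it (illustrative only; repeat for each dpdk interface, using the port names from your start-up script):

ovs-vsctl set Interface dpdkx0 options:dpdk-lsc-interrupt=true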

Cheers,
/Billy



From: davidjoshuaevans at gmail.com [mailto:davidjoshuaevans at gmail.com]
Sent: Wednesday, August 29, 2018 9:31 PM
To: O Mahony, Billy <billy.o.mahony at intel.com>
Cc: ovs-discuss at openvswitch.org
Subject: Re: [ovs-discuss] ovs w dpdk live lock?

Here's what I found:

In ixgbe_ethdev.c, line 4000:
because:
3995              if ((intr->flags & IXGBE_FLAG_NEED_LINK_CONFIG) &&
3996                          ixgbe_get_media_type(hw) == ixgbe_media_type_fiber)
this happens:
 ixgbe_setup_link(hw, speed, true); <-- "true" is why we are having to wait...

This interrupt flag means, in the normal case, that we ought to configure the port... but what if we don't want to do that just now?
Perhaps DPDK has to change this logic? And OVS might need to separate the concerns of the status check from the port setup?
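
As a rough illustration only (an untested sketch against the 17.11 sources, not a proposed patch), the kind of DPDK-side change I mean would be to stop hard-coding the wait in that block:

              if ((intr->flags & IXGBE_FLAG_NEED_LINK_CONFIG) &&
                          ixgbe_get_media_type(hw) == ixgbe_media_type_fiber) {
                          speed = hw->phy.autoneg_advertised;
                          if (!speed)
                                      ixgbe_get_link_capabilities(hw, &speed, &autoneg);
                          /* was: ixgbe_setup_link(hw, speed, true);
                           * only block on autoneg if the caller asked to wait */
                          ixgbe_setup_link(hw, speed, wait_to_complete != 0);
              }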


#0  0x00007fd08ff790eb in rte_rdtsc () at /usr/src/debug/dpdk-17.11.3/x86_64-default-linuxapp-gcc/include/rte_cycles.h:75
#1  rte_get_tsc_cycles () at /usr/src/debug/dpdk-17.11.3/x86_64-default-linuxapp-gcc/include/rte_cycles.h:89
#2  rte_get_timer_cycles () at /usr/src/debug/dpdk-17.11.3/x86_64-default-linuxapp-gcc/include/generic/rte_cycles.h:128
#3  rte_delay_us_block (us=100000) at /usr/src/debug/dpdk-17.11.3/lib/librte_eal/common/eal_common_timer.c:59
#4  0x00007fd08ab60eec in ixgbe_setup_mac_link_multispeed_fiber (hw=0x7fd03ff8b0c0, speed=128, autoneg_wait_to_complete=1) at /usr/src/debug/dpdk-17.11.3/drivers/net/ixgbe/base/ixgbe_common.c:5155
#5  0x00007fd08ab86cb4 in ixgbe_dev_link_update_share (vf=0, wait_to_complete=0, dev=0x7fd08fced400 <rte_eth_devices+66048>) at /usr/src/debug/dpdk-17.11.3/drivers/net/ixgbe/ixgbe_ethdev.c:4000
#6  ixgbe_dev_link_update (dev=0x7fd08fced400 <rte_eth_devices+66048>, wait_to_complete=0) at /usr/src/debug/dpdk-17.11.3/drivers/net/ixgbe/ixgbe_ethdev.c:4070
#7  0x00007fd08fad22ad in rte_eth_link_get_nowait (port_id=<optimized out>, eth_link=eth_link at entry=0x7ffc224b7200) at /usr/src/debug/dpdk-17.11.3/lib/librte_ether/rte_ethdev.c:1539
#8  0x000055b405791905 in check_link_status (dev=0x7fd03f22dd80) at lib/netdev-dpdk.c:713
#9  0x000055b405791a53 in netdev_dpdk_get_carrier (netdev=netdev at entry=0x7fd03f22ee00, carrier=carrier at entry=0x7ffc224b728f) at lib/netdev-dpdk.c:2673
#10 0x000055b405791abb in netdev_dpdk_get_stats (netdev=0x7fd03f22ee00, stats=0x7ffc224b75c0) at lib/netdev-dpdk.c:2406
#11 0x000055b4056e0adb in netdev_get_stats (netdev=<optimized out>, stats=stats at entry=0x7ffc224b75c0) at lib/netdev.c:1416
#12 0x000055b40564fca8 in iface_refresh_stats (iface=0x55b405f3bc20) at vswitchd/bridge.c:2406
#13 0x000055b405658d68 in run_stats_update () at vswitchd/bridge.c:2798
#14 bridge_run () at vswitchd/bridge.c:3056
#15 0x000055b40564dc9d in main (argc=5, argv=0x7ffc224b7a18) at vswitchd/ovs-vswitchd.c:119
(gdb) frame 4
#4  0x00007fd08ab60eec in ixgbe_setup_mac_link_multispeed_fiber (hw=0x7fd03ff8b0c0, speed=128, autoneg_wait_to_complete=1) at /usr/src/debug/dpdk-17.11.3/drivers/net/ixgbe/base/ixgbe_common.c:5155
5155                                      msec_delay(100);
(gdb) frame 5
#5  0x00007fd08ab86cb4 in ixgbe_dev_link_update_share (vf=0, wait_to_complete=0, dev=0x7fd08fced400 <rte_eth_devices+66048>) at /usr/src/debug/dpdk-17.11.3/drivers/net/ixgbe/ixgbe_ethdev.c:4000
4000                          ixgbe_setup_link(hw, speed, true);
(gdb) info locals
link = <optimized out>
intr = 0x7fd03ff8bff8
speed = 128
wait = 1
autoneg = 0
hw = 0x7fd03ff8b0c0
old = {link_speed = 0, link_duplex = 0, link_autoneg = 1, link_status = 0}
link_speed = 0
link_up = 100
diag = <optimized out>
(gdb) frame 6
#6  ixgbe_dev_link_update (dev=0x7fd08fced400 <rte_eth_devices+66048>, wait_to_complete=0) at /usr/src/debug/dpdk-17.11.3/drivers/net/ixgbe/ixgbe_ethdev.c:4070
4070              return ixgbe_dev_link_update_share(dev, wait_to_complete, 0);
(gdb) info locals
No locals.
(gdb) list
4065  }
4066
4067  static int
4068  ixgbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
4069  {
4070              return ixgbe_dev_link_update_share(dev, wait_to_complete, 0);
4071  }
4072
4073  static int
4074  ixgbevf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
(gdb) frame 7
#7  0x00007fd08fad22ad in rte_eth_link_get_nowait (port_id=<optimized out>, eth_link=eth_link at entry=0x7ffc224b7200) at /usr/src/debug/dpdk-17.11.3/lib/librte_ether/rte_ethdev.c:1539
1539                          (*dev->dev_ops->link_update)(dev, 0);
(gdb) list
1534
1535              if (dev->data->dev_conf.intr_conf.lsc != 0)
1536                          rte_eth_dev_atomic_read_link_status(dev, eth_link);
1537              else {
1538                          RTE_FUNC_PTR_OR_RET(*dev->dev_ops->link_update);
1539                          (*dev->dev_ops->link_update)(dev, 0);
1540                          *eth_link = dev->data->dev_link;
1541              }
1542  }
1543
(gdb) frame 5
#5  0x00007fd08ab86cb4 in ixgbe_dev_link_update_share (vf=0, wait_to_complete=0, dev=0x7fd08fced400 <rte_eth_devices+66048>) at /usr/src/debug/dpdk-17.11.3/drivers/net/ixgbe/ixgbe_ethdev.c:4000
4000                          ixgbe_setup_link(hw, speed, true);
(gdb) list
3995              if ((intr->flags & IXGBE_FLAG_NEED_LINK_CONFIG) &&
3996                          ixgbe_get_media_type(hw) == ixgbe_media_type_fiber) {
3997                          speed = hw->phy.autoneg_advertised;
3998                          if (!speed)
3999                                      ixgbe_get_link_capabilities(hw, &speed, &autoneg);
4000                          ixgbe_setup_link(hw, speed, true);
4001              }
4002
4003              /* check if it needs to wait to complete, if lsc interrupt is enabled */
4004              if (wait_to_complete == 0 || dev->data->dev_conf.intr_conf.lsc != 0)

On Wed, Aug 29, 2018 at 11:10 AM O Mahony, Billy <billy.o.mahony at intel.com> wrote:
Thanks, David. I’ll re-create the scenario here tomorrow and see what happens.

From: davidjoshuaevans at gmail.com [mailto:davidjoshuaevans at gmail.com]
Sent: Wednesday, August 29, 2018 2:16 PM
To: O Mahony, Billy <billy.o.mahony at intel.com>
Cc: ovs-discuss at openvswitch.org
Subject: Re: [ovs-discuss] ovs w dpdk live lock?

That's correct, Billy.

Say, for instance, you have pings from an internal vPort coming through to a DPDK port on a netdev bridge (an OpenFlow rule for each direction of traffic, no MAC learning).
Pings work fine till you unplug the other device.
Then traffic comes in bursts with massive latency, up to 3 seconds or thereabouts.
Now, from the callback check_link_status down the stack... ixgbe_setup_mac_link_multispeed_fiber is 'supposed' to be non-blocking... but I see we end up in rte_delay_us_block.
And it looks to me like the 'main' thread gets held up in that activity.

I hope I am not leading you up the garden path, but that seems to me to be where it's at.

Dave.






On Wed, Aug 29, 2018 at 7:37 AM O Mahony, Billy <billy.o.mahony at intel.com> wrote:
Hi David,

I just saw this email now. Thanks, that’s interesting.

Can you confirm that the issue is a transient interruption in traffic of approximately 3 seconds when the cable of another port (not involved in the test traffic) is unplugged? Then I will try to recreate it here.

Also I’m re-adding the ovs-discuss list to the cc. “Don’t drop the list” is a regular comment on ovs-*.

Regards,
Billy

From: davidjoshuaevans at gmail.com [mailto:davidjoshuaevans at gmail.com]
Sent: Tuesday, August 28, 2018 9:07 PM
To: O Mahony, Billy <billy.o.mahony at intel.com>
Subject: Re: [ovs-discuss] ovs w dpdk live lock?

Also... perhaps related to stats collection:

#0  0x00007fe86789c0e2 in rte_delay_us_block () from /lib64/librte_eal.so.6
#1  0x00007fe8624940c1 in ixgbe_setup_mac_link_multispeed_fiber () from /lib64/librte_pmd_ixgbe.so.2
#2  0x00007fe8624b5684 in ixgbe_dev_link_update () from /lib64/librte_pmd_ixgbe.so.2
#3  0x00007fe8673f52ad in rte_eth_link_get_nowait () from /lib64/librte_ethdev.so.8
#4  0x0000560fa8333905 in check_link_status ()
#5  0x0000560fa8333a53 in netdev_dpdk_get_carrier ()
#6  0x0000560fa8333abb in netdev_dpdk_get_stats ()
#7  0x0000560fa8282adb in netdev_get_stats ()
#8  0x0000560fa81f1ca8 in iface_refresh_stats.part.26 ()
#9  0x0000560fa81fad68 in bridge_run ()
#10 0x0000560fa81efc9d in main ()

The main thread seems to end up here a lot...


On Mon, Aug 27, 2018 at 11:14 AM David Evans <davidjoshuaevans at gmail.com> wrote:
Hi Billy,
I did some more tests.
It appears to happen when I unplug the optics from the SFP.
The DPDK is 17.11.3 and OVS 2.9.2 from source, but built as RPMs and installed from the RPMs.
I don't know why the RPM build reports 17.11.0 - this might be a bug in the RPM script.

As you can see from below, I don't even have rules on the switch.
If I add rules to the switch just to force traffic from one port to another, there will be traffic interruptions, with multiple seconds of latency on the ports.
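
The rules I'm talking about are just simple port-to-port forwards along these lines (illustrative only; the OpenFlow port numbers here are assumptions):

ovs-ofctl add-flow br0 in_port=1,actions=output:2
ovs-ofctl add-flow br0 in_port=2,actions=output:1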

The start-up script looks like this, after attaching all viable devices to igb_uio:

ovsdb-tool create $OVS_DBDIR/conf.db $OVS_SCHEMA
echo "Starting the OVS database server"
ovsdb-server --remote=punix:"$DB_SOCK" \
                 --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
                 --pidfile --detach

ovs-vsctl --no-wait init

echo "********************************"
echo "start ovs                       "
echo "********************************"
echo "Telling the OVS controller to start OVS with DPDK using 512MB hugepage memory and run the ovswitchd daemon on logical core 1"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true \
 other_config:dpdk-lcore-mask=0x1E other_config:dpdk-socket-mem="512"

echo "Starting the OVS daemon..."
ovs-vswitchd unix:$DB_SOCK --pidfile --log-file=/var/log/openvswitch/vswitchd.log --detach
sleep 2
echo "Before creating the DPDK ports, tell OVS to use Core 2 for the DPDK PMD"
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x1E
ovs-vsctl add-port br0 dpdkx0 -- set Interface dpdkx0 type=dpdk options:dpdk-devargs=0000:0a:00.0 options:n_rxq=2
ovs-vsctl add-port br0 dpdkx1 -- set Interface dpdkx1 type=dpdk options:dpdk-devargs=0000:0a:00.1 options:n_rxq=2
ovs-vsctl add-port br0 dpdkx2 -- set Interface dpdkx2 type=dpdk options:dpdk-devargs=0000:0b:00.0
ovs-vsctl add-port br0 dpdkx3 -- set Interface dpdkx3 type=dpdk options:dpdk-devargs=0000:0b:00.1

echo "**********Clearing current flows"
ovs-ofctl del-flows br0

You don't need any traffic to get these kinds of messages; just unplug the fibre.
2018-08-27T15:41:27.629Z|00483|timeval|WARN|Unreasonably long 2961ms poll interval (998ms user, 0ms system)
2018-08-27T15:41:27.629Z|00484|timeval|WARN|context switches: 1 voluntary, 105 involuntary
2018-08-27T15:41:35.469Z|00485|timeval|WARN|Unreasonably long 2840ms poll interval (955ms user, 0ms system)
2018-08-27T15:41:35.469Z|00486|timeval|WARN|context switches: 1 voluntary, 99 involuntary
2018-08-27T15:41:43.469Z|00487|timeval|WARN|Unreasonably long 3000ms poll interval (1000ms user, 0ms system)
2018-08-27T15:41:43.469Z|00488|timeval|WARN|context switches: 1 voluntary, 106 involuntary
2018-08-27T15:41:51.389Z|00489|timeval|WARN|Unreasonably long 2920ms poll interval (996ms user, 0ms system)
2018-08-27T15:41:51.389Z|00490|timeval|WARN|context switches: 1 voluntary, 102 involuntary
2018-08-27T15:41:58.369Z|00491|timeval|WARN|Unreasonably long 1980ms poll interval (992ms user, 0ms system)
2018-08-27T15:41:58.369Z|00492|timeval|WARN|context switches: 0 voluntary, 101 involuntary
2018-08-27T15:41:58.369Z|00493|coverage|INFO|Dropped 4 log messages in last 31 seconds (most recently, 7 seconds ago) due to excessive rate
2018-08-27T15:41:58.369Z|00494|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour,  hash=dc43467d:
2018-08-27T15:41:58.369Z|00495|coverage|INFO|bridge_reconfigure         0.0/sec     0.000/sec        0.0025/sec   total: 9
2018-08-27T15:41:58.369Z|00496|coverage|INFO|ofproto_flush              0.0/sec     0.000/sec        0.0003/sec   total: 1
2018-08-27T15:41:58.369Z|00497|coverage|INFO|ofproto_recv_openflow      0.0/sec     0.000/sec        0.0008/sec   total: 6
2018-08-27T15:41:58.369Z|00498|coverage|INFO|ofproto_update_port        0.0/sec     0.033/sec        0.0036/sec   total: 13
2018-08-27T15:41:58.369Z|00499|coverage|INFO|rev_reconfigure            0.0/sec     0.000/sec        0.0028/sec   total: 10
2018-08-27T15:41:58.369Z|00500|coverage|INFO|rev_port_toggled           0.0/sec     0.017/sec        0.0017/sec   total: 6
2018-08-27T15:41:58.369Z|00501|coverage|INFO|rev_flow_table             0.0/sec     0.000/sec        0.0006/sec   total: 2
2018-08-27T15:41:58.369Z|00502|coverage|INFO|cmap_expand                0.0/sec     0.000/sec        0.0189/sec   total: 68
2018-08-27T15:41:58.369Z|00503|coverage|INFO|cmap_shrink                0.0/sec     0.000/sec        0.0142/sec   total: 51
2018-08-27T15:41:58.369Z|00504|coverage|INFO|dpif_port_add              0.0/sec     0.000/sec        0.0014/sec   total: 5
2018-08-27T15:41:58.369Z|00505|coverage|INFO|dpif_flow_flush            0.0/sec     0.000/sec        0.0006/sec   total: 2
2018-08-27T15:41:58.369Z|00506|coverage|INFO|dpif_flow_get              0.0/sec     0.000/sec        0.0058/sec   total: 21
2018-08-27T15:41:58.369Z|00507|coverage|INFO|dpif_flow_put              0.0/sec     0.000/sec        0.0061/sec   total: 22
2018-08-27T15:41:58.369Z|00508|coverage|INFO|dpif_flow_del              0.0/sec     0.000/sec        0.0058/sec   total: 21
2018-08-27T15:41:58.369Z|00509|coverage|INFO|dpif_execute               0.0/sec     0.000/sec        0.0011/sec   total: 4
2018-08-27T15:41:58.369Z|00510|coverage|INFO|flow_extract               0.0/sec     0.000/sec        0.0008/sec   total: 3
2018-08-27T15:41:58.369Z|00511|coverage|INFO|miniflow_malloc            0.0/sec     0.000/sec        0.0161/sec   total: 58
2018-08-27T15:41:58.369Z|00512|coverage|INFO|hmap_pathological          0.0/sec     0.000/sec        0.0044/sec   total: 16
2018-08-27T15:41:58.369Z|00513|coverage|INFO|hmap_expand               12.2/sec    10.233/sec        2.4458/sec   total: 8890
2018-08-27T15:41:58.369Z|00514|coverage|INFO|hmap_shrink                0.0/sec     0.000/sec        0.0039/sec   total: 14
2018-08-27T15:41:58.369Z|00515|coverage|INFO|netdev_get_stats           1.0/sec     0.667/sec        0.1550/sec   total: 563
2018-08-27T15:41:58.369Z|00516|coverage|INFO|txn_unchanged              0.2/sec     0.117/sec        0.0317/sec   total: 115
2018-08-27T15:41:58.369Z|00517|coverage|INFO|txn_incomplete             0.0/sec     0.033/sec        0.0067/sec   total: 24
2018-08-27T15:41:58.370Z|00518|coverage|INFO|txn_success                0.0/sec     0.033/sec        0.0036/sec   total: 13
2018-08-27T15:41:58.370Z|00519|coverage|INFO|poll_create_node          56.0/sec    44.267/sec        9.8494/sec   total: 35840
2018-08-27T15:41:58.370Z|00520|coverage|INFO|poll_zero_timeout          0.2/sec     0.083/sec        0.0086/sec   total: 32
2018-08-27T15:41:58.370Z|00521|coverage|INFO|rconn_queued               0.0/sec     0.000/sec        0.0006/sec   total: 5
2018-08-27T15:41:58.370Z|00522|coverage|INFO|rconn_sent                 0.0/sec     0.000/sec        0.0006/sec   total: 5
2018-08-27T15:41:58.370Z|00523|coverage|INFO|seq_change               26233.0/sec 26206.533/sec     4611.9075/sec   total: 16674366
2018-08-27T15:41:58.370Z|00524|coverage|INFO|pstream_open               0.0/sec     0.000/sec        0.0008/sec   total: 3
2018-08-27T15:41:58.370Z|00525|coverage|INFO|stream_open                0.0/sec     0.000/sec        0.0003/sec   total: 1
2018-08-27T15:41:58.370Z|00526|coverage|INFO|util_xalloc              176.8/sec   170.633/sec       38.4481/sec   total: 139578
2018-08-27T15:41:58.370Z|00527|coverage|INFO|vconn_received             0.0/sec     0.000/sec        0.0014/sec   total: 11
2018-08-27T15:41:58.370Z|00528|coverage|INFO|vconn_sent                 0.0/sec     0.000/sec        0.0011/sec   total: 10
2018-08-27T15:41:58.370Z|00529|coverage|INFO|netdev_set_policing        0.0/sec     0.000/sec        0.0008/sec   total: 3
2018-08-27T15:41:58.370Z|00530|coverage|INFO|netdev_get_ifindex         0.0/sec     0.000/sec        0.0003/sec   total: 1
2018-08-27T15:41:58.370Z|00531|coverage|INFO|netdev_get_hwaddr          0.0/sec     0.000/sec        0.0031/sec   total: 11
2018-08-27T15:41:58.370Z|00532|coverage|INFO|netdev_set_hwaddr          0.0/sec     0.000/sec        0.0006/sec   total: 2
2018-08-27T15:41:58.370Z|00533|coverage|INFO|netdev_get_ethtool         0.0/sec     0.000/sec        0.0011/sec   total: 4
2018-08-27T15:41:58.370Z|00534|coverage|INFO|netlink_received           0.2/sec     0.133/sec        0.0419/sec   total: 152
2018-08-27T15:41:58.370Z|00535|coverage|INFO|netlink_recv_jumbo         0.2/sec     0.133/sec        0.0311/sec   total: 113
2018-08-27T15:41:58.370Z|00536|coverage|INFO|netlink_sent               0.2/sec     0.133/sec        0.0361/sec   total: 131
2018-08-27T15:41:58.370Z|00537|coverage|INFO|nln_changed                0.0/sec     0.000/sec        0.0017/sec   total: 6
2018-08-27T15:41:58.370Z|00538|coverage|INFO|61 events never hit
2018-08-27T15:42:05.349Z|00539|timeval|WARN|Unreasonably long 1980ms poll interval (992ms user, 0ms system)
2018-08-27T15:42:05.349Z|00540|timeval|WARN|context switches: 0 voluntary, 102 involuntary
2018-08-27T15:42:12.329Z|00541|timeval|WARN|Unreasonably long 1980ms poll interval (991ms user, 0ms system)
2018-08-27T15:42:12.329Z|00542|timeval|WARN|context switches: 0 voluntary, 101 involuntary




On Mon, Aug 27, 2018 at 4:40 AM O Mahony, Billy <billy.o.mahony at intel.com> wrote:
Hi David,

A few more queries below.

Thanks,
Billy.

David Joshua Evans wrote:
> From: ovs-discuss-bounces at openvswitch.org [mailto:ovs-discuss-bounces at openvswitch.org]
> Sent: Friday, August 24, 2018 10:38 PM
> To: bugs at openvswitch.org
> Subject: [ovs-discuss] ovs w dpdk live lock?
>
>
>
> Dpdk 17.11.3
> OVS 2.9.2
> Kernel. 3.10.0-862.9.1.el7.x86_64

Those are compatible versions, but can you double-check, as the log refers to 17.11.0 below (which is still compatible).

>
> If I connect an Intel X553 DPDK port (8086:15c4) to an OVS bridge, but the ports are down or no SFP is installed, the messages below appear in the log.
> Traffic through the 'up' port will incur massive latency (up to 3 or more seconds).

Is this a dual-port X553? With either no SFPs, or SFPs with no cable?
Can you add the script/commands you use to:
- Configure dpdk (igb_uio or vfio etc)
- Start ovs
- Configure routing (ovs-ofctl add-flow ...)
- And how you are generating traffic - if both phy ports are down, are you sending it via a kernel vdev to br0? Please add the commands you are using.
- How and where you are detecting received traffic (tshark, etc.).

>
> 'waiting for main to quiesce'
>
> If I only add ports that are up, it works fine AFAIK.
>
> 2018-08-24T21:27:12.289Z|00001|vlog|INFO|opened log file /var/log/openvswitch/vswitchd.log
> 2018-08-24T21:27:12.292Z|00002|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 0
> 2018-08-24T21:27:12.292Z|00003|ovs_numa|INFO|Discovered 1 NUMA nodes and 16 CPU cores
> 2018-08-24T21:27:12.292Z|00004|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
> 2018-08-24T21:27:12.292Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
> 2018-08-24T21:27:12.295Z|00006|dpdk|INFO|Using DPDK 17.11.0
> 2018-08-24T21:27:12.295Z|00007|dpdk|INFO|DPDK Enabled - initializing...
> 2018-08-24T21:27:12.295Z|00008|dpdk|INFO|No vhost-sock-dir provided - defaulting to /var/run/openvswitch
> 2018-08-24T21:27:12.295Z|00009|dpdk|INFO|IOMMU support for vhost-user-client disabled.
> 2018-08-24T21:27:12.295Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0x1E --socket-mem 512
> 2018-08-24T21:27:12.298Z|00011|dpdk|INFO|EAL: Detected 16 lcore(s)
> 2018-08-24T21:27:12.318Z|00012|dpdk|INFO|EAL: Probing VFIO support...
> 2018-08-24T21:27:13.207Z|00013|dpdk|INFO|EAL: PCI device 0000:02:00.0 on NUMA socket -1
> 2018-08-24T21:27:13.207Z|00014|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
> 2018-08-24T21:27:13.207Z|00015|dpdk|INFO|EAL:   probe driver: 8086:1533 net_e1000_igb
> 2018-08-24T21:27:13.207Z|00016|dpdk|INFO|EAL: PCI device 0000:03:00.0 on NUMA socket -1
> 2018-08-24T21:27:13.207Z|00017|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
> 2018-08-24T21:27:13.207Z|00018|dpdk|INFO|EAL:   probe driver: 8086:1533 net_e1000_igb
> 2018-08-24T21:27:13.207Z|00019|dpdk|INFO|EAL: PCI device 0000:04:00.0 on NUMA socket -1
> 2018-08-24T21:27:13.207Z|00020|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
> 2018-08-24T21:27:13.207Z|00021|dpdk|INFO|EAL:   probe driver: 8086:1533 net_e1000_igb
> 2018-08-24T21:27:13.250Z|00022|dpdk|INFO|EAL: PCI device 0000:05:00.0 on NUMA socket -1
> 2018-08-24T21:27:13.250Z|00023|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
> 2018-08-24T21:27:13.250Z|00024|dpdk|INFO|EAL:   probe driver: 8086:1533 net_e1000_igb
> 2018-08-24T21:27:13.292Z|00025|dpdk|INFO|EAL: PCI device 0000:06:00.0 on NUMA socket -1
> 2018-08-24T21:27:13.292Z|00026|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
> 2018-08-24T21:27:13.292Z|00027|dpdk|INFO|EAL:   probe driver: 8086:1521 net_e1000_igb
> 2018-08-24T21:27:13.356Z|00028|dpdk|INFO|EAL: PCI device 0000:06:00.1 on NUMA socket -1
> 2018-08-24T21:27:13.356Z|00029|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
> 2018-08-24T21:27:13.356Z|00030|dpdk|INFO|EAL:   probe driver: 8086:1521 net_e1000_igb
> 2018-08-24T21:27:13.420Z|00031|dpdk|INFO|EAL: PCI device 0000:06:00.2 on NUMA socket -1
> 2018-08-24T21:27:13.420Z|00032|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
> 2018-08-24T21:27:13.420Z|00033|dpdk|INFO|EAL:   probe driver: 8086:1521 net_e1000_igb
> 2018-08-24T21:27:13.482Z|00034|dpdk|INFO|EAL: PCI device 0000:06:00.3 on NUMA socket -1
> 2018-08-24T21:27:13.482Z|00035|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
> 2018-08-24T21:27:13.483Z|00036|dpdk|INFO|EAL:   probe driver: 8086:1521 net_e1000_igb
> 2018-08-24T21:27:13.545Z|00037|dpdk|INFO|EAL: PCI device 0000:0a:00.0 on NUMA socket -1
> 2018-08-24T21:27:13.545Z|00038|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
> 2018-08-24T21:27:13.545Z|00039|dpdk|INFO|EAL:   probe driver: 8086:15c4 net_ixgbe
> 2018-08-24T21:27:13.987Z|00040|dpdk|INFO|EAL: PCI device 0000:0a:00.1 on NUMA socket -1
> 2018-08-24T21:27:13.987Z|00041|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
> 2018-08-24T21:27:13.987Z|00042|dpdk|INFO|EAL:   probe driver: 8086:15c4 net_ixgbe
> 2018-08-24T21:27:14.419Z|00043|dpdk|INFO|EAL: PCI device 0000:0b:00.0 on NUMA socket -1
> 2018-08-24T21:27:14.419Z|00044|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
> 2018-08-24T21:27:14.419Z|00045|dpdk|INFO|EAL:   probe driver: 8086:15c4 net_ixgbe
> 2018-08-24T21:27:14.846Z|00046|dpdk|INFO|EAL: PCI device 0000:0b:00.1 on NUMA socket -1
> 2018-08-24T21:27:14.846Z|00047|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
> 2018-08-24T21:27:14.846Z|00048|dpdk|INFO|EAL:   probe driver: 8086:15c4 net_ixgbe
> 2018-08-24T21:27:15.283Z|00049|dpdk|INFO|DPDK pdump packet capture enabled
> 2018-08-24T21:27:15.283Z|00050|dpdk|INFO|DPDK Enabled - initialized
> 2018-08-24T21:27:15.285Z|00051|timeval|WARN|Unreasonably long 2991ms poll interval (2047ms user, 941ms system)
> 2018-08-24T21:27:15.285Z|00052|timeval|WARN|faults: 887 minor, 0 major
> 2018-08-24T21:27:15.285Z|00053|timeval|WARN|disk: 0 reads, 8 writes
> 2018-08-24T21:27:15.285Z|00054|timeval|WARN|context switches: 3 voluntary, 178 involuntary
> 2018-08-24T21:27:15.285Z|00055|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour,  hash=8e477bb5:
> 2018-08-24T21:27:15.285Z|00056|coverage|INFO|bridge_reconfigure         0.0/sec     0.000/sec        0.0000/sec   total: 1
> 2018-08-24T21:27:15.285Z|00057|coverage|INFO|cmap_expand                0.0/sec     0.000/sec        0.0000/sec   total: 9
> 2018-08-24T21:27:15.286Z|00058|coverage|INFO|miniflow_malloc            0.0/sec     0.000/sec        0.0000/sec   total: 11
> 2018-08-24T21:27:15.286Z|00059|coverage|INFO|hmap_expand                0.0/sec     0.000/sec        0.0000/sec   total: 398
> 2018-08-24T21:27:15.286Z|00060|coverage|INFO|txn_unchanged              0.0/sec     0.000/sec        0.0000/sec   total: 2
> 2018-08-24T21:27:15.286Z|00061|coverage|INFO|txn_incomplete             0.0/sec     0.000/sec        0.0000/sec   total: 1
> 2018-08-24T21:27:15.286Z|00062|coverage|INFO|poll_create_node           0.0/sec     0.000/sec        0.0000/sec   total: 47
> 2018-08-24T21:27:15.286Z|00063|coverage|INFO|poll_zero_timeout          0.0/sec     0.000/sec        0.0000/sec   total: 1
> 2018-08-24T21:27:15.286Z|00064|coverage|INFO|seq_change                 0.0/sec     0.000/sec        0.0000/sec   total: 53
> 2018-08-24T21:27:15.286Z|00065|coverage|INFO|pstream_open               0.0/sec     0.000/sec        0.0000/sec   total: 1
> 2018-08-24T21:27:15.286Z|00066|coverage|INFO|stream_open                0.0/sec     0.000/sec        0.0000/sec   total: 1
> 2018-08-24T21:27:15.286Z|00067|coverage|INFO|util_xalloc                0.0/sec     0.000/sec        0.0000/sec   total: 7851
> 2018-08-24T21:27:15.286Z|00068|coverage|INFO|netdev_get_hwaddr          0.0/sec     0.000/sec        0.0000/sec   total: 3
> 2018-08-24T21:27:15.286Z|00069|coverage|INFO|netlink_received           0.0/sec     0.000/sec        0.0000/sec   total: 3
> 2018-08-24T21:27:15.286Z|00070|coverage|INFO|netlink_sent               0.0/sec     0.000/sec        0.0000/sec   total: 1
> 2018-08-24T21:27:15.286Z|00071|coverage|INFO|89 events never hit
> 2018-08-24T21:27:15.286Z|00072|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.9.2
> 2018-08-24T21:27:27.358Z|00073|memory|INFO|4668 kB peak resident set size after 15.1 seconds
> 2018-08-24T21:27:27.368Z|00074|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports recirculation
> 2018-08-24T21:27:27.368Z|00075|ofproto_dpif|INFO|netdev at ovs-netdev: VLAN header stack length probed as 1
> 2018-08-24T21:27:27.368Z|00076|ofproto_dpif|INFO|netdev at ovs-netdev: MPLS label stack length probed as 3
> 2018-08-24T21:27:27.368Z|00077|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports truncate action
> 2018-08-24T21:27:27.368Z|00078|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports unique flow ids
> 2018-08-24T21:27:27.368Z|00079|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports clone action
> 2018-08-24T21:27:27.369Z|00080|ofproto_dpif|INFO|netdev at ovs-netdev: Max sample nesting level probed as 10
> 2018-08-24T21:27:27.369Z|00081|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports eventmask in conntrack action
> 2018-08-24T21:27:27.369Z|00082|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports ct_clear action
> 2018-08-24T21:27:27.369Z|00083|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports ct_state
> 2018-08-24T21:27:27.369Z|00084|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports ct_zone
> 2018-08-24T21:27:27.369Z|00085|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports ct_mark
> 2018-08-24T21:27:27.369Z|00086|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports ct_label
> 2018-08-24T21:27:27.369Z|00087|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports ct_state_nat
> 2018-08-24T21:27:27.369Z|00088|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports ct_orig_tuple
> 2018-08-24T21:27:27.369Z|00089|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports ct_orig_tuple6
> 2018-08-24T21:27:27.369Z|00090|bridge|INFO|bridge br0: added interface br0 on port 65534
> 2018-08-24T21:27:27.382Z|00091|bridge|INFO|bridge br0: using datapath ID 00009e03dd9e9b43
> 2018-08-24T21:27:27.383Z|00092|connmgr|INFO|br0: added service controller "punix:/var/run/openvswitch/br0.mgmt"
> 2018-08-24T21:27:31.420Z|00093|connmgr|INFO|br0<->unix#1: 1 flow_mods in the last 0 s (1 deletes)
> 2018-08-24T21:27:33.450Z|00094|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  3 created.
> 2018-08-24T21:27:33.450Z|00095|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  1 created.
> 2018-08-24T21:27:33.451Z|00096|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  2 created.
> 2018-08-24T21:27:33.451Z|00097|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  4 created.
> 2018-08-24T21:27:33.451Z|00098|dpif_netdev|INFO|There are 4 pmd threads on numa node 0
> 2018-08-24T21:27:35.517Z|00099|dpdk|INFO|PMD: ixgbe_dev_link_status_print():  Port 8: Link Down
> 2018-08-24T21:27:35.517Z|00100|netdev_dpdk|INFO|Port 8: 3c:2c:99:f8:80:ea
> 2018-08-24T21:27:37.212Z|00101|dpif_netdev|INFO|Core 3 on numa node 0 assigned port 'data_10g-2' rx queue 0 (measured processing cycles 0).
> 2018-08-24T21:27:37.212Z|00102|dpif_netdev|INFO|Core 2 on numa node 0 assigned port 'data_10g-2' rx queue 1 (measured processing cycles 0).
> 2018-08-24T21:27:38.059Z|00103|bridge|INFO|bridge br0: added interface data_10g-2 on port 1
> 2018-08-24T21:27:39.426Z|00001|ovs_rcu(urcu2)|WARN|blocked 1000 ms waiting for main to quiesce
> 2018-08-24T21:27:39.760Z|00104|bridge|INFO|bridge br0: using datapath ID 00003c2c99f880ea
> 2018-08-24T21:27:39.760Z|00105|timeval|WARN|Unreasonably long 6311ms poll interval (5140ms user, 63ms system)
> 2018-08-24T21:27:39.760Z|00106|timeval|WARN|faults: 51 minor, 0 major
> 2018-08-24T21:27:39.760Z|00107|timeval|WARN|context switches: 4 voluntary, 20012 involuntary
> 2018-08-24T21:27:39.761Z|00108|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour,  hash=83188855:
> 2018-08-24T21:27:39.761Z|00109|coverage|INFO|bridge_reconfigure         0.0/sec     0.067/sec        0.0011/sec   total: 5
> 2018-08-24T21:27:39.761Z|00110|coverage|INFO|ofproto_flush              0.0/sec     0.017/sec        0.0003/sec   total: 1
> 2018-08-24T21:27:39.761Z|00111|coverage|INFO|ofproto_recv_openflow      0.0/sec     0.050/sec        0.0008/sec   total: 3
> 2018-08-24T21:27:39.761Z|00112|coverage|INFO|ofproto_update_port        0.0/sec     0.050/sec        0.0008/sec   total: 4
> 2018-08-24T21:27:39.761Z|00113|coverage|INFO|rev_reconfigure            0.0/sec     0.067/sec        0.0011/sec   total: 5
> 2018-08-24T21:27:39.761Z|00114|coverage|INFO|rev_flow_table             0.0/sec     0.033/sec        0.0006/sec   total: 2
> 2018-08-24T21:27:39.761Z|00115|coverage|INFO|xlate_actions              0.0/sec     0.167/sec        0.0028/sec   total: 10
> 2018-08-24T21:27:39.761Z|00116|coverage|INFO|cmap_expand                0.0/sec     1.217/sec        0.0203/sec   total: 73
> 2018-08-24T21:27:39.761Z|00117|coverage|INFO|cmap_shrink                0.0/sec     0.850/sec        0.0142/sec   total: 54
> 2018-08-24T21:27:39.761Z|00118|coverage|INFO|dpif_port_add              0.0/sec     0.017/sec        0.0003/sec   total: 2
> 2018-08-24T21:27:39.761Z|00119|coverage|INFO|dpif_flow_flush            0.0/sec     0.033/sec        0.0006/sec   total: 2
> 2018-08-24T21:27:39.761Z|00120|coverage|INFO|dpif_flow_get              0.0/sec     0.350/sec        0.0058/sec   total: 21
> 2018-08-24T21:27:39.761Z|00121|coverage|INFO|dpif_flow_put              0.0/sec     0.367/sec        0.0061/sec   total: 22
> 2018-08-24T21:27:39.761Z|00122|coverage|INFO|dpif_flow_del              0.0/sec     0.350/sec        0.0058/sec   total: 22
> 2018-08-24T21:27:39.761Z|00123|coverage|INFO|dpif_execute               0.0/sec     0.067/sec        0.0011/sec   total: 4
> 2018-08-24T21:27:39.761Z|00124|coverage|INFO|flow_extract               0.0/sec     0.050/sec        0.0008/sec   total: 3
> 2018-08-24T21:27:39.761Z|00125|coverage|INFO|miniflow_malloc            0.0/sec     1.050/sec        0.0175/sec   total: 63
> 2018-08-24T21:27:39.761Z|00126|coverage|INFO|hmap_pathological          0.0/sec     0.267/sec        0.0044/sec   total: 16
> 2018-08-24T21:27:39.762Z|00127|coverage|INFO|hmap_expand                6.8/sec    12.967/sec        0.2161/sec   total: 846
> 2018-08-24T21:27:39.762Z|00128|coverage|INFO|hmap_shrink                0.0/sec     0.000/sec        0.0000/sec   total: 1
> 2018-08-24T21:27:39.762Z|00129|coverage|INFO|mac_learning_learned       0.0/sec     0.017/sec        0.0003/sec   total: 1
> 2018-08-24T21:27:39.762Z|00130|coverage|INFO|netdev_received            0.0/sec     0.100/sec        0.0017/sec   total: 8
> 2018-08-24T21:27:39.762Z|00131|coverage|INFO|netdev_get_stats           0.2/sec     0.050/sec        0.0008/sec   total: 4
> 2018-08-24T21:27:39.762Z|00132|coverage|INFO|txn_unchanged              0.0/sec     0.100/sec        0.0017/sec   total: 6
> 2018-08-24T21:27:39.762Z|00133|coverage|INFO|txn_incomplete             0.2/sec     0.133/sec        0.0022/sec   total: 8
> 2018-08-24T21:27:39.762Z|00134|coverage|INFO|txn_success                0.2/sec     0.083/sec        0.0014/sec   total: 5
> 2018-08-24T21:27:39.762Z|00135|coverage|INFO|poll_create_node          28.8/sec    24.750/sec        0.4125/sec   total: 30710
> 2018-08-24T21:27:39.762Z|00136|coverage|INFO|poll_zero_timeout          0.0/sec     0.083/sec        0.0014/sec   total: 6
> 2018-08-24T21:27:39.762Z|00137|coverage|INFO|rconn_queued               0.0/sec     0.033/sec        0.0006/sec   total: 2
> 2018-08-24T21:27:39.762Z|00138|coverage|INFO|rconn_sent                 0.0/sec     0.033/sec        0.0006/sec   total: 2
> 2018-08-24T21:27:39.762Z|00139|coverage|INFO|seq_change               1038.4/sec   204.633/sec        3.4106/sec   total: 45338
> 2018-08-24T21:27:39.762Z|00140|coverage|INFO|pstream_open               0.0/sec     0.050/sec        0.0008/sec   total: 3
> 2018-08-24T21:27:39.762Z|00141|coverage|INFO|stream_open                0.0/sec     0.017/sec        0.0003/sec   total: 1
> 2018-08-24T21:27:39.762Z|00142|coverage|INFO|util_xalloc              131.2/sec   255.867/sec        4.2644/sec   total: 74499
> 2018-08-24T21:27:39.762Z|00143|coverage|INFO|vconn_received             0.0/sec     0.083/sec        0.0014/sec   total: 5
> 2018-08-24T21:27:39.762Z|00144|coverage|INFO|vconn_sent                 0.0/sec     0.067/sec        0.0011/sec   total: 4
> 2018-08-24T21:27:39.762Z|00145|coverage|INFO|netdev_set_policing        0.0/sec     0.033/sec        0.0006/sec   total: 2
> 2018-08-24T21:27:39.762Z|00146|coverage|INFO|netdev_get_ifindex         0.0/sec     0.017/sec        0.0003/sec   total: 1
> 2018-08-24T21:27:39.762Z|00147|coverage|INFO|netdev_get_hwaddr          0.0/sec     0.200/sec        0.0033/sec   total: 12
> 2018-08-24T21:27:39.763Z|00148|coverage|INFO|netdev_set_hwaddr          0.0/sec     0.017/sec        0.0003/sec   total: 2
> 2018-08-24T21:27:39.763Z|00149|coverage|INFO|netdev_get_ethtool         0.0/sec     0.050/sec        0.0008/sec   total: 3
> 2018-08-24T21:27:39.763Z|00150|coverage|INFO|netlink_received           0.4/sec     0.800/sec        0.0133/sec   total: 48
> 2018-08-24T21:27:39.763Z|00151|coverage|INFO|netlink_recv_jumbo         0.2/sec     0.050/sec        0.0008/sec   total: 3
> 2018-08-24T21:27:39.763Z|00152|coverage|INFO|netlink_sent               0.4/sec     0.350/sec        0.0058/sec   total: 21
> 2018-08-24T21:27:39.763Z|00153|coverage|INFO|nln_changed                0.0/sec     0.167/sec        0.0028/sec   total: 10
> 2018-08-24T21:27:39.763Z|00154|coverage|INFO|59 events never hit
> 2018-08-24T21:27:39.763Z|00155|poll_loop|INFO|wakeup due to [POLLIN] on fd 54 (FIFO pipe:[183625]) at lib/ovs-rcu.c:229 (57% CPU usage)
> 2018-08-24T21:27:41.263Z|00002|ovs_rcu(urcu2)|WARN|blocked 1000 ms waiting for main to quiesce
> 2018-08-24T21:27:42.263Z|00003|ovs_rcu(urcu2)|WARN|blocked 2000 ms waiting for main to quiesce
> 2018-08-24T21:27:43.158Z|00156|timeval|WARN|Unreasonably long 3394ms poll interval (2231ms user, 76ms system)
> 2018-08-24T21:27:43.158Z|00157|timeval|WARN|faults: 1 minor, 0 major
> 2018-08-24T21:27:43.158Z|00158|timeval|WARN|context switches: 1 voluntary, 18404 involuntary
> 2018-08-24T21:27:43.158Z|00159|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour,  hash=30651da8:
> 2018-08-24T21:27:43.158Z|00160|coverage|INFO|bridge_reconfigure         0.2/sec     0.083/sec        0.0014/sec   total: 5
> 2018-08-24T21:27:43.158Z|00161|coverage|INFO|ofproto_flush              0.0/sec     0.017/sec        0.0003/sec   total: 1
> 2018-08-24T21:27:43.158Z|00162|coverage|INFO|ofproto_recv_openflow      0.0/sec     0.050/sec        0.0008/sec   total: 3
> 2018-08-24T21:27:43.158Z|00163|coverage|INFO|ofproto_update_port        0.2/sec     0.067/sec        0.0011/sec   total: 4
> 2018-08-24T21:27:43.158Z|00164|coverage|INFO|rev_reconfigure            0.2/sec     0.083/sec        0.0014/sec   total: 5
> 2018-08-24T21:27:43.158Z|00165|coverage|INFO|rev_flow_table             0.0/sec     0.033/sec        0.0006/sec   total: 2
> 2018-08-24T21:27:43.158Z|00166|coverage|INFO|xlate_actions              0.4/sec     0.200/sec        0.0033/sec   total: 12
> 2018-08-24T21:27:43.158Z|00167|coverage|INFO|cmap_expand                0.0/sec     1.217/sec        0.0203/sec   total: 87
> 2018-08-24T21:27:43.158Z|00168|coverage|INFO|cmap_shrink                0.8/sec     0.917/sec        0.0153/sec   total: 69
> 2018-08-24T21:27:43.158Z|00169|coverage|INFO|dpif_port_add              0.2/sec     0.033/sec        0.0006/sec   total: 2
> 2018-08-24T21:27:43.158Z|00170|coverage|INFO|dpif_flow_flush            0.0/sec     0.033/sec        0.0006/sec   total: 2
> 2018-08-24T21:27:43.158Z|00171|coverage|INFO|dpif_flow_get              0.0/sec     0.350/sec        0.0058/sec   total: 21
> 2018-08-24T21:27:43.158Z|00172|coverage|INFO|dpif_flow_put              0.0/sec     0.367/sec        0.0061/sec   total: 22
> 2018-08-24T21:27:43.158Z|00173|coverage|INFO|dpif_flow_del              0.4/sec     0.383/sec        0.0064/sec   total: 23
> 2018-08-24T21:27:43.158Z|00174|coverage|INFO|dpif_execute               0.0/sec     0.067/sec        0.0011/sec   total: 4
> 2018-08-24T21:27:43.158Z|00175|coverage|INFO|flow_extract               0.0/sec     0.050/sec        0.0008/sec   total: 3
> 2018-08-24T21:27:43.158Z|00176|coverage|INFO|miniflow_malloc            0.0/sec     1.050/sec        0.0175/sec   total: 85
> 2018-08-24T21:27:43.158Z|00177|coverage|INFO|hmap_pathological          0.0/sec     0.267/sec        0.0044/sec   total: 16
> 2018-08-24T21:27:43.158Z|00178|coverage|INFO|hmap_expand               13.6/sec    14.100/sec        0.2350/sec   total: 884
> 2018-08-24T21:27:43.158Z|00179|coverage|INFO|hmap_shrink                0.2/sec     0.017/sec        0.0003/sec   total: 1
> 2018-08-24T21:27:43.158Z|00180|coverage|INFO|mac_learning_learned       0.0/sec     0.017/sec        0.0003/sec   total: 1
> 2018-08-24T21:27:43.158Z|00181|coverage|INFO|netdev_received            0.4/sec     0.133/sec        0.0022/sec   total: 8
> 2018-08-24T21:27:43.158Z|00182|coverage|INFO|netdev_get_stats           0.2/sec     0.067/sec        0.0011/sec   total: 6
> 2018-08-24T21:27:43.158Z|00183|coverage|INFO|txn_unchanged              0.0/sec     0.100/sec        0.0017/sec   total: 6
> 2018-08-24T21:27:43.158Z|00184|coverage|INFO|txn_incomplete             0.0/sec     0.133/sec        0.0022/sec   total: 11
> 2018-08-24T21:27:43.158Z|00185|coverage|INFO|txn_success                0.0/sec     0.083/sec        0.0014/sec   total: 5
> 2018-08-24T21:27:43.158Z|00186|coverage|INFO|poll_create_node         12398.0/sec  1057.917/sec       17.6319/sec   total: 79467
> 2018-08-24T21:27:43.158Z|00187|coverage|INFO|poll_zero_timeout          0.4/sec     0.117/sec        0.0019/sec   total: 8
>
>
>
> [root at beast bin]# ovs-vsctl show
> 3465b41d-2f42-4dd4-8f25-90bdd69384ae
>     Bridge "br0"
>         Port "data_10g-3"
>             Interface "data_10g-3"
>                 type: dpdk
>                 options: {dpdk-devargs="0000:0b:00.1", n_rxq="2"}
>         Port "data_10g-2"
>             Interface "data_10g-2"
>                 type: dpdk
>                 options: {dpdk-devargs="0000:0b:00.0", n_rxq="2"}
>         Port "br0"
>             Interface "br0"
>                 type: internal
> [root at beast bin]# ovs-ofctl dump-ports br0
> OFPST_PORT reply (xid=0x2): 3 ports
>   port LOCAL: rx pkts=184, bytes=16044, drop=0, errs=0, frame=0, over=0, crc=0
>            tx pkts=0, bytes=0, drop=0, errs=0, coll=0
>   port  "data_10g-2": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
>            tx pkts=0, bytes=0, drop=0, errs=0, coll=?
>   port  "data_10g-3": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
>            tx pkts=0, bytes=0, drop=0, errs=0, coll=?
> [root at beast bin]# ovs-ofctl show br0
> OFPT_FEATURES_REPLY (xid=0x2): dpid:00003c2c99f880ea
> n_tables:254, n_buffers:0
> capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
> actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
>  1(data_10g-2): addr:3c:2c:99:f8:80:ea
>      config:     0
>      state:      LINK_DOWN
>      current:    10MB-FD 100MB-FD 10GB-FD FIBER AUTO_NEG AUTO_PAUSE
>      speed: 10000 Mbps now, 0 Mbps max
>  2(data_10g-3): addr:3c:2c:99:f8:80:eb
>      config:     0
>      state:      LINK_DOWN
>      current:    10MB-FD 100MB-FD 10GB-FD FIBER AUTO_NEG AUTO_PAUSE
>      speed: 10000 Mbps now, 0 Mbps max
>  LOCAL(br0): addr:3c:2c:99:f8:80:ea
>      config:     0
>      state:      0
>      current:    10MB-FD COPPER
>      speed: 10 Mbps now, 0 Mbps max
> OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
>
>