[ovs-discuss] Problem on creating bridge interface on OVS_DPDK in a VMware VM guest

BALL SUN paulrbk at gmail.com
Wed Sep 27 03:26:52 UTC 2017


Below is the backtrace:

# gdb /usr/local/sbin/ovs-vswitchd coredump
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-94.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/local/sbin/ovs-vswitchd...done.
[New LWP 2716]
[New LWP 2707]
[New LWP 2712]
[New LWP 2711]
[New LWP 2713]
[New LWP 2708]
[New LWP 2714]
[New LWP 2715]
[New LWP 2706]
[New LWP 2710]
[New LWP 2709]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/local/sbin/ovs-vswitchd
--pidfile=/root/run/ovs-vswitchd.pid'.
Program terminated with signal 11, Segmentation fault.
#0  0x000000000068ad75 in vmxnet3_recv_pkts ()
(gdb) next
The program is not being run.
(gdb) bt
#0  0x000000000068ad75 in vmxnet3_recv_pkts ()
#1  0x00000000007d2252 in rte_eth_rx_burst (nb_pkts=32,
rx_pkts=0x7f3e4bffe7b0, queue_id=0, port_id=0 '\000')
    at /data1/build/dpdk-stable-17.05.1/x86_64-native-linuxapp-gcc/include/rte_ethdev.h:2774
#2  netdev_dpdk_rxq_recv (rxq=<optimized out>, batch=0x7f3e4bffe7a0)
at lib/netdev-dpdk.c:1664
#3  0x000000000072e571 in netdev_rxq_recv (rx=rx@entry=0x7f3e5cc4a680,
batch=batch@entry=0x7f3e4bffe7a0) at lib/netdev.c:701
#4  0x000000000070ab0e in dp_netdev_process_rxq_port
(pmd=pmd@entry=0x29e5e20, rx=0x7f3e5cc4a680, port_no=1) at
lib/dpif-netdev.c:3114
#5  0x000000000070ad76 in pmd_thread_main (f_=<optimized out>) at
lib/dpif-netdev.c:3854
#6  0x000000000077e4b4 in ovsthread_wrapper (aux_=<optimized out>) at
lib/ovs-thread.c:348
#7  0x00007f3e5fe07dc5 in start_thread (arg=0x7f3e4bfff700) at
pthread_create.c:308
#8  0x00007f3e5f3eb73d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) bt
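
If more detail is needed, a few additional gdb commands against the same
core may help (these are standard gdb commands, offered only as a
suggestion; locals will only resolve where the binaries were built with
debug symbols):

# gdb /usr/local/sbin/ovs-vswitchd coredump
(gdb) thread apply all bt   # backtraces for every thread, not only the crashing PMD
(gdb) info registers        # register state at the fault (frame #0, vmxnet3_recv_pkts)
(gdb) disassemble           # instructions around the faulting address
(gdb) frame 2               # select the netdev_dpdk_rxq_recv() frame from the trace above
(gdb) info locals           # local variables in that frame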

On Wed, Sep 27, 2017 at 9:15 AM, Sun Paul <paulrbk at gmail.com> wrote:
> Hi
>
> I am trying to decode the core dump; however, I am not familiar with
> the commands needed to debug it. Can you please provide the steps?
>
> The Linux version is CentOS 7 (3.10.0-514.el7.x86_64), and we are
> currently trying to run OVS+DPDK as a VMware guest. The expected
> topology is to connect two other nodes to this OVS+DPDK VM.
>
> On Wed, Sep 27, 2017 at 5:13 AM, Darrell Ball <dball at vmware.com> wrote:
>>
>>
>> On 9/25/17, 11:18 PM, "Sun Paul" <paulrbk at gmail.com> wrote:
>>
>>     Hi
>>
>>     I am trying to use the vmxnet3 NIC (0000:03:00.0) as the dpdk0 interface.
>>
>>     # ./dpdk-devbind.py --status
>>
>>     Network devices using DPDK-compatible driver
>>     ============================================
>>     0000:03:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=
>>
>>     Network devices using kernel driver
>>     ===================================
>>     0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper) 100f'
>>     if=ens33 drv=e1000 unused=igb_uio *Active*
>>
>>     Other Network devices
>>     =====================
>>     0000:0b:00.0 'VMXNET3 Ethernet Controller 07b0' unused=igb_uio
>>
>>
>>     So, when I try to execute "ovs-vsctl add-port g1 dpdk0 -- set
>>     Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0",
>>     I get the core dump shown below.
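
A dpdk port needs a userspace (netdev) datapath bridge; the first two
commands below are the usual OVS-DPDK setup steps and are listed only as
assumptions (they do not appear in this mail), while the add-port line is
the one actually run:

# ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
# ovs-vsctl add-br g1 -- set bridge g1 datapath_type=netdev
# ovs-vsctl add-port g1 dpdk0 -- set Interface dpdk0 type=dpdk \
      options:dpdk-devargs=0000:03:00.0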
>>
>> [Darrell]
>> Please decode your core dump (by the way, you will probably get this request on some of the other threads you created).
>> What version of RHEL is this?
>> Can you describe your environment and your setup steps?
>>
>>
>>     Sep 27 11:28:50 dlocalhost ovs-vsctl: ovs|00001|vsctl|INFO|Called as
>>     ovs-vsctl del-port g1 dpdk0
>>     Sep 27 11:28:50 dlocalhost ovs-vsctl: ovs|00002|db_ctl_base|ERR|no
>>     port named dpdk0
>>     Sep 27 11:28:51 dlocalhost ovs-vswitchd[2858]:
>>     ovs|00071|memory|INFO|20076 kB peak resident set size after 10.3
>>     seconds
>>     Sep 27 11:28:51 dlocalhost ovs-vswitchd:
>>     2017-09-27T03:28:51Z|00071|memory|INFO|20076 kB peak resident set size
>>     after 10.3 seconds
>>     Sep 27 11:28:51 dlocalhost ovs-vswitchd:
>>     2017-09-27T03:28:51Z|00072|memory|INFO|handlers:2 ports:1
>>     revalidators:2 rules:5
>>     Sep 27 11:28:51 dlocalhost ovs-vswitchd[2858]:
>>     ovs|00072|memory|INFO|handlers:2 ports:1 revalidators:2 rules:5
>>     Sep 27 11:29:10 dlocalhost ovs-vsctl: ovs|00001|vsctl|INFO|Called as
>>     ovs-vsctl add-port g1 dpdk0 -- set Interface dpdk0 type=dpdk
>>     options:dpdk-devargs=0000:03:00.0
>>     Sep 27 11:29:10 dlocalhost ovs-vswitchd[2858]:
>>     ovs|00073|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  0
>>     created.
>>     Sep 27 11:29:10 dlocalhost ovs-vswitchd:
>>     2017-09-27T03:29:10Z|00073|dpif_netdev|INFO|PMD thread on numa_id: 0,
>>     core id:  0 created.
>>     Sep 27 11:29:10 dlocalhost ovs-vswitchd:
>>     2017-09-27T03:29:10Z|00074|dpif_netdev|INFO|There are 1 pmd threads on
>>     numa node 0
>>     Sep 27 11:29:10 dlocalhost ovs-vswitchd[2858]:
>>     ovs|00074|dpif_netdev|INFO|There are 1 pmd threads on numa node 0
>>     Sep 27 11:29:10 dlocalhost ovs-vswitchd[2858]:
>>     ovs|00075|netdev_dpdk|WARN|Rx checksum offload is not supported on
>>     port 0
>>     Sep 27 11:29:10 dlocalhost ovs-vswitchd:
>>     2017-09-27T03:29:10Z|00075|netdev_dpdk|WARN|Rx checksum offload is not
>>     supported on port 0
>>     Sep 27 11:29:10 dlocalhost ovs-vswitchd:
>>     2017-09-27T03:29:10Z|00076|netdev_dpdk|ERR|Interface dpdk0 MTU (1500)
>>     setup error: Operation not supported
>>     Sep 27 11:29:10 dlocalhost ovs-vswitchd:
>>     2017-09-27T03:29:10Z|00077|netdev_dpdk|ERR|Interface dpdk0(rxq:1
>>     txq:2) configure error: Operation not supported
>>     Sep 27 11:29:10 dlocalhost ovs-vswitchd[2858]:
>>     ovs|00076|netdev_dpdk|ERR|Interface dpdk0 MTU (1500) setup error:
>>     Operation not supported
>>     Sep 27 11:29:10 dlocalhost ovs-vswitchd[2858]:
>>     ovs|00077|netdev_dpdk|ERR|Interface dpdk0(rxq:1 txq:2) configure
>>     error: Operation not supported
>>     Sep 27 11:29:10 dlocalhost kernel: pmd12[2885]: segfault at 64 ip
>>     000000000067fa98 sp 00007f588d7f9680 error 4 in
>>     ovs-vswitchd[400000+566000]
>>     Sep 27 11:29:10 dlocalhost abrt-hook-ccpp: Process 2858 (ovs-vswitchd)
>>     of user 0 killed by SIGSEGV - dumping core
>>     Sep 27 11:29:10 dlocalhost abrt-hook-ccpp: Failed to create
>>     core_backtrace: waitpid failed: No child processes
>>     Sep 27 11:29:10 dlocalhost systemd: ovs-vswitchd.service: main process
>>     exited, code=killed, status=11/SEGV
>>     Sep 27 11:29:10 dlocalhost systemd: Unit ovs-vswitchd.service entered
>>     failed state.
>>     Sep 27 11:29:10 dlocalhost systemd: ovs-vswitchd.service failed.
>>     Sep 27 11:29:11 dlocalhost abrt-server: Executable
>>     '/usr/local/sbin/ovs-vswitchd' doesn't belong to any package and
>>     ProcessUnpackaged is set to 'no'
>>     Sep 27 11:29:11 dlocalhost abrt-server: 'post-create' on
>>     '/var/spool/abrt/ccpp-2017-09-27-11:29:10-2858' exited with 1
>>     Sep 27 11:29:11 dlocalhost abrt-server: Deleting problem directory
>>     '/var/spool/abrt/ccpp-2017-09-27-11:29:10-2858'
>>
>>
>>
>>     On Tue, Sep 26, 2017 at 1:09 PM, Darrell Ball <dball at vmware.com> wrote:
>>     >
>>     >
>>     > On 9/25/17, 9:26 PM, "Sun Paul" <paulrbk at gmail.com> wrote:
>>     >
>>     >     We are evaluating DPDK on VMXNET3; is it supported in a VMware guest?
>>     >
>>     >     > I see what you are doing now; I think it should be supported.
>>     >     > What driver are you using to bind the “vmxnet3” NIC?
>>     >     > You should probably provide all the steps you are using and your environment.
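
The driver binding can be checked, and changed if needed, with
dpdk-devbind.py from the DPDK tree; these are the standard DPDK commands,
and the PCI address is simply the one used elsewhere in this thread:

# modprobe uio
# insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko   # usual DPDK build output path; adjust to your tree
# ./dpdk-devbind.py --status
# ./dpdk-devbind.py --bind=igb_uio 0000:03:00.0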
>>     >
>>     >
>>     >
>>     >     On Tue, Sep 26, 2017 at 12:09 PM, Darrell Ball <dball at vmware.com> wrote:
>>     >     > Do you have a hard requirement to use vmxnet3?
>>     >     > What are your requirements otherwise?
>>     >     >
>>     >     >
>>     >     > On 9/25/17, 9:06 PM, "ovs-discuss-bounces at openvswitch.org on behalf of Darrell Ball" <ovs-discuss-bounces at openvswitch.org on behalf of dball at vmware.com> wrote:
>>     >     >
>>     >     >
>>     >     >
>>     >     >     On 9/25/17, 8:31 PM, "Sun Paul" <paulrbk at gmail.com> wrote:
>>     >     >
>>     >     >         Hi
>>     >     >
>>     >     >         Thanks for the reply, but can you explain what I should do?
>>     >     >
>>     >     >
>>     >     >     Do you have a hard requirement to use ESXi and vmxnet3?
>>     >     >     What are your requirements otherwise?
>>     >     >
>>     >     >
>>     >     >
>>     >     >         On Tue, Sep 26, 2017 at 3:18 AM, Darrell Ball <dball at vmware.com> wrote:
>>     >     >         >
>>     >     >         >
>>     >     >         > On 9/25/17, 2:53 AM, "ovs-discuss-bounces at openvswitch.org on behalf of Sun Paul" <ovs-discuss-bounces at openvswitch.org on behalf of paulrbk at gmail.com> wrote:
>>     >     >         >
>>     >     >         >     Hi
>>     >     >         >
>>     >     >         >     I am trying to set up OVS+DPDK on a VM guest in a VMware
>>     >     >         >     environment. The network adapter type for the DPDK interface is vmxnet3.
>>     >     >         >
>>     >     >         > The current support for guest connectivity in OVS-DPDK is documented here:
>>     >     >         > http://docs.openvswitch.org/en/latest/topics/dpdk/vhost-user/
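
As a minimal example of that model (the bridge name br0 and the port name
vhost-user-1 are only placeholders; the bridge must use
datapath_type=netdev), a vhost-user port is added with:

# ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser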
>>     >     >         >
>>     >     >         >
>>     >     >         >     When I try to add-port on it, it fails. Any idea?
>>     >     >         >
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd:
>>     >     >         >     2017-09-26T09:51:57Z|00058|netdev_dpdk|ERR|Interface dpdk0 MTU (1500)
>>     >     >         >     setup error: Operation not supported
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd:
>>     >     >         >     2017-09-26T09:51:57Z|00059|netdev_dpdk|ERR|Interface dpdk0(rxq:1
>>     >     >         >     txq:2) configure error: Operation not supported
>>     >     >         >
>>     >     >         >
>>     >     >         >
>>     >     >         >     The full log is shown below.
>>     >     >         >
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd:
>>     >     >         >     2017-09-26T09:51:26Z|00051|ofproto_dpif|INFO|netdev at ovs-netdev:
>>     >     >         >     Datapath supports ct_orig_tuple6
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00039|ofproto_dpif|INFO|netdev at ovs-netdev: MPLS label stack length
>>     >     >         >     probed as 3
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00040|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports
>>     >     >         >     truncate action
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00041|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports
>>     >     >         >     unique flow ids
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00042|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports clone
>>     >     >         >     action
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00043|ofproto_dpif|INFO|netdev at ovs-netdev: Max sample nesting
>>     >     >         >     level probed as 10
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00044|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports
>>     >     >         >     eventmask in conntrack action
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00045|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports
>>     >     >         >     ct_state
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00046|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports
>>     >     >         >     ct_zone
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00047|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports
>>     >     >         >     ct_mark
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00048|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports
>>     >     >         >     ct_label
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00049|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports
>>     >     >         >     ct_state_nat
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00050|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports
>>     >     >         >     ct_orig_tuple
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00051|ofproto_dpif|INFO|netdev at ovs-netdev: Datapath supports
>>     >     >         >     ct_orig_tuple6
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd:
>>     >     >         >     2017-09-26T09:51:26Z|00052|bridge|INFO|bridge c1: added interface c1
>>     >     >         >     on port 65534
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00052|bridge|INFO|bridge c1: added interface c1 on port 65534
>>     >     >         >     Sep 26 17:51:26 plocalhost kernel: device c1 entered promiscuous mode
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd:
>>     >     >         >     2017-09-26T09:51:26Z|00053|bridge|INFO|bridge c1: using datapath ID
>>     >     >         >     000076ff8ff75d4e
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00053|bridge|INFO|bridge c1: using datapath ID 000076ff8ff75d4e
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd:
>>     >     >         >     2017-09-26T09:51:26Z|00054|connmgr|INFO|c1: added service controller
>>     >     >         >     "punix:/usr/local/var/run/openvswitch/c1.mgmt"
>>     >     >         >     Sep 26 17:51:26 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00054|connmgr|INFO|c1: added service controller
>>     >     >         >     "punix:/usr/local/var/run/openvswitch/c1.mgmt"
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vsctl: ovs|00001|vsctl|INFO|Called as
>>     >     >         >     ovs-vsctl add-port c1 dpdk0 -- set Interface dpdk0 type=dpdk
>>     >     >         >     options:dpdk-devargs=0000:03:00.0
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd:
>>     >     >         >     2017-09-26T09:51:57Z|00055|dpif_netdev|INFO|PMD thread on numa_id: 0,
>>     >     >         >     core id:  0 created.
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00055|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  0
>>     >     >         >     created.
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd:
>>     >     >         >     2017-09-26T09:51:57Z|00056|dpif_netdev|INFO|There are 1 pmd threads on
>>     >     >         >     numa node 0
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00056|dpif_netdev|INFO|There are 1 pmd threads on numa node 0
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00057|netdev_dpdk|WARN|Rx checksum offload is not supported on
>>     >     >         >     port 0
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd:
>>     >     >         >     2017-09-26T09:51:57Z|00057|netdev_dpdk|WARN|Rx checksum offload is not
>>     >     >         >     supported on port 0
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd:
>>     >     >         >     2017-09-26T09:51:57Z|00058|netdev_dpdk|ERR|Interface dpdk0 MTU (1500)
>>     >     >         >     setup error: Operation not supported
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd:
>>     >     >         >     2017-09-26T09:51:57Z|00059|netdev_dpdk|ERR|Interface dpdk0(rxq:1
>>     >     >         >     txq:2) configure error: Operation not supported
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd:
>>     >     >         >     2017-09-26T09:51:57Z|00060|bridge|INFO|bridge c1: added interface
>>     >     >         >     dpdk0 on port 1
>>     >     >         >     Sep 26 17:51:57 plocalhost kernel: pmd12[2671]: segfault at 64 ip
>>     >     >         >     000000000067fa98 sp 00007f68faffc680 error 4 in
>>     >     >         >     ovs-vswitchd[400000+566000]
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00058|netdev_dpdk|ERR|Interface dpdk0 MTU (1500) setup error:
>>     >     >         >     Operation not supported
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00059|netdev_dpdk|ERR|Interface dpdk0(rxq:1 txq:2) configure
>>     >     >         >     error: Operation not supported
>>     >     >         >     Sep 26 17:51:57 plocalhost ovs-vswitchd[2624]:
>>     >     >         >     ovs|00060|bridge|INFO|bridge c1: added interface dpdk0 on port 1
>>     >     >         >     Sep 26 17:51:57 plocalhost abrt-hook-ccpp: Process 2624 (ovs-vswitchd)
>>     >     >         >     of user 0 killed by SIGSEGV - dumping core
>>     >     >         >     Sep 26 17:51:58 plocalhost abrt-hook-ccpp: Failed to create
>>     >     >         >     core_backtrace: waitpid failed: No child processes
>>     >     >         >     Sep 26 17:51:58 plocalhost systemd: ovs-vswitchd.service: main process
>>     >     >         >     exited, code=killed, status=11/SEGV
>>     >     >         >     Sep 26 17:51:58 plocalhost systemd: Unit ovs-vswitchd.service entered
>>     >     >         >     failed state.
>>     >     >         >     Sep 26 17:51:58 plocalhost systemd: ovs-vswitchd.service failed.
>>     >     >         >     Sep 26 17:51:58 plocalhost abrt-server: Executable
>>     >     >         >     '/usr/local/sbin/ovs-vswitchd' doesn't belong to any package and
>>     >     >         >     ProcessUnpackaged is set to 'no'
>>     >     >         >     Sep 26 17:51:58 plocalhost abrt-server: 'post-create' on
>>     >     >         >     '/var/spool/abrt/ccpp-2017-09-26-17:51:57-2624' exited with 1
>>     >     >         >     Sep 26 17:51:58 plocalhost abrt-server: Deleting problem directory
>>     >     >         >     '/var/spool/abrt/ccpp-2017-09-26-17:51:57-2624'
>>     >     >         >
>>     >     >         >
>>     >     >
>>     >     >
>>     >     >
>>     >     >
>>     >
>>     >
>>
>>

