<div dir="ltr"><br>cc to ovs-discuss<br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">---------- Forwarded message ---------<br>发件人: <strong class="gmail_sendername" dir="auto">shuangyang qian</strong> <span dir="auto">&lt;<a href="mailto:qsyqian@gmail.com">qsyqian@gmail.com</a>&gt;</span><br>Date: 2019年11月5日周二 下午6:12<br>Subject: Re: [ovs-discuss] the network performence is not normal when use openvswitch.ko make from ovs tree<br>To: Tonghao Zhang &lt;<a href="mailto:xiangxia.m.yue@gmail.com">xiangxia.m.yue@gmail.com</a>&gt;<br></div><br><br><div dir="ltr">thank you for your reply, i just change my kernel version as same as you and do the steps you provide, and get the same result which i metioned at first. The process is like below.<div>on node1:</div><div># ovs-vsctl show<br>4f4b936e-ddb9-4fc6-b0aa-6eb6034d4671<br>    Bridge br-int<br>        Port br-int<br>            Interface br-int<br>                type: internal<br>        Port &quot;gnv0&quot;<br>            Interface &quot;gnv0&quot;<br>                type: geneve<br>                options: {csum=&quot;true&quot;, key=&quot;100&quot;, remote_ip=&quot;10.18.124.2&quot;}<br>        Port &quot;veth-vm1&quot;<br>            Interface &quot;veth-vm1&quot;<br>    ovs_version: &quot;2.12.0&quot;<br></div><div># ip netns exec vm1 ip a<br>1: lo: &lt;LOOPBACK&gt; mtu 65536 qdisc noop state DOWN group default qlen 1000<br>    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00<br>2: ovs-gretap0@NONE: &lt;BROADCAST,MULTICAST&gt; mtu 1462 qdisc noop state DOWN group default qlen 1000<br>    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff<br>3: erspan0@NONE: &lt;BROADCAST,MULTICAST&gt; mtu 1450 qdisc noop state DOWN group default qlen 1000<br>    link/ether 32:d9:4f:86:c3:58 brd ff:ff:ff:ff:ff:ff<br>4: ovs-ip6gre0@NONE: &lt;NOARP&gt; mtu 1448 qdisc noop state DOWN group default qlen 1000<br>    link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00<br>5: ovs-ip6tnl0@NONE: &lt;NOARP&gt; mtu 1452 qdisc noop state DOWN group default qlen 1000<br>    link/tunnel6 :: brd ::<br>19: vm1-eth0@if18: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default qlen 1000<br>    link/ether 32:4b:51:e2:2b:f4 brd ff:ff:ff:ff:ff:ff link-netnsid 0<br>    inet <a href="http://192.168.100.10/24" target="_blank">192.168.100.10/24</a> scope global vm1-eth0<br>       valid_lft forever preferred_lft forever<br>    inet6 fe80::304b:51ff:fee2:2bf4/64 scope link <br>       valid_lft forever preferred_lft forever<br></div><div><br></div><div>on node2:</div><div># ovs-vsctl show<br>53df6c21-c210-4c2c-a7ab-b1edb0df4a31<br>    Bridge br-int<br>        Port &quot;veth-vm2&quot;<br>            Interface &quot;veth-vm2&quot;<br>        Port &quot;gnv0&quot;<br>            Interface &quot;gnv0&quot;<br>                type: geneve<br>                options: {csum=&quot;true&quot;, key=&quot;100&quot;, remote_ip=&quot;10.18.124.1&quot;}<br>        Port br-int<br>            Interface br-int<br>                type: internal<br>    ovs_version: &quot;2.12.0&quot;<br></div><div># ip netns exec vm2 ip a<br>1: lo: &lt;LOOPBACK&gt; mtu 65536 qdisc noop state DOWN group default qlen 1000<br>    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00<br>2: ovs-gretap0@NONE: &lt;BROADCAST,MULTICAST&gt; mtu 1462 qdisc noop state DOWN group default qlen 1000<br>    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff<br>3: erspan0@NONE: &lt;BROADCAST,MULTICAST&gt; mtu 1450 qdisc noop state DOWN group default qlen 1000<br>    link/ether 8e:90:3e:95:1b:dd brd ff:ff:ff:ff:ff:ff<br>4: ovs-ip6gre0@NONE: &lt;NOARP&gt; mtu 1448 qdisc noop state DOWN group default qlen 1000<br>    link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00<br>5: ovs-ip6tnl0@NONE: &lt;NOARP&gt; mtu 1452 qdisc noop state DOWN group default qlen 1000<br>    
link/tunnel6 :: brd ::<br>11: vm2-eth0@if10: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default qlen 1000<br>    link/ether ee:e4:3e:16:6f:66 brd ff:ff:ff:ff:ff:ff link-netnsid 0<br>    inet <a href="http://192.168.100.20/24" target="_blank">192.168.100.20/24</a> scope global vm2-eth0<br>       valid_lft forever preferred_lft forever<br>    inet6 fe80::ece4:3eff:fe16:6f66/64 scope link <br>       valid_lft forever preferred_lft forever<br></div><div><br></div><div>In network namespace vm1 on node1, I started iperf3 as the server:</div><div># ip netns exec vm1 iperf3 -s<br></div><div><br></div><div>In network namespace vm2 on node2, I started iperf3 as the client:</div><div># ip netns exec vm2 iperf3 -c 192.168.100.10 -i 2 -t 10<br>Connecting to host 192.168.100.10, port 5201<br>[  4] local 192.168.100.20 port 35258 connected to 192.168.100.10 port 5201<br>[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd<br>[  4]   0.00-2.00   sec   494 MBytes  2.07 Gbits/sec  151    952 KBytes       <br>[  4]   2.00-4.00   sec   582 MBytes  2.44 Gbits/sec    3   1007 KBytes       <br>[  4]   4.00-6.00   sec   639 MBytes  2.68 Gbits/sec    0   1.36 MBytes       <br>[  4]   6.00-8.00   sec   618 MBytes  2.59 Gbits/sec    0   1.64 MBytes       <br>[  4]   8.00-10.00  sec   614 MBytes  2.57 Gbits/sec    0   1.88 MBytes       <br>- - - - - - - - - - - - - - - - - - - - - - - - -<br>[ ID] Interval           Transfer     Bandwidth       Retr<br>[  4]   0.00-10.00  sec  2.88 GBytes  2.47 Gbits/sec  154             sender<br>[  4]   0.00-10.00  sec  2.88 GBytes  2.47 Gbits/sec                  receiver<br><br>iperf Done.<br></div><div><br></div><div>The openvswitch.ko on both nodes is:</div><div># modinfo openvswitch<br>filename:       /lib/modules/3.10.0-957.el7.x86_64/extra/openvswitch/openvswitch.ko<br>alias:          net-pf-16-proto-16-family-ovs_ct_limit<br>alias:          net-pf-16-proto-16-family-ovs_meter<br>alias:          
net-pf-16-proto-16-family-ovs_packet<br>alias:          net-pf-16-proto-16-family-ovs_flow<br>alias:          net-pf-16-proto-16-family-ovs_vport<br>alias:          net-pf-16-proto-16-family-ovs_datapath<br>version:        2.12.0<br>license:        GPL<br>description:    Open vSwitch switching datapath<br>retpoline:      Y<br>rhelversion:    7.6<br>srcversion:     764C8BD051B3182DE71CF29<br>depends:        nf_conntrack,tunnel6,nf_nat,nf_defrag_ipv6,libcrc32c,nf_nat_ipv6,nf_nat_ipv4<br>vermagic:       3.10.0-957.el7.x86_64 SMP mod_unload modversions <br>parm:           udp_port:Destination UDP port (ushort)<br></div><div><br></div><div>Also, I uninstalled the openvswitch-kmod rpm package and used the openvswitch.ko from the Linux kernel:</div><div># modinfo openvswitch<br>filename:       /lib/modules/3.10.0-957.el7.x86_64/kernel/net/openvswitch/openvswitch.ko.xz<br>alias:          net-pf-16-proto-16-family-ovs_packet<br>alias:          net-pf-16-proto-16-family-ovs_flow<br>alias:          net-pf-16-proto-16-family-ovs_vport<br>alias:          net-pf-16-proto-16-family-ovs_datapath<br>license:        GPL<br>description:    Open vSwitch switching datapath<br>retpoline:      Y<br>rhelversion:    7.6<br>srcversion:     6FE05FC439FA9CE7E264684<br>depends:        nf_conntrack,nf_nat,libcrc32c,nf_nat_ipv6,nf_nat_ipv4,nf_defrag_ipv6<br>intree:         Y<br>vermagic:       3.10.0-957.el7.x86_64 SMP mod_unload modversions <br>signer:         CentOS Linux kernel signing key<br>sig_key:        B7:0D:CF:0D:F2:D9:B7:F2:91:59:24:82:49:FD:6F:E8:7B:78:14:27<br>sig_hashalgo:   sha256<br></div><div><br></div><div>And got this bandwidth:</div><div># ip netns exec vm2 iperf3 -c 192.168.100.10 -i 2 -t 10<br>Connecting to host 192.168.100.10, port 5201<br>[  4] local 192.168.100.20 port 35270 connected to 192.168.100.10 port 5201<br>[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd<br>[  4]   0.00-2.00   sec  1.61 GBytes  6.92 Gbits/sec  2793    877 KBytes       <br>[  4]   2.00-4.00   sec 
 1.56 GBytes  6.70 Gbits/sec  7773    907 KBytes       <br>[  4]   4.00-6.00   sec  1.78 GBytes  7.62 Gbits/sec  4387    952 KBytes       <br>[  4]   6.00-8.00   sec  1.66 GBytes  7.11 Gbits/sec  9365    815 KBytes       <br>[  4]   8.00-10.00  sec  1.68 GBytes  7.20 Gbits/sec  2421    554 KBytes       <br>- - - - - - - - - - - - - - - - - - - - - - - - -<br>[ ID] Interval           Transfer     Bandwidth       Retr<br>[  4]   0.00-10.00  sec  8.28 GBytes  7.11 Gbits/sec  26739             sender<br>[  4]   0.00-10.00  sec  8.28 GBytes  7.11 Gbits/sec                  receiver<br><br>iperf Done.<br></div><div><br></div><div>So the performance is still abnormal when using the openvswitch.ko installed from the openvswitch-kmod rpm package.</div><div><br></div><div>Could you show me your build process for the openvswitch-*.rpm packages, or give me a link? Or your process for installing OVS?</div><div><br></div><div>I don&#39;t know where it goes wrong.</div><div><br></div><div>Thanks.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Tonghao Zhang &lt;<a href="mailto:xiangxia.m.yue@gmail.com" target="_blank">xiangxia.m.yue@gmail.com</a>&gt; wrote on Tue, Nov 5, 2019 at 3:59 PM:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Mon, Nov 4, 2019 at 5:14 PM shuangyang qian &lt;<a href="mailto:qsyqian@gmail.com" target="_blank">qsyqian@gmail.com</a>&gt; wrote:<br>
&gt;<br>
&gt; Hi:<br>
&gt; I built rpm packages for OVS and OVN following this document: <a href="http://docs.openvswitch.org/en/latest/intro/install/fedora/" rel="noreferrer" target="_blank">http://docs.openvswitch.org/en/latest/intro/install/fedora/</a> . To use the kernel module from the OVS tree, I configured with the command: ./configure --with-linux=/lib/modules/$(uname -r)/build .<br>
&gt; Then I installed the rpm packages.<br>
&gt; When that finished, I checked the openvswitch.ko:<br>
&gt; # lsmod |  grep openvswitch<br>
&gt; openvswitch           291276  0<br>
&gt; tunnel6                 3115  1 openvswitch<br>
&gt; nf_defrag_ipv6         25957  2 nf_conntrack_ipv6,openvswitch<br>
&gt; nf_nat_ipv6             6459  2 openvswitch,ip6table_nat<br>
&gt; nf_nat_ipv4             6187  2 openvswitch,iptable_nat<br>
&gt; nf_nat                 18080  5 xt_nat,openvswitch,nf_nat_ipv6,nf_nat_masquerade_ipv4,nf_nat_ipv4<br>
&gt; nf_conntrack          102766  10 ip_vs,nf_conntrack_ipv6,openvswitch,nf_conntrack_ipv4,nf_conntrack_netlink,nf_nat_ipv6,nf_nat_masquerade_ipv4,xt_conntrack,nf_nat_ipv4,nf_nat<br>
&gt; libcrc32c               1388  3 ip_vs,openvswitch,xfs<br>
&gt; ipv6                  400397  92 ip_vs,nf_conntrack_ipv6,openvswitch,nf_defrag_ipv6,nf_nat_ipv6,bridge<br>
&gt; # modinfo openvswitch<br>
&gt; filename:       /lib/modules/4.9.18-19080201/extra/openvswitch/openvswitch.ko<br>
&gt; alias:          net-pf-16-proto-16-family-ovs_ct_limit<br>
&gt; alias:          net-pf-16-proto-16-family-ovs_meter<br>
&gt; alias:          net-pf-16-proto-16-family-ovs_packet<br>
&gt; alias:          net-pf-16-proto-16-family-ovs_flow<br>
&gt; alias:          net-pf-16-proto-16-family-ovs_vport<br>
&gt; alias:          net-pf-16-proto-16-family-ovs_datapath<br>
&gt; version:        2.11.2<br>
&gt; license:        GPL<br>
&gt; description:    Open vSwitch switching datapath<br>
&gt; srcversion:     9DDA327F9DD46B9813628A4<br>
&gt; depends:        nf_conntrack,tunnel6,ipv6,nf_nat,nf_defrag_ipv6,libcrc32c,nf_nat_ipv6,nf_nat_ipv4<br>
&gt; vermagic:       4.9.18-19080201 SMP mod_unload modversions<br>
&gt; parm:           udp_port:Destination UDP port (ushort)<br>
&gt; # rpm -qf /lib/modules/4.9.18-19080201/extra/openvswitch/openvswitch.ko<br>
&gt; openvswitch-kmod-2.11.2-1.el7.x86_64<br>
&gt;<br>
&gt; Then I started to build my network setup. I have two nodes, with network namespace vm1 on node1 and network namespace vm2 on node2. vm1&#39;s veth pair veth-vm1 is on node1&#39;s br-int; vm2&#39;s veth pair veth-vm2 is on node2&#39;s br-int. At the logical layer there is one logical switch, test-subnet, with two logical switch ports, node1 and node2, on it. Like this:<br>
&gt; # ovn-nbctl show<br>
&gt; switch 70585c0e-3cd9-459e-9448-3c13f3c0bfa3 (test-subnet)<br>
&gt;     port node2<br>
&gt;         addresses: [&quot;00:00:00:00:00:02 192.168.100.20&quot;]<br>
&gt;     port node1<br>
&gt;         addresses: [&quot;00:00:00:00:00:01 192.168.100.10&quot;]<br>
&gt; on node1:<br>
&gt; # ovs-vsctl show<br>
&gt; 5180f74a-1379-49af-b265-4403bd0d82d8<br>
&gt;     Bridge br-int<br>
&gt;         fail_mode: secure<br>
&gt;         Port &quot;ovn-431b9e-0&quot;<br>
&gt;             Interface &quot;ovn-431b9e-0&quot;<br>
&gt;                 type: geneve<br>
&gt;                 options: {csum=&quot;true&quot;, key=flow, remote_ip=&quot;10.18.124.2&quot;}<br>
&gt;         Port br-int<br>
&gt;             Interface br-int<br>
&gt;                 type: internal<br>
&gt;         Port &quot;veth-vm1&quot;<br>
&gt;             Interface &quot;veth-vm1&quot;<br>
&gt;     ovs_version: &quot;2.11.2&quot;<br>
&gt; # ip netns exec vm1 ip a<br>
&gt; 1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1<br>
&gt;     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00<br>
&gt;     inet <a href="http://127.0.0.1/8" rel="noreferrer" target="_blank">127.0.0.1/8</a> scope host lo<br>
&gt;        valid_lft forever preferred_lft forever<br>
&gt;     inet6 ::1/128 scope host<br>
&gt;        valid_lft forever preferred_lft forever<br>
&gt; 14: ovs-gretap0@NONE: &lt;BROADCAST,MULTICAST&gt; mtu 1462 qdisc noop state DOWN group default qlen 1000<br>
&gt;     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff<br>
&gt; 15: erspan0@NONE: &lt;BROADCAST,MULTICAST&gt; mtu 1450 qdisc noop state DOWN group default qlen 1000<br>
&gt;     link/ether 22:02:1b:08:ec:53 brd ff:ff:ff:ff:ff:ff<br>
&gt; 16: ovs-ip6gre0@NONE: &lt;NOARP&gt; mtu 1448 qdisc noop state DOWN group default qlen 1<br>
&gt;     link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00<br>
&gt; 17: ovs-ip6tnl0@NONE: &lt;NOARP&gt; mtu 1452 qdisc noop state DOWN group default qlen 1<br>
&gt;     link/tunnel6 :: brd ::<br>
&gt; 18: vm1-eth0@if17: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1400 qdisc noqueue state UP group default qlen 1000<br>
&gt;     link/ether 00:00:00:00:00:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0<br>
&gt;     inet <a href="http://192.168.100.10/24" rel="noreferrer" target="_blank">192.168.100.10/24</a> scope global vm1-eth0<br>
&gt;        valid_lft forever preferred_lft forever<br>
&gt;     inet6 fe80::200:ff:fe00:1/64 scope link<br>
&gt;        valid_lft forever preferred_lft forever<br>
&gt;<br>
&gt;<br>
&gt; on node2:<br>
&gt; # ovs-vsctl show<br>
&gt; 011332d0-78bc-47f7-be3c-fab0beb08e28<br>
&gt;     Bridge br-int<br>
&gt;         fail_mode: secure<br>
&gt;         Port br-int<br>
&gt;             Interface br-int<br>
&gt;                 type: internal<br>
&gt;         Port &quot;ovn-c655f8-0&quot;<br>
&gt;             Interface &quot;ovn-c655f8-0&quot;<br>
&gt;                 type: geneve<br>
&gt;                 options: {csum=&quot;true&quot;, key=flow, remote_ip=&quot;10.18.124.1&quot;}<br>
&gt;         Port &quot;veth-vm2&quot;<br>
&gt;             Interface &quot;veth-vm2&quot;<br>
&gt;     ovs_version: &quot;2.11.2&quot;<br>
&gt; # ip netns exec vm2 ip a<br>
&gt; 1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1<br>
&gt;     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00<br>
&gt;     inet <a href="http://127.0.0.1/8" rel="noreferrer" target="_blank">127.0.0.1/8</a> scope host lo<br>
&gt;        valid_lft forever preferred_lft forever<br>
&gt;     inet6 ::1/128 scope host<br>
&gt;        valid_lft forever preferred_lft forever<br>
&gt; 10: ovs-gretap0@NONE: &lt;BROADCAST,MULTICAST&gt; mtu 1462 qdisc noop state DOWN group default qlen 1000<br>
&gt;     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff<br>
&gt; 11: erspan0@NONE: &lt;BROADCAST,MULTICAST&gt; mtu 1450 qdisc noop state DOWN group default qlen 1000<br>
&gt;     link/ether 4a:1d:ca:65:e3:ca brd ff:ff:ff:ff:ff:ff<br>
&gt; 12: ovs-ip6gre0@NONE: &lt;NOARP&gt; mtu 1448 qdisc noop state DOWN group default qlen 1<br>
&gt;     link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00<br>
&gt; 13: ovs-ip6tnl0@NONE: &lt;NOARP&gt; mtu 1452 qdisc noop state DOWN group default qlen 1<br>
&gt;     link/tunnel6 :: brd ::<br>
&gt; 17: vm2-eth0@if16: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1400 qdisc noqueue state UP group default qlen 1000<br>
&gt;     link/ether 00:00:00:00:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0<br>
&gt;     inet <a href="http://192.168.100.20/24" rel="noreferrer" target="_blank">192.168.100.20/24</a> scope global vm2-eth0<br>
&gt;        valid_lft forever preferred_lft forever<br>
&gt;     inet6 fe80::200:ff:fe00:2/64 scope link<br>
&gt;        valid_lft forever preferred_lft forever<br>
&gt;<br>
&gt; Then I started to use iperf to check the network performance. By the way, I use the geneve protocol between the two nodes; ovn-sbctl show:<br>
&gt; # ovn-sbctl show<br>
&gt; Chassis &quot;c655f877-b7ed-4bb5-a047-23521426d541&quot;<br>
&gt;     hostname: &quot;<a href="http://node1.com" rel="noreferrer" target="_blank">node1.com</a>&quot;<br>
&gt;     Encap geneve<br>
&gt;         ip: &quot;10.18.124.1&quot;<br>
&gt;         options: {csum=&quot;true&quot;}<br>
&gt;     Port_Binding &quot;node1&quot;<br>
&gt; Chassis &quot;431b9efb-b464-42a1-a6dd-7fc6e0176137&quot;<br>
&gt;     hostname: &quot;<a href="http://node2.com" rel="noreferrer" target="_blank">node2.com</a>&quot;<br>
&gt;     Encap geneve<br>
&gt;         ip: &quot;10.18.124.2&quot;<br>
&gt;         options: {csum=&quot;true&quot;}<br>
&gt;     Port_Binding &quot;node2&quot;<br>
&gt;<br>
&gt; On node1, in network namespace vm1, I started iperf3 as the server:<br>
&gt; # ip netns exec vm1 iperf3 -s<br>
&gt; On node2, in network namespace vm2, I started iperf3 as the client:<br>
&gt; # ip netns exec vm2 iperf3 -c 192.168.100.10<br>
&gt; Connecting to host 192.168.100.10, port 5201<br>
&gt; [  4] local 192.168.100.20 port 40708 connected to 192.168.100.10 port 5201<br>
&gt; [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd<br>
&gt; [  4]   0.00-1.00   sec   431 MBytes  3.61 Gbits/sec   34    253 KBytes<br>
&gt; [  4]   1.00-2.00   sec   426 MBytes  3.58 Gbits/sec    0    253 KBytes<br>
&gt; [  4]   2.00-3.00   sec   426 MBytes  3.57 Gbits/sec    0    253 KBytes<br>
&gt; [  4]   3.00-4.00   sec   401 MBytes  3.37 Gbits/sec    0    255 KBytes<br>
&gt; [  4]   4.00-5.00   sec   429 MBytes  3.60 Gbits/sec    0    255 KBytes<br>
&gt; [  4]   5.00-6.00   sec   413 MBytes  3.46 Gbits/sec    0    253 KBytes<br>
&gt; [  4]   6.00-7.00   sec   409 MBytes  3.43 Gbits/sec    0    250 KBytes<br>
&gt; [  4]   7.00-8.00   sec   427 MBytes  3.58 Gbits/sec    0    253 KBytes<br>
&gt; [  4]   8.00-9.00   sec   417 MBytes  3.49 Gbits/sec    0    250 KBytes<br>
&gt; [  4]   9.00-10.00  sec   385 MBytes  3.23 Gbits/sec    0   5.27 KBytes<br>
&gt; - - - - - - - - - - - - - - - - - - - - - - - - -<br>
&gt; [ ID] Interval           Transfer     Bandwidth       Retr<br>
&gt; [  4]   0.00-10.00  sec  4.07 GBytes  3.49 Gbits/sec   34             sender<br>
&gt; [  4]   0.00-10.00  sec  4.07 GBytes  3.49 Gbits/sec                  receiver<br>
&gt;<br>
&gt; As you see, the bandwidth is only 3.xx Gbits/sec, but my physical eth1&#39;s bandwidth is 10000Mb/s:<br>
Hi, I ran OVS on node1 and node2 using a geneve tunnel, but I didn&#39;t<br>
reproduce your issue.<br>
<br>
Create the geneve tunnel:<br>
# ovs-vsctl add-br br-int<br>
# ovs-vsctl add-port br-int gnv0 -- set Interface gnv0 type=geneve<br>
options:csum=true options:key=100 options:remote_ip=1.1.1.200<br>
<br>
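The two commands above set up only the bridge and the tunnel port; the namespace side of this reproduction is not shown. A minimal sketch of that part, assuming __tap00/__tap01 are a veth pair (the ns100 name, the 2.2.2.100/24 address, and the 00:54:00:00:00:01 MAC are taken from the ifconfig output below; the veth approach itself is an assumption):<br>

```shell
# Attach a test namespace to br-int (sketch; run as root on the tunnel node).
ip netns add ns100
ip link add __tap00 type veth peer name __tap01   # assumed to be a veth pair
ip link set __tap00 netns ns100
ip netns exec ns100 ip link set __tap00 address 00:54:00:00:00:01
ip netns exec ns100 ip addr add 2.2.2.100/24 dev __tap00
ip netns exec ns100 ip link set __tap00 up
ip link set __tap01 up
ovs-vsctl add-port br-int __tap01
```
<br>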
# ovs-vsctl show<br>
9393485c-c64c-490e-884e-418ff5d90251<br>
    Bridge br-int<br>
        Port gnv0<br>
            Interface gnv0<br>
                type: geneve<br>
                options: {csum=&quot;true&quot;, key=&quot;100&quot;, remote_ip=&quot;1.1.1.200&quot;}<br>
        Port __tap01<br>
            Interface __tap01<br>
        Port br-int<br>
            Interface br-int<br>
                type: internal<br>
<br>
# ip netns exec ns100 ifconfig<br>
__tap00: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1450<br>
        inet 2.2.2.100  netmask 255.255.255.0  broadcast 0.0.0.0<br>
        inet6 fe80::254:ff:fe00:1  prefixlen 64  scopeid 0x20&lt;link&gt;<br>
        ether 00:54:00:00:00:01  txqueuelen 1000  (Ethernet)<br>
        RX packets 605000  bytes 39951500 (38.1 MiB)<br>
        RX errors 0  dropped 0  overruns 0  frame 0<br>
        TX packets 819864  bytes 31247862764 (29.1 GiB)<br>
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0<br>
<br>
# ip netns exec ns100 iperf -c 2.2.2.200 -i 2 -t 10<br>
------------------------------------------------------------<br>
Client connecting to 2.2.2.200, TCP port 5001<br>
TCP window size:  482 KByte (default)<br>
------------------------------------------------------------<br>
[  3] local 2.2.2.100 port 41428 connected with 2.2.2.200 port 5001<br>
[ ID] Interval       Transfer     Bandwidth<br>
[  3]  0.0- 2.0 sec  1.85 GBytes  7.93 Gbits/sec<br>
[  3]  2.0- 4.0 sec  1.94 GBytes  8.33 Gbits/sec<br>
<br>
# modinfo openvswitch<br>
filename:       /lib/modules/3.10.0-957.1.3.el7.x86_64/extra/openvswitch.ko<br>
<br>
So, can you use the commands shown above to reproduce your<br>
issue? (The kernel version is different.)<br>
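For completeness, the peer node presumably mirrors the setup above. A sketch under stated assumptions: only 1.1.1.200 appears above, so 1.1.1.100 for the first node&#39;s underlay address and ns200 for the peer namespace are illustrative placeholders, not values from this thread:<br>

```shell
# On the peer node (1.1.1.200): mirror the bridge/tunnel setup (sketch).
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int gnv0 -- set Interface gnv0 type=geneve \
    options:csum=true options:key=100 options:remote_ip=1.1.1.100
# Attach a namespace holding 2.2.2.200/24 the same way, then run the server:
ip netns exec ns200 iperf -s
# and from the first node, the client command shown above:
# ip netns exec ns100 iperf -c 2.2.2.200 -i 2 -t 10
```
<br>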
&gt; # ethtool eth1<br>
&gt; Settings for eth1:<br>
&gt;         Supported ports: [ FIBRE ]<br>
&gt;         Supported link modes:   10000baseT/Full<br>
&gt;         Supported pause frame use: Symmetric<br>
&gt;         Supports auto-negotiation: No<br>
&gt;         Supported FEC modes: Not reported<br>
&gt;         Advertised link modes:  10000baseT/Full<br>
&gt;         Advertised pause frame use: Symmetric<br>
&gt;         Advertised auto-negotiation: No<br>
&gt;         Advertised FEC modes: Not reported<br>
&gt;         Speed: 10000Mb/s<br>
&gt;         Duplex: Full<br>
&gt;         Port: Other<br>
&gt;         PHYAD: 0<br>
&gt;         Transceiver: external<br>
&gt;         Auto-negotiation: off<br>
&gt;         Supports Wake-on: d<br>
&gt;         Wake-on: d<br>
&gt;         Current message level: 0x00000007 (7)<br>
&gt;                                drv probe link<br>
&gt;         Link detected: yes<br>
&gt;<br>
&gt; When I uninstalled the openvswitch-kmod package and used the openvswitch.ko from the upstream Linux kernel, like this:<br>
&gt; # lsmod | grep openvswitch<br>
&gt; openvswitch            95805  0<br>
&gt; nf_defrag_ipv6         25957  2 nf_conntrack_ipv6,openvswitch<br>
&gt; nf_nat_ipv6             6459  2 openvswitch,ip6table_nat<br>
&gt; nf_nat_ipv4             6187  2 openvswitch,iptable_nat<br>
&gt; nf_nat                 18080  5 xt_nat,openvswitch,nf_nat_ipv6,nf_nat_masquerade_ipv4,nf_nat_ipv4<br>
&gt; nf_conntrack          102766  10 ip_vs,nf_conntrack_ipv6,openvswitch,nf_conntrack_ipv4,nf_conntrack_netlink,nf_nat_ipv6,nf_nat_masquerade_ipv4,xt_conntrack,nf_nat_ipv4,nf_nat<br>
&gt; libcrc32c               1388  3 ip_vs,openvswitch,xfs<br>
&gt; # modinfo openvswitch<br>
&gt; filename:       /lib/modules/4.9.18-19080201/kernel/net/openvswitch/openvswitch.ko<br>
&gt; alias:          net-pf-16-proto-16-family-ovs_packet<br>
&gt; alias:          net-pf-16-proto-16-family-ovs_flow<br>
&gt; alias:          net-pf-16-proto-16-family-ovs_vport<br>
&gt; alias:          net-pf-16-proto-16-family-ovs_datapath<br>
&gt; license:        GPL<br>
&gt; description:    Open vSwitch switching datapath<br>
&gt; srcversion:     915B872C96FB1D38D107742<br>
&gt; depends:        nf_conntrack,nf_nat,libcrc32c,nf_nat_ipv6,nf_nat_ipv4,nf_defrag_ipv6<br>
&gt; intree:         Y<br>
&gt; vermagic:       4.9.18-19080201 SMP mod_unload modversions<br>
&gt;<br>
&gt; I ran the same test as above and got the following result:<br>
&gt; # ip netns exec vm2 iperf3 -c 192.168.100.10<br>
&gt; Connecting to host 192.168.100.10, port 5201<br>
&gt; [  4] local 192.168.100.20 port 40652 connected to 192.168.100.10 port 5201<br>
&gt; [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd<br>
&gt; [  4]   0.00-1.00   sec  1000 MBytes  8.39 Gbits/sec    4    290 KBytes<br>
&gt; [  4]   1.00-2.00   sec   994 MBytes  8.34 Gbits/sec    0    292 KBytes<br>
&gt; [  4]   2.00-3.00   sec  1002 MBytes  8.41 Gbits/sec    0    287 KBytes<br>
&gt; [  4]   3.00-4.00   sec   994 MBytes  8.34 Gbits/sec    0    292 KBytes<br>
&gt; [  4]   4.00-5.00   sec   992 MBytes  8.32 Gbits/sec    0    298 KBytes<br>
&gt; [  4]   5.00-6.00   sec   994 MBytes  8.34 Gbits/sec    0    305 KBytes<br>
&gt; [  4]   6.00-7.00   sec   989 MBytes  8.29 Gbits/sec    0    313 KBytes<br>
&gt; [  4]   7.00-8.00   sec   992 MBytes  8.32 Gbits/sec    0    290 KBytes<br>
&gt; [  4]   8.00-9.00   sec   996 MBytes  8.36 Gbits/sec    0    303 KBytes<br>
&gt; [  4]   9.00-10.00  sec   955 MBytes  8.01 Gbits/sec    0   5.27 KBytes<br>
&gt; - - - - - - - - - - - - - - - - - - - - - - - - -<br>
&gt; [ ID] Interval           Transfer     Bandwidth       Retr<br>
&gt; [  4]   0.00-10.00  sec  9.67 GBytes  8.31 Gbits/sec    4             sender<br>
&gt; [  4]   0.00-10.00  sec  9.67 GBytes  8.31 Gbits/sec                  receiver<br>
&gt;<br>
&gt; So I can&#39;t understand why the performance is so poor when I use the kernel module built from the OVS tree.<br>
&gt;<br>
&gt; Can anyone give me some advice on what is wrong?<br>
&gt;<br>
&gt; Thanks!<br>
&gt;<br>
&gt; _______________________________________________<br>
&gt; discuss mailing list<br>
&gt; <a href="mailto:discuss@openvswitch.org" target="_blank">discuss@openvswitch.org</a><br>
&gt; <a href="https://mail.openvswitch.org/mailman/listinfo/ovs-discuss" rel="noreferrer" target="_blank">https://mail.openvswitch.org/mailman/listinfo/ovs-discuss</a><br>
</blockquote></div>
</div></div>