[ovs-discuss] Network performance is abnormal when using openvswitch.ko built from the OVS tree
shuangyang qian
qsyqian at gmail.com
Mon Nov 4 09:12:56 UTC 2019
Hi:
I built RPM packages for OVS and OVN following this document:
http://docs.openvswitch.org/en/latest/intro/install/fedora/ . To use the
kernel module from the OVS tree, I configured with: ./configure
--with-linux=/lib/modules/$(uname -r)/build .
Then I installed the RPM packages.
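For reference, the build steps were roughly the following (a sketch based
on that Fedora install document; the rpm-fedora make targets come from
there, and the exact rpmbuild options may differ from what I actually ran):

# ./boot.sh
# ./configure --with-linux=/lib/modules/$(uname -r)/build
# make rpm-fedora RPMBUILD_OPT="--without check"
# make rpm-fedora-kmod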
After installation, I checked that the loaded openvswitch.ko is the one from the OVS tree:
# lsmod | grep openvswitch
openvswitch 291276 0
tunnel6 3115 1 openvswitch
nf_defrag_ipv6 25957 2 nf_conntrack_ipv6,openvswitch
nf_nat_ipv6 6459 2 openvswitch,ip6table_nat
nf_nat_ipv4 6187 2 openvswitch,iptable_nat
nf_nat 18080 5
xt_nat,openvswitch,nf_nat_ipv6,nf_nat_masquerade_ipv4,nf_nat_ipv4
nf_conntrack 102766 10
ip_vs,nf_conntrack_ipv6,openvswitch,nf_conntrack_ipv4,nf_conntrack_netlink,nf_nat_ipv6,nf_nat_masquerade_ipv4,xt_conntrack,nf_nat_ipv4,nf_nat
libcrc32c 1388 3 ip_vs,openvswitch,xfs
ipv6 400397 92
ip_vs,nf_conntrack_ipv6,openvswitch,nf_defrag_ipv6,nf_nat_ipv6,bridge
# modinfo openvswitch
filename:
/lib/modules/4.9.18-19080201/extra/openvswitch/openvswitch.ko
alias: net-pf-16-proto-16-family-ovs_ct_limit
alias: net-pf-16-proto-16-family-ovs_meter
alias: net-pf-16-proto-16-family-ovs_packet
alias: net-pf-16-proto-16-family-ovs_flow
alias: net-pf-16-proto-16-family-ovs_vport
alias: net-pf-16-proto-16-family-ovs_datapath
version: 2.11.2
license: GPL
description: Open vSwitch switching datapath
srcversion: 9DDA327F9DD46B9813628A4
depends:
nf_conntrack,tunnel6,ipv6,nf_nat,nf_defrag_ipv6,libcrc32c,nf_nat_ipv6,nf_nat_ipv4
vermagic: 4.9.18-19080201 SMP mod_unload modversions
parm: udp_port:Destination UDP port (ushort)
# rpm -qf /lib/modules/4.9.18-19080201/extra/openvswitch/openvswitch.ko
openvswitch-kmod-2.11.2-1.el7.x86_64
Then I set up my network topology. I have two nodes, with network
namespace vm1 on node1 and network namespace vm2 on node2. vm1's veth
peer veth-vm1 is attached to node1's br-int, and vm2's veth peer veth-vm2
is attached to node2's br-int. At the logical layer there is one logical
switch, test-subnet, with two logical switch ports, node1 and node2, on it:
# ovn-nbctl show
switch 70585c0e-3cd9-459e-9448-3c13f3c0bfa3 (test-subnet)
port node2
addresses: ["00:00:00:00:00:02 192.168.100.20"]
port node1
addresses: ["00:00:00:00:00:01 192.168.100.10"]
On node1:
# ovs-vsctl show
5180f74a-1379-49af-b265-4403bd0d82d8
Bridge br-int
fail_mode: secure
Port "ovn-431b9e-0"
Interface "ovn-431b9e-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.18.124.2"}
Port br-int
Interface br-int
type: internal
Port "veth-vm1"
Interface "veth-vm1"
ovs_version: "2.11.2"
# ip netns exec vm1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
14: ovs-gretap0 at NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN
group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
15: erspan0 at NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN
group default qlen 1000
link/ether 22:02:1b:08:ec:53 brd ff:ff:ff:ff:ff:ff
16: ovs-ip6gre0 at NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default
qlen 1
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
17: ovs-ip6tnl0 at NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default
qlen 1
link/tunnel6 :: brd ::
18: vm1-eth0 at if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue
state UP group default qlen 1000
link/ether 00:00:00:00:00:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.100.10/24 scope global vm1-eth0
valid_lft forever preferred_lft forever
inet6 fe80::200:ff:fe00:1/64 scope link
valid_lft forever preferred_lft forever
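The namespace and veth wiring on node1 were created roughly like this (a
sketch, not the exact command history; node2 is the same with vm2,
veth-vm2, iface-id node2 and 192.168.100.20):

# ip netns add vm1
# ip link add veth-vm1 type veth peer name vm1-eth0
# ip link set vm1-eth0 netns vm1
# ip netns exec vm1 ip link set vm1-eth0 address 00:00:00:00:00:01
# ip netns exec vm1 ip addr add 192.168.100.10/24 dev vm1-eth0
# ip netns exec vm1 ip link set vm1-eth0 mtu 1400 up
# ip netns exec vm1 ip link set lo up
# ip link set veth-vm1 up
# ovs-vsctl add-port br-int veth-vm1
# ovs-vsctl set Interface veth-vm1 external_ids:iface-id=node1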
On node2:
# ovs-vsctl show
011332d0-78bc-47f7-be3c-fab0beb08e28
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "ovn-c655f8-0"
Interface "ovn-c655f8-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.18.124.1"}
Port "veth-vm2"
Interface "veth-vm2"
ovs_version: "2.11.2"
# ip netns exec vm2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
10: ovs-gretap0 at NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN
group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
11: erspan0 at NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN
group default qlen 1000
link/ether 4a:1d:ca:65:e3:ca brd ff:ff:ff:ff:ff:ff
12: ovs-ip6gre0 at NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default
qlen 1
link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
13: ovs-ip6tnl0 at NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default
qlen 1
link/tunnel6 :: brd ::
17: vm2-eth0 at if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue
state UP group default qlen 1000
link/ether 00:00:00:00:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.100.20/24 scope global vm2-eth0
valid_lft forever preferred_lft forever
inet6 fe80::200:ff:fe00:2/64 scope link
valid_lft forever preferred_lft forever
Then I started using iperf3 to check network performance. By the way, I
use the Geneve protocol between the two nodes; ovn-sbctl show gives:
# ovn-sbctl show
Chassis "c655f877-b7ed-4bb5-a047-23521426d541"
hostname: "node1.com"
Encap geneve
ip: "10.18.124.1"
options: {csum="true"}
Port_Binding "node1"
Chassis "431b9efb-b464-42a1-a6dd-7fc6e0176137"
hostname: "node2.com"
Encap geneve
ip: "10.18.124.2"
options: {csum="true"}
Port_Binding "node2"
On node1, in network namespace vm1, I start iperf3 as the server:
# ip netns exec vm1 iperf3 -s
On node2, in network namespace vm2, I start iperf3 as the client:
# ip netns exec vm2 iperf3 -c 192.168.100.10
Connecting to host 192.168.100.10, port 5201
[ 4] local 192.168.100.20 port 40708 connected to 192.168.100.10 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 431 MBytes 3.61 Gbits/sec 34 253 KBytes
[ 4] 1.00-2.00 sec 426 MBytes 3.58 Gbits/sec 0 253 KBytes
[ 4] 2.00-3.00 sec 426 MBytes 3.57 Gbits/sec 0 253 KBytes
[ 4] 3.00-4.00 sec 401 MBytes 3.37 Gbits/sec 0 255 KBytes
[ 4] 4.00-5.00 sec 429 MBytes 3.60 Gbits/sec 0 255 KBytes
[ 4] 5.00-6.00 sec 413 MBytes 3.46 Gbits/sec 0 253 KBytes
[ 4] 6.00-7.00 sec 409 MBytes 3.43 Gbits/sec 0 250 KBytes
[ 4] 7.00-8.00 sec 427 MBytes 3.58 Gbits/sec 0 253 KBytes
[ 4] 8.00-9.00 sec 417 MBytes 3.49 Gbits/sec 0 250 KBytes
[ 4] 9.00-10.00 sec 385 MBytes 3.23 Gbits/sec 0 5.27 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 4.07 GBytes 3.49 Gbits/sec 34 sender
[ 4] 0.00-10.00 sec 4.07 GBytes 3.49 Gbits/sec receiver
As you can see, the bandwidth is only about 3.5 Gbits/sec, but my physical NIC eth1 is 10 Gb/s:
# ethtool eth1
Settings for eth1:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: 10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: external
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes
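I have not yet compared the offload behaviour under the two modules, but
in case it matters, this is how I would inspect the checksum/segmentation
offload flags (genev_sys_6081 is the Geneve backing device that the
upstream kernel module creates; the OVS-tree module may expose something
different, so that device name is an assumption):

# ethtool -k eth1 | grep -E 'tx-checksumming|segmentation|receive-offload'
# ethtool -k genev_sys_6081 | grep -E 'segmentation|receive-offload'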
When I uninstall the openvswitch-kmod package and use the openvswitch.ko
from the upstream Linux kernel instead, like this:
# lsmod | grep openvswitch
openvswitch 95805 0
nf_defrag_ipv6 25957 2 nf_conntrack_ipv6,openvswitch
nf_nat_ipv6 6459 2 openvswitch,ip6table_nat
nf_nat_ipv4 6187 2 openvswitch,iptable_nat
nf_nat 18080 5
xt_nat,openvswitch,nf_nat_ipv6,nf_nat_masquerade_ipv4,nf_nat_ipv4
nf_conntrack 102766 10
ip_vs,nf_conntrack_ipv6,openvswitch,nf_conntrack_ipv4,nf_conntrack_netlink,nf_nat_ipv6,nf_nat_masquerade_ipv4,xt_conntrack,nf_nat_ipv4,nf_nat
libcrc32c 1388 3 ip_vs,openvswitch,xfs
# modinfo openvswitch
filename:
/lib/modules/4.9.18-19080201/kernel/net/openvswitch/openvswitch.ko
alias: net-pf-16-proto-16-family-ovs_packet
alias: net-pf-16-proto-16-family-ovs_flow
alias: net-pf-16-proto-16-family-ovs_vport
alias: net-pf-16-proto-16-family-ovs_datapath
license: GPL
description: Open vSwitch switching datapath
srcversion: 915B872C96FB1D38D107742
depends:
nf_conntrack,nf_nat,libcrc32c,nf_nat_ipv6,nf_nat_ipv4,nf_defrag_ipv6
intree: Y
vermagic: 4.9.18-19080201 SMP mod_unload modversions
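For reference, the switch back to the in-tree module was done roughly like
this (a sketch; "openvswitch" is the service name from the OVS RPM on this
system):

# systemctl stop openvswitch
# rpm -e openvswitch-kmod
# modprobe -r openvswitch
# depmod -a
# modprobe openvswitch
# systemctl start openvswitch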
With this module loaded, I ran the same test as above and got the following result:
# ip netns exec vm2 iperf3 -c 192.168.100.10
Connecting to host 192.168.100.10, port 5201
[ 4] local 192.168.100.20 port 40652 connected to 192.168.100.10 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1000 MBytes 8.39 Gbits/sec 4 290 KBytes
[ 4] 1.00-2.00 sec 994 MBytes 8.34 Gbits/sec 0 292 KBytes
[ 4] 2.00-3.00 sec 1002 MBytes 8.41 Gbits/sec 0 287 KBytes
[ 4] 3.00-4.00 sec 994 MBytes 8.34 Gbits/sec 0 292 KBytes
[ 4] 4.00-5.00 sec 992 MBytes 8.32 Gbits/sec 0 298 KBytes
[ 4] 5.00-6.00 sec 994 MBytes 8.34 Gbits/sec 0 305 KBytes
[ 4] 6.00-7.00 sec 989 MBytes 8.29 Gbits/sec 0 313 KBytes
[ 4] 7.00-8.00 sec 992 MBytes 8.32 Gbits/sec 0 290 KBytes
[ 4] 8.00-9.00 sec 996 MBytes 8.36 Gbits/sec 0 303 KBytes
[ 4] 9.00-10.00 sec 955 MBytes 8.01 Gbits/sec 0 5.27 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 9.67 GBytes 8.31 Gbits/sec 4 sender
[ 4] 0.00-10.00 sec 9.67 GBytes 8.31 Gbits/sec receiver
So I can't understand why the performance is so much worse when I use the
kernel module built from the OVS tree.
Can anyone give me some advice on where this goes wrong?
Thanks!