[ovs-discuss] OVS benchmarking results
Arno Toell
lists@toell.net
Thu May 8 12:51:57 UTC 2014
Hello,
there have been a lot of discussions about OVS performance compared to
Linux bridges, and I'm about to deploy OVS at large scale in our cloud,
so I decided to run some benchmarks of my own to see how OVS compares
to what Linux delivers as is. I thought you might be interested in
these numbers, hence I'd like to share them with you.
Lab setup:
I've connected two Dell PowerEdge R320 servers (Intel Xeon E5-2407 0 @
2.20GHz, 32 GiB RAM) back to back. There is no switch in between; the
servers are directly cross-connected. The network adapter is a two-port
Intel X520-T2 card (2 x 10G SFP+). I'm using Debian Wheezy (kernel
3.2.0-4-amd64 #1 SMP Debian 3.2.57-3) as the host system. For OVS I'm
using the Debian package as well (1.4.2+git20120612-9.1~deb7u1). I did
not bother with different kernel and OVS versions, as the tested version
is what we would consider for production, and the achieved performance
seems not substantially worse than vanilla bridges.
Network configuration:
I ran all tests from server 1 (tengig1). The peer was configured with
Linux LACP bonding (hashing mode layer3+4). This setup did not change
across experiments. I did not do bidirectional connection tests; thus,
server 2 mostly just passively accepts incoming connections for the
benchmarks.
server 2: tengig2 (passive side)
10.10.10.2/24 @ bond0
10.10.100.2/24 @ bond0.100 (VLAN 100)
server 1: tengig1 (active side)
untagged traffic: 10.10.10.1/24
tagged traffic: 10.10.100.1/24
interfaces vary depending on the setup
I enabled jumbo frames (MTU 9000) on all interfaces and verified that
packets are not fragmented.
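For completeness, here is a minimal sketch of the kind of ifenslave
configuration used on the passive side; the exact file is not part of my
notes, so treat the interface names and option values as assumptions:

# /etc/network/interfaces on tengig2 (sketch)
auto bond0
iface bond0 inet static
    address 10.10.10.2
    netmask 255.255.255.0
    mtu 9000
    bond-slaves eth2 eth3
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-lacp-rate slow

auto bond0.100
iface bond0.100 inet static
    address 10.10.100.2
    netmask 255.255.255.0
    mtu 9000
    vlan-raw-device bond0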
Benchmarking Methodology:
I'm using nuttcp to generate constant-bitrate UDP traffic. Nothing else
is running on the link.
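For reference, the client invocation used throughout is
nuttcp -l8000 -u -w4m -R20G -i1 -N4 <peer>. As I understand the flags:
-l8000 sets an 8000-byte payload (so each datagram fits into a single
jumbo frame), -u selects UDP, -w4m a 4 MB window, -R20G a 20 Gbps rate
limit (effectively unthrottled for a 2 x 10G bond), -i1 one-second
interval reports, and -N4 four parallel streams, so the layer3+4 hash
has flows to spread across both links. The passive side just runs the
nuttcp server:

root@tengig2:~# nuttcp -S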
I've increased the send and receive windows on both sides (as I did
tests with TCP as well):
root@tengig1:~# echo 4194304 > /proc/sys/net/core/wmem_max
root@tengig1:~# echo 4194304 > /proc/sys/net/core/rmem_max
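These settings do not survive a reboot; to make them permanent one could
add the equivalents to /etc/sysctl.conf and reload:

net.core.wmem_max = 4194304
net.core.rmem_max = 4194304

root@tengig1:~# sysctl -p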
---------------------------------------------------------------------------------------
BASELINE
---------------------------------------------------------------------------------------
Linux bonding driver, no bridge at all
root@tengig1:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 3
Number of ports: 2
Actor Key: 33
Partner Key: 33
Partner Mac Address: 90:e2:ba:69:bd:3c
Slave Interface: eth2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:69:b6:c8
Aggregator ID: 3
Slave queue ID: 0
Slave Interface: eth3
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:69:b6:c9
Aggregator ID: 3
Slave queue ID: 0
root@tengig1:~# nuttcp -l8000 -u -w4m -R20G -i1 -N4 10.10.10.2
1476.5778 MB / 1.00 sec = 12386.2834 Mbps 0 / 193538 ~drop/pkt 0.00 ~%loss
1476.5625 MB / 1.00 sec = 12386.1678 Mbps 0 / 193536 ~drop/pkt 0.00 ~%loss
1476.3794 MB / 1.00 sec = 12384.9662 Mbps 0 / 193512 ~drop/pkt 0.00 ~%loss
1476.2115 MB / 1.00 sec = 12383.1743 Mbps 0 / 193490 ~drop/pkt 0.00 ~%loss
1476.4252 MB / 1.00 sec = 12385.3378 Mbps 0 / 193518 ~drop/pkt 0.00 ~%loss
1476.2344 MB / 1.00 sec = 12383.3662 Mbps 0 / 193493 ~drop/pkt 0.00 ~%loss
1476.3336 MB / 1.00 sec = 12384.5822 Mbps 0 / 193506 ~drop/pkt 0.00 ~%loss
1476.2497 MB / 1.00 sec = 12383.4942 Mbps 0 / 193495 ~drop/pkt 0.00 ~%loss
1476.1734 MB / 1.00 sec = 12383.2134 Mbps 0 / 193485 ~drop/pkt 0.00 ~%loss
1476.1581 MB / 1.00 sec = 12382.7263 Mbps 0 / 193483 ~drop/pkt 0.00 ~%loss
7.5760 MB / 1.00 sec = 63.5517 Mbps 0 / 993 ~drop/pkt 0.00 ~%loss
14770.8817 MB / 10.00 sec = 12390.3865 Mbps 73 %TX 63 %RX 0 / 1936049 drop/pkt 0.00 %loss
n.b.: The traffic is nicely split across both trunk links, yet I do not
reach wire speed. I did not investigate why; it might be related to
driver settings or concurrency issues. However, this experiment serves
as the baseline for the remaining experiments.
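One way to watch the split across the slaves is the per-interface byte
counters (a sketch; not necessarily how I checked at the time):

root@tengig1:~# ip -s link show eth2
root@tengig1:~# ip -s link show eth3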
---------------------------------------------------------------------------------------
LINUX VLAN
---------------------------------------------------------------------------------------
Linux bonding driver, no bridge at all. However, use tagged frames.
root@tengig1:~# ip link add link bond0 name bond0.100 type vlan id 100
root@tengig1:~# ip addr add 10.10.100.1/24 dev bond0.100
root@tengig1:~# ip link set bond0.100 up
root@tengig1:~# ping -M do -s 8000 10.10.100.2 -c1
PING 10.10.100.2 (10.10.100.2) 8000(8028) bytes of data.
8008 bytes from 10.10.100.2: icmp_req=1 ttl=64 time=0.378 ms
--- 10.10.100.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms
root@tengig1:~# nuttcp -l8000 -u -w4m -R20G -i1 -N4 10.10.100.2
1424.0494 MB / 1.00 sec = 11945.1470 Mbps 9604 / 196257 ~drop/pkt 4.89 ~%loss
1424.0417 MB / 1.00 sec = 11946.2297 Mbps 7833 / 194485 ~drop/pkt 4.03 ~%loss
1424.1028 MB / 1.00 sec = 11945.7980 Mbps 12335 / 198995 ~drop/pkt 6.20 ~%loss
1424.0723 MB / 1.00 sec = 11946.4380 Mbps 7681 / 194337 ~drop/pkt 3.95 ~%loss
1423.7061 MB / 1.00 sec = 11942.4582 Mbps 12371 / 198979 ~drop/pkt 6.22 ~%loss
1425.3082 MB / 1.00 sec = 11956.7944 Mbps 6061 / 192879 ~drop/pkt 3.14 ~%loss
1423.9655 MB / 1.00 sec = 11944.6580 Mbps 13831 / 200473 ~drop/pkt 6.90 ~%loss
1425.0336 MB / 1.00 sec = 11954.4664 Mbps 6065 / 192847 ~drop/pkt 3.14 ~%loss
1425.0031 MB / 1.00 sec = 11953.3975 Mbps 13667 / 200445 ~drop/pkt 6.82 ~%loss
1425.6897 MB / 1.00 sec = 11959.9467 Mbps 7569 / 194437 ~drop/pkt 3.89 ~%loss
10.3607 MB / 1.00 sec = 86.9096 Mbps -1348 / 10 ~drop/pkt -13480.00000 ~%loss
14255.3406 MB / 10.00 sec = 11957.9606 Mbps 99 %TX 74 %RX 99368 / 1967844 drop/pkt 5.05 %loss
---------------------------------------------------------------------------------------
LINUX BRIDGE
---------------------------------------------------------------------------------------
Linux bonding driver, plus the in-kernel Linux bridge (the veth pair
gives the bridge a local port to host the IP address). Changes relative
to the previous experiment:
ip link add veth0 type veth peer name veth1
brctl addbr br0
brctl addif br0 veth0
brctl addif br0 bond0
ip link set veth0 up
ip link set mtu 9000 dev veth0
ip link set mtu 9000 dev veth1
ip link set veth1 up
ip addr del 10.10.10.1/24 dev bond0
ip addr add 10.10.10.1/24 dev veth1
ip link set br0 up
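To double-check the topology before measuring (a sketch; output
omitted):

brctl show br0
ip -d link show veth0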
root@tengig1:~# ping -M do -s 8000 10.10.10.2 -c1
PING 10.10.10.2 (10.10.10.2) 8000(8028) bytes of data.
8008 bytes from 10.10.10.2: icmp_req=1 ttl=64 time=0.376 ms
--- 10.10.10.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms
root@tengig1:~# nuttcp -l8000 -u -w4m -R20G -i1 -N4 10.10.10.2
1250.5188 MB / 1.00 sec = 10489.9756 Mbps 0 / 163908 ~drop/pkt 0.00 ~%loss
1250.7706 MB / 1.00 sec = 10492.2870 Mbps 0 / 163941 ~drop/pkt 0.00 ~%loss
1253.0441 MB / 1.00 sec = 10511.2960 Mbps 0 / 164239 ~drop/pkt 0.00 ~%loss
1251.8082 MB / 1.00 sec = 10500.9385 Mbps 0 / 164077 ~drop/pkt 0.00 ~%loss
1250.5417 MB / 1.00 sec = 10490.3040 Mbps 0 / 163911 ~drop/pkt 0.00 ~%loss
1251.2970 MB / 1.00 sec = 10496.6610 Mbps 0 / 164010 ~drop/pkt 0.00 ~%loss
1252.7924 MB / 1.00 sec = 10509.1630 Mbps 0 / 164206 ~drop/pkt 0.00 ~%loss
1251.7242 MB / 1.00 sec = 10500.2240 Mbps 0 / 164066 ~drop/pkt 0.00 ~%loss
1251.9379 MB / 1.00 sec = 10502.0160 Mbps 0 / 164094 ~drop/pkt 0.00 ~%loss
1251.7548 MB / 1.00 sec = 10500.4905 Mbps 0 / 164070 ~drop/pkt 0.00 ~%loss
0.0381 MB / 1.00 sec = 0.3200 Mbps 0 / 5 ~drop/pkt 0.00 ~%loss
12516.2277 MB / 10.00 sec = 10499.1135 Mbps 99 %TX 53 %RX 0 / 1640527 drop/pkt 0.00 %loss
---------------------------------------------------------------------------------------
LINUX BRIDGE + Linux VLAN
---------------------------------------------------------------------------------------
Linux bonding driver plus the in-kernel Linux bridge; outgoing frames
are now tagged. Changes relative to the previous experiment:
root@tengig1:~# brctl delif br0 bond0
root@tengig1:~# brctl addif br0 bond0.100
root@tengig1:~# ip addr del 10.10.100.1/24 dev bond0.100
root@tengig1:~# ip addr add 10.10.100.1/24 dev veth1
root@tengig1:~# ping -M do -s 8000 10.10.100.2 -c1
PING 10.10.100.2 (10.10.100.2) 8000(8028) bytes of data.
8008 bytes from 10.10.100.2: icmp_req=1 ttl=64 time=0.406 ms
--- 10.10.100.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms
root@tengig1:~# nuttcp -l8000 -u -w4m -R20G -i1 -N4 10.10.100.2
1234.6954 MB / 1.00 sec = 10357.2517 Mbps 0 / 161834 ~drop/pkt 0.00 ~%loss
1232.8644 MB / 1.00 sec = 10341.8092 Mbps 0 / 161594 ~drop/pkt 0.00 ~%loss
1233.3221 MB / 1.00 sec = 10346.1353 Mbps 0 / 161654 ~drop/pkt 0.00 ~%loss
1233.8028 MB / 1.00 sec = 10349.6189 Mbps 0 / 161717 ~drop/pkt 0.00 ~%loss
1233.2764 MB / 1.00 sec = 10345.7410 Mbps 0 / 161648 ~drop/pkt 0.00 ~%loss
1231.2317 MB / 1.00 sec = 10328.0515 Mbps 0 / 161380 ~drop/pkt 0.00 ~%loss
1230.9265 MB / 1.00 sec = 10326.0182 Mbps 0 / 161340 ~drop/pkt 0.00 ~%loss
1230.6824 MB / 1.00 sec = 10323.4539 Mbps 0 / 161308 ~drop/pkt 0.00 ~%loss
1233.2764 MB / 1.00 sec = 10345.7410 Mbps 0 / 161648 ~drop/pkt 0.00 ~%loss
1233.0704 MB / 1.00 sec = 10343.5888 Mbps 0 / 161621 ~drop/pkt 0.00 ~%loss
0.0381 MB / 1.00 sec = 0.3200 Mbps 0 / 5 ~drop/pkt 0.00 ~%loss
12327.1866 MB / 10.00 sec = 10340.5713 Mbps 99 %TX 53 %RX 0 / 1615749 drop/pkt 0.00 %loss
---------------------------------------------------------------------------------------
OVS BRIDGE + LINUX BOND
---------------------------------------------------------------------------------------
Use OVS as the bridge (without the brcompat module), but keep the Linux
bonding driver. No VLAN tagging.
root@tengig1:~# ip link set br0 down
root@tengig1:~# brctl delbr br0
root@tengig1:~# ovs-vsctl add-br br0
root@tengig1:~# ovs-vsctl add-port br0 bond0
root@tengig1:~# ovs-vsctl add-port br0 veth0
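A quick sanity check that the bridge and its ports came up as intended
(a sketch; output omitted):

root@tengig1:~# ovs-vsctl show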
root@tengig1:~# ping -M do -s 8000 10.10.10.2 -c1
PING 10.10.10.2 (10.10.10.2) 8000(8028) bytes of data.
8008 bytes from 10.10.10.2: icmp_req=1 ttl=64 time=0.547 ms
--- 10.10.10.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms
root@tengig1:~# nuttcp -l8000 -u -w4m -R20G -i1 -N4 10.10.10.2
1464.8056 MB / 1.00 sec = 12287.0779 Mbps 0 / 191995 ~drop/pkt 0.00 ~%loss
1462.2574 MB / 1.00 sec = 12266.8437 Mbps 0 / 191661 ~drop/pkt 0.00 ~%loss
1462.3032 MB / 1.00 sec = 12266.2219 Mbps 0 / 191667 ~drop/pkt 0.00 ~%loss
1462.8525 MB / 1.00 sec = 12271.7501 Mbps 0 / 191739 ~drop/pkt 0.00 ~%loss
1461.2808 MB / 1.00 sec = 12257.6585 Mbps 0 / 191533 ~drop/pkt 0.00 ~%loss
1455.8945 MB / 1.00 sec = 12213.3799 Mbps 0 / 190827 ~drop/pkt 0.00 ~%loss
1455.6351 MB / 1.00 sec = 12210.2758 Mbps 0 / 190793 ~drop/pkt 0.00 ~%loss
1456.1310 MB / 1.00 sec = 12215.4128 Mbps 0 / 190858 ~drop/pkt 0.00 ~%loss
1456.5887 MB / 1.00 sec = 12218.2755 Mbps 0 / 190918 ~drop/pkt 0.00 ~%loss
1455.5359 MB / 1.00 sec = 12210.3840 Mbps 0 / 190780 ~drop/pkt 0.00 ~%loss
0.0763 MB / 1.00 sec = 0.6400 Mbps 0 / 10 ~drop/pkt 0.00 ~%loss
14593.3609 MB / 10.00 sec = 12241.4789 Mbps 99 %TX 62 %RX 0 / 1912781 drop/pkt 0.00 %loss
---------------------------------------------------------------------------------------
OVS BRIDGE + OVS BOND
---------------------------------------------------------------------------------------
Same setup as before, but instead of the Linux bonding driver use OVS'
on-board LACP support. Note that only one link is used; the other one
stays purely passive. I am not sure if/how I can convince OVS to do L4
hashing and use both links simultaneously - I thought I had configured
exactly that.
root@tengig1:~# ovs-vsctl del-port bond0
root@tengig1:~# ifdown bond0
root@tengig1:~# ovs-vsctl add-bond br0 bond0 eth2 eth3 lacp=active
root@tengig1:~# ovs-vsctl set Port bond0 bond_mode=balance-tcp
root@tengig1:~# ip link set eth2 up
root@tengig1:~# ip link set eth3 up
root@tengig1:~# ip link set mtu 9000 dev eth2
root@tengig1:~# ip link set mtu 9000 dev eth3
root@tengig1:~# ping -M do -s 8000 10.10.10.2 -c1
PING 10.10.10.2 (10.10.10.2) 8000(8028) bytes of data.
8008 bytes from 10.10.10.2: icmp_req=1 ttl=64 time=0.616 ms
--- 10.10.10.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms
root@tengig1:~# ovs-appctl bond/show bond0
bond_mode: balance-tcp
bond-hash-algorithm: balance-tcp
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
next rebalance: 7321 ms
lacp_negotiated: true
slave eth3: enabled
may_enable: true
slave eth2: enabled
active slave
may_enable: true
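The "next rebalance: 7321 ms" above suggests flows are only
redistributed periodically. One knob that might spread the streams
sooner (a sketch - I have not verified this on 1.4) is lowering the
rebalance interval:

root@tengig1:~# ovs-vsctl set Port bond0 other_config:bond-rebalance-interval=1000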
root@tengig1:~# nuttcp -l8000 -u -w4m -R20G -i1 -N4 10.10.10.2
1180.9921 MB / 1.00 sec = 9906.5729 Mbps 7912 / 162707 ~drop/pkt 4.86 ~%loss
1181.9611 MB / 1.00 sec = 9915.2460 Mbps 8912 / 163834 ~drop/pkt 5.44 ~%loss
1182.7850 MB / 1.00 sec = 9921.7613 Mbps 10108 / 165138 ~drop/pkt 6.12 ~%loss
1182.7469 MB / 1.00 sec = 9921.7587 Mbps 9580 / 164605 ~drop/pkt 5.82 ~%loss
1181.3583 MB / 1.00 sec = 9909.7736 Mbps 9608 / 164451 ~drop/pkt 5.84 ~%loss
1182.5333 MB / 1.00 sec = 9919.9866 Mbps 9612 / 164609 ~drop/pkt 5.84 ~%loss
1183.2047 MB / 1.00 sec = 9925.3010 Mbps 9684 / 164769 ~drop/pkt 5.88 ~%loss
1181.8161 MB / 1.00 sec = 9913.9407 Mbps 8960 / 163863 ~drop/pkt 5.47 ~%loss
1180.7327 MB / 1.00 sec = 9904.5158 Mbps 9668 / 164429 ~drop/pkt 5.88 ~%loss
1181.2363 MB / 1.00 sec = 9909.0965 Mbps 8928 / 163755 ~drop/pkt 5.45 ~%loss
7.9956 MB / 1.00 sec = 67.0703 Mbps 80 / 1128 ~drop/pkt 7.09 ~%loss
11827.3621 MB / 10.00 sec = 9921.2634 Mbps 99 %TX 53 %RX 94036 / 1644272 drop/pkt 5.72 %loss
---------------------------------------------------------------------------------------
OVS BRIDGE + OVS BOND + OVS VLAN
---------------------------------------------------------------------------------------
All of it: the OVS bridge, the OVS bonding driver and OVS VLAN tagging.
I tag the veth port as an access port.
root@tengig1:~# ovs-vsctl del-port veth0
root@tengig1:~# ovs-vsctl add-port br0 veth0 tag=100
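To confirm the tag took effect (a sketch; output omitted):

root@tengig1:~# ovs-vsctl list port veth0 | grep tag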
root@tengig1:~# ping -M do -s 8000 10.10.100.2 -c1
PING 10.10.100.2 (10.10.100.2) 8000(8028) bytes of data.
8008 bytes from 10.10.100.2: icmp_req=1 ttl=64 time=0.550 ms
--- 10.10.100.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms
root@tengig1:~# nuttcp -l8000 -u -w4m -R20G -i1 -N4 10.10.10.2
1180.7480 MB / 1.00 sec = 9904.7429 Mbps 8897 / 163660 ~drop/pkt 5.44 ~%loss
1181.1752 MB / 1.00 sec = 9908.4457 Mbps 9427 / 164246 ~drop/pkt 5.74 ~%loss
1181.5643 MB / 1.00 sec = 9911.6701 Mbps 8821 / 163691 ~drop/pkt 5.39 ~%loss
1181.7474 MB / 1.00 sec = 9913.2160 Mbps 9775 / 164669 ~drop/pkt 5.94 ~%loss
1180.4886 MB / 1.00 sec = 9902.6461 Mbps 8517 / 163246 ~drop/pkt 5.22 ~%loss
1180.1910 MB / 1.00 sec = 9899.7937 Mbps 8835 / 163525 ~drop/pkt 5.40 ~%loss
1181.8924 MB / 1.00 sec = 9914.7988 Mbps 9709 / 164622 ~drop/pkt 5.90 ~%loss
1181.2973 MB / 1.00 sec = 9909.0635 Mbps 8371 / 163206 ~drop/pkt 5.13 ~%loss
1181.0074 MB / 1.00 sec = 9907.3746 Mbps 9672 / 164469 ~drop/pkt 5.88 ~%loss
1182.1518 MB / 1.00 sec = 9916.2808 Mbps 7652 / 162599 ~drop/pkt 4.71 ~%loss
7.9651 MB / 1.00 sec = 66.8165 Mbps 60 / 1104 ~drop/pkt 5.43 ~%loss
0.0000 MB / 1.00 sec = 0.0000 Mbps 0 / 0 ~drop/pkt 0.00 ~%loss
11820.2286 MB / 10.00 sec = 9915.1357 Mbps 99 %TX 49 %RX 90806 / 1640107 drop/pkt 5.54 %loss
Let me know soon if you want me to verify other experiments or
different setups. The lab setup is still up and running, but I'm likely
going to shut it down soon.
Overall I'm very happy with the OVS performance delivered. Based on
other threads on this list I feared it could be much worse, but that
does not seem to be the case - at least for OVS 1.4.
--
Arno Töll
GnuPG Key-ID: 0x9D80F36D