[ovs-discuss] Using openvswitch with mpls and tcp

Juan Luis de la Cruz juanlucruz at gmail.com
Tue Jul 4 14:40:25 UTC 2017


Hi,

I'm having issues using Open vSwitch with MPLS. In this scenario we use 
MPLS labeling, with Open vSwitch as the software switches. We are using 
two server nodes running OVS 2.6.0 with the kernel modules loaded, and 
two hosts.

They are directly connected through 1 Gigabit Ethernet links, with 
around 1 ms of RTT (less than 3 ms for the first packet, measured with 
ping). I'm using iperf3 for the tests. The first test shows the 
throughput reached without MPLS labeling, and the second test is with 
MPLS labeling. The MTU is adjusted so there is no fragmentation. I also 
tried adjusting the congestion window and other parameters such as the 
TCP congestion-control algorithm.
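For reference, the tuning described above was done with commands along these lines (interface names and values here are placeholders, not the exact ones from my setup):

```shell
# Lower the MTU so MPLS-labeled frames fit on a 1500-byte path without
# fragmentation (assuming one 4-byte MPLS label stack entry):
ip link set dev eth1 mtu 1496

# Change the TCP congestion-control algorithm (e.g. cubic vs. reno):
sysctl -w net.ipv4.tcp_congestion_control=cubic

# Run the throughput test from host 192.168.20.1 against the server:
iperf3 -c 192.168.20.2 -p 5201
```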

Tue Jul  4 12:21:09 CEST 2017
Connecting to host 192.168.20.2, port 5201
[  4] local 192.168.20.1 port 43526 connected to 192.168.20.2 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   112 MBytes   943 Mbits/sec    0    450 KBytes
[  4]   1.00-2.00   sec   112 MBytes   937 Mbits/sec    0    516 KBytes
[  4]   2.00-3.00   sec   112 MBytes   938 Mbits/sec    0    571 KBytes
[  4]   3.00-4.00   sec   112 MBytes   937 Mbits/sec    0    625 KBytes
[  4]   4.00-5.00   sec   112 MBytes   943 Mbits/sec    0    633 KBytes
[  4]   5.00-6.00   sec   111 MBytes   933 Mbits/sec    0    633 KBytes
[  4]   6.00-7.00   sec   111 MBytes   933 Mbits/sec    0    664 KBytes
[  4]   7.00-8.00   sec   112 MBytes   944 Mbits/sec    0    664 KBytes
[  4]   8.00-9.00   sec   111 MBytes   933 Mbits/sec    0    697 KBytes
[  4]   9.00-9.16   sec  18.8 MBytes   977 Mbits/sec    0    697 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-9.16   sec  1.00 GBytes   939 Mbits/sec    0   sender
[  4]   0.00-9.16   sec  1022 MBytes   935 Mbits/sec        receiver

iperf Done.

<----------->

Tue Jul  4 12:40:10 CEST 2017
Connecting to host 192.168.20.2, port 5201
[  4] local 192.168.20.1 port 43530 connected to 192.168.20.2 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   203 KBytes  1.66 Mbits/sec    57   2.82 KBytes
[  4]   1.00-2.00   sec   398 KBytes  3.26 Mbits/sec   124   2.82 KBytes
[  4]   2.00-3.00   sec   400 KBytes  3.28 Mbits/sec   124   2.82 KBytes
[  4]   3.00-4.00   sec   319 KBytes  2.61 Mbits/sec   124   2.82 KBytes
[  4]   4.00-5.00   sec   398 KBytes  3.26 Mbits/sec   126   2.82 KBytes
[  4]   5.00-6.00   sec   395 KBytes  3.24 Mbits/sec   124   2.82 KBytes
[  4]   6.00-7.00   sec   398 KBytes  3.26 Mbits/sec   126   2.82 KBytes
[  4]   7.00-8.00   sec   324 KBytes  2.66 Mbits/sec   124   2.82 KBytes
[  4]   8.00-9.00   sec   398 KBytes  3.26 Mbits/sec   124   2.82 KBytes
[  4]   9.00-10.00  sec   400 KBytes  3.28 Mbits/sec   126   2.82 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  3.55 MBytes  2.98 Mbits/sec  1179  sender
[  4]   0.00-10.00  sec  3.42 MBytes  2.87 Mbits/sec        receiver

I know there are known issues when combining MPLS and OVS, but some 
facts are odd in this case:

  * If I use UDP instead of TCP, there is one out-of-order packet but
    the rest arrive fine, so I guess the packets are taking the kernel
    datapath.
  * Nine packets are lost at the start of the TCP transmission, and
    more packets are lost periodically after that. Looking at the
    tcpdump traces, those packets go "missing" at the first node: they
    are never captured at the second hop.
  * As shown above, TCP performance without MPLS labeling is very good.
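The loss could be narrowed down with checks along these lines (interface names here are placeholders, and the offload angle is only a suspicion on my part, not a confirmed cause):

```shell
# Capture only MPLS-labeled packets on each hop to see where they vanish:
tcpdump -i eth1 -n mpls

# Dump the installed kernel-datapath flows to confirm the labeled
# traffic is handled in the fast path rather than punted to userspace:
ovs-appctl dpctl/dump-flows

# Check whether TSO/GSO are enabled; large offloaded TCP segments are a
# possible suspect when only TCP (not UDP) suffers after an MPLS push:
ethtool -k eth1 | egrep 'tcp-segmentation|generic-segmentation'
ethtool -K eth1 tso off gso off   # try disabling offloads as a test
```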

Any idea why this is happening, or how I can solve it?