[ovs-discuss] Replacing IPsec-GRE tunnel ports
aatteka at nicira.com
Wed Nov 23 22:26:56 UTC 2016
On Wed, Nov 23, 2016 at 12:29 AM, Bolesław Tokarski
<boleslaw.tokarski at gmail.com> wrote:
> I would love this to be a problem of the testing tool. It does not seem to
> be the case, though. I ran iperf3 in the default mode, which is TCP, with
> the same command on OVS with VLAN tagging and without it - it achieved
> 870 Mbps without VLANs.
> Ansis, your wireshark suggestion pointed in the right direction - I had
> already tested outgoing packets with tcpdump, but those were ESP, so they
> did not say much. Now I captured the traffic flowing through the internal
> interface, and indeed there is something weird going on.
1. In the bad case, did you see the ESP packets getting fragmented? The
PCAP file you attached only contains the iperf packets, so I can't tell
from it.
2. Also, you did not explicitly mention whether the packet capture was
gathered on the sender (10.100.0.3) or the receiver (10.100.0.4).
However, I would be inclined to guess that you ran tcpdump on the
receiver (10.100.0.4), because of the latency pattern in the TCP
three-way handshake.
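For point #1, a quick on-the-wire check might look like the sketch below
(the interface name eth0 is an assumption - it would need to run against
the sender's physical interface):

```shell
#!/bin/sh
# BPF filter matching fragmented ESP: either the more-fragments bit is
# set, or the fragment offset is non-zero. The IP protocol field (50)
# is copied into every fragment's header, so "esp" still matches the
# trailing fragments.
FRAG_ESP_FILTER='esp and ((ip[6] & 0x20 != 0) or (ip[6:2] & 0x1fff != 0))'

# Needs root and a real interface, hence the guard; stop after 5 s.
if [ "$(id -u)" = 0 ] && command -v tcpdump >/dev/null 2>&1; then
    timeout 5 tcpdump -ni eth0 "$FRAG_ESP_FILTER" || true
fi
```

If this prints anything while iperf runs, the ESP packets are leaving
the sender fragmented, which would fit the MTU sensitivity you saw.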
First of all, before troubleshooting any iperf TCP performance issues I
would recommend running several iperf UDP tests with the -b flag,
because TCP flow control introduces a lot of variables that I can only
speculate about. Run the UDP test a couple of times, try to find the
"optimal" target bandwidth at which drops are still close to 0%, and
pay attention to packet reordering.
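A UDP sweep along those lines could look like this (a sketch; the
receiver address is taken from the thread, and it assumes an iperf3
server is already running there via "iperf3 -s"):

```shell
#!/bin/sh
# Step the offered UDP rate and watch the server-side report for the
# loss percentage and the "datagrams received out-of-order" line; the
# highest rate that still shows ~0% loss is the "optimal" bandwidth.
for rate in 100M 300M 500M 700M 900M; do
    echo "== offered rate: $rate =="
    # -u selects UDP, -b sets the target bandwidth, -t the duration.
    command -v iperf3 >/dev/null 2>&1 \
        && iperf3 -c 10.100.0.4 -u -b "$rate" -t 5 || true
done
```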
Now, getting back to the TCP packet capture that you sent over to me,
what I see in Wireshark's "TCP stream graph analysis tool" is:
1. TCP data segments are received in bursts that are consistently
separated by ~0.25 s dormant intervals. Since the packet capture was
gathered on the receiver and not on the sender, this could mean one of
two things:
1.1. Either the TCP ACKs from receiver to sender were delayed for one
reason or another, so TCP flow control kicked in and slowed down the
data send rate on the sender; OR
1.2. The TCP data segments were delayed from sender to receiver in this
0.25 s burst fashion. Since the receiver did not receive any data, it
could not acknowledge it and tell the sender to send at a higher rate.
This is the more likely scenario (see point #2).
2. There is almost always one TCP segment from the next burst of TCP
data segments that appears prematurely in the previous burst. This
makes me think that the sender actually did send out more data, except
it was queued somewhere along the way (see point #1.2).
3. There are a bunch of out-of-order TCP segments within each "burst"
as well. I would be interested to find out whether a UDP test confirms
the same packet reordering.
4. Can you monitor "ovs-dpctl show" stats in a tight loop and see
whether upcalls to ovs-vswitchd increase in a 0.25-second pattern as
well? This would prove or disprove whether OVS is queuing packets and
introducing this delay.
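For point #4, something along these lines could do the tight-loop
polling (a sketch; the "lookups: hit:... missed:... lost:..." line
format is an assumption based on typical ovs-dpctl output, so adjust
the grep/sed to what your version actually prints):

```shell
#!/bin/sh
# Pull the "missed" counter (flow-table misses that became upcalls to
# ovs-vswitchd) out of a line such as
#   "lookups: hit:674 missed:52 lost:0".
parse_missed() {
    printf '%s\n' "$1" | sed -n 's/.*missed:\([0-9][0-9]*\).*/\1/p'
}

# Poll the datapath stats ~10 times per second for 5 seconds, with a
# timestamp; a counter climbing in ~0.25 s steps would point at packets
# being punted to userspace in bursts.
if command -v ovs-dpctl >/dev/null 2>&1; then
    i=0
    while [ "$i" -lt 50 ]; do
        printf '%s ' "$(date +%T.%N)"
        ovs-dpctl show | grep lookups || true
        sleep 0.1
        i=$((i + 1))
    done
fi
```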
> The untagged traffic was a smooth flow of TCP sends and ACKs, the tagged
> traffic is more interesting. I get a significant number of losses,
> retransmissions, TCP out-of-order notes, there's even an RST near the end.
> Packets are marked as 'don't fragment'. MTU on the interface is 1394,
> raising it to 1420 makes the traffic flatten out to 0, lowering it does not
> seem to make a difference.
> I am attaching the packet dump from the capped communication; the non-VLAN
> comms produced a 200 MB packet capture during the same 2 s, so not that sexy
> to transfer over email.
> I am trying to bring up two Ubuntu 15.04 VMs; this version has OVS 2.3.2,
> but an older kernel, 3.19. I'll try to see in which other environments I can
> or cannot reproduce the problem. I'm afraid I'm not capable enough to
> dismantle the kernel and see which code path one kind of traffic goes
> through and the other does not.
> Best regards,
> Bolesław Tokarski