[ovs-dev] Packet Loss when forwarding to GRE and VXNET

Jesse Gross jesse at nicira.com
Fri Dec 30 06:20:29 UTC 2011


On Tue, Dec 27, 2011 at 2:27 AM, Simon Horman <horms at verge.net.au> wrote:
> On Tue, Dec 27, 2011 at 10:29:55AM +0900, Simon Horman wrote:
>> On Mon, Dec 26, 2011 at 03:11:40PM -0500, Jesse Gross wrote:
>> > On Fri, Dec 16, 2011 at 3:47 AM, Simon Horman <horms at verge.net.au> wrote:
>> > > Hi,
>> > >
>> > > I have observed high rates of packet loss when using OVS to "forward"
>> > > packets to a GRE or VXNET port. This packet loss does not occur to
>> > > anywhere near the same extent when OVS is used to "forward" packets to
>> > > a port that does not use tunnelling.
>> >
>> > Can you tell specifically where the packets are getting dropped?  When
>> > encapsulating 64-byte packets the tunnel overhead is nearly as large
>> > as the payload, so that could account for additional stress.
>>
>> Hi Jesse,
>>
>> Good point with regards to overhead, I had not considered that.
>>
>> I haven't isolated where the packet loss is occurring, other
>> than that the packets are received by the machine running OVS
>> and their encapsulated version is not transmitted. I'll see
>> if I can narrow things down.
>
> Hi Jesse,
>
> I looked into this a little further and it seems that most, if not
> all, of the dropped packets are accounted as "dropped" in the qdisc on
> the outgoing ethernet interface. (Perhaps that was obvious?)

It's more or less what I expected but I don't have a good answer as to
why it's happening.  The source packets are UDP from pktgen?
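
For reference, the per-qdisc drop counters Simon mentions can be read
with tc; the interface name here is just a placeholder:

    # Show qdisc statistics for the outgoing interface; the "dropped"
    # counter accounts for packets discarded by the queueing discipline.
    tc -s qdisc show dev eth0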

> I tried tweaking the qdisc, removing the default mq qdisc and replacing
> it with a pfifo with various limits between 1000 and 4096 (sketched
> below), but this did not seem to have a noticeable impact beyond the
> noise in the results.
>
> I also replotted the results previously posted. Rather than the rate
> of packet loss I have plotted the packet rate. This shows that there
> seems to be a limit of a little over 600,000 packets/s.
>
> The outgoing ethernet link is a 10G link (I verified that this is the
> negotiated rate), so 600,000 packets/s should not be a problem
> for the link, even if the packets are expanded to 128 bytes.

Out of curiosity, what happens if you send 128 byte packets that
aren't tunnelled?  I'm assuming that all the settings are the defaults
and there's nothing particularly unusual about the setup?  The result
of the routing table lookup for the destination of the tunnel is a
physical Ethernet interface and it's the same as the one used for
unencapsulated traffic?
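
One way to confirm that, with a placeholder address standing in for the
tunnel remote endpoint:

    # Show the route (and hence the egress interface) that the
    # encapsulated packets will take to the tunnel endpoint.
    ip route get 192.0.2.1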

The rate that you're measuring is the packets that are actually
transmitted on the wire?  Any interesting queue stats from the NIC?
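
Something along these lines should expose them; the exact counter names
vary by driver:

    # Driver/hardware statistics, including per-queue TX counters.
    ethtool -S eth0
    # Software interface counters, including TX drops.
    ip -s link show dev eth0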


