[ovs-discuss] Packet drops with high rate of Packet_In

Anton Matsiuk anton.matsiuk at gmail.com
Wed Nov 27 15:59:02 UTC 2013


Dear Ben,

thank you for your response. I am continuing to test OVS and have run into
another problem:

I increased the load to 500 UDP flows with 10 consecutive packets each, all
sent within a 0.2 sec interval. miss_send_len is set to 65535 and no rules
are installed in OVS, so it should generate 5000 Packet_In messages to the
controller, each carrying the full packet (180 bytes in these tests).
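For reference, roughly this kind of generator reproduces the load (only a
sketch: the destination address, port and payload size below are placeholders,
not the exact values from my setup; flows are distinguished by UDP source
port):

#!/usr/bin/env python3
# Sketch: 500 UDP flows (one source port each), 10 packets per flow,
# all sent back-to-back within roughly 0.2 s.
import socket
import time

DST = ("10.0.0.2", 9999)     # placeholder: host behind the OVS ingress port
FLOWS = 500                  # number of distinct UDP flows
PKTS_PER_FLOW = 10
PAYLOAD = b"\x00" * 138      # placeholder payload; frame size grows with headers

socks = []
for i in range(FLOWS):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", 20000 + i))  # unique source port -> unique flow
    socks.append(s)

start = time.time()
for _ in range(PKTS_PER_FLOW):
    for s in socks:
        s.sendto(PAYLOAD, DST)
print("sent %d packets in %.3f s" % (FLOWS * PKTS_PER_FLOW, time.time() - start))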

1. ovs-dpctl -s show br0  -  shows 5000 packets on the ingress interface, all
of them counted as missed (or partly as hits, because they belong to the same
flows) and passed to user space (no drops or lost packets)
2. ovs-ofctl dump-ports, dump-tables  -  show the same: 5000 packets
processed
3. ovs-appctl bridge/dump-flows br0  -  shows that 5000 packets were sent to
the controller:
table_id=254, duration=11s, priority=0, n_packets=5000, n_bytes=810000,
priority=0, reg0=0x1, actions=controller(reason=no_match)
4. ovs-vswitchd.log shows only the hash table shrinking and no other warnings
or info messages
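(As a cross-check for step 3, something like the following sketch can sum the
n_packets counters reported by bridge/dump-flows and compare them with the
expected 5000 -- only a rough illustration, not part of my test scripts:)

#!/usr/bin/env python3
# Sum n_packets from "ovs-appctl bridge/dump-flows br0" and compare with
# the number of packets expected to hit the controller action.
import re
import subprocess

EXPECTED = 500 * 10   # 500 flows x 10 packets per flow

out = subprocess.run(["ovs-appctl", "bridge/dump-flows", "br0"],
                     capture_output=True, text=True, check=True).stdout
counted = sum(int(m) for m in re.findall(r"n_packets=(\d+)", out))
print("bridge/dump-flows accounts for %d packets (expected %d)" % (counted, EXPECTED))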

However, when I run tcpdump on the loopback between OVS and the controller, I
see that only part of the Packet_In messages appear there (around 3800 out of
5000). I calculate this from the number of bytes transferred to the
controller, because several Packet_In messages are coalesced into single TCP
segments (with an MTU of up to 16384); no drops are shown in the kernel. The
same number of Packet_In messages (~3800) are parsed and received by the
controller.
Statistics on the lo interface show the same: the same number of packets and
bytes, and no drops (there is no other traffic on this loopback except
hellos).
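The counting works roughly like the sketch below: walk the reassembled
OpenFlow byte stream message by message and count those of type
OFPT_PACKET_IN (type 10 in OpenFlow 1.0). It assumes the switch-to-controller
TCP payload has already been extracted to a file (e.g. with tcpflow or
Wireshark's "Follow TCP Stream"); the file name is a placeholder.

#!/usr/bin/env python3
# Count Packet_In messages in a raw OpenFlow 1.0 byte stream.
import struct
import sys

OFPT_PACKET_IN = 10          # OpenFlow 1.0 message type

path = sys.argv[1] if len(sys.argv) > 1 else "switch_to_controller.raw"
data = open(path, "rb").read()

offset = 0
packet_ins = 0
while offset + 8 <= len(data):
    # ofp_header: version (1B), type (1B), length (2B), xid (4B), big-endian
    version, msg_type, length, _xid = struct.unpack_from("!BBHI", data, offset)
    if length < 8:           # malformed or truncated stream, stop walking
        break
    if msg_type == OFPT_PACKET_IN:
        packet_ins += 1
    offset += length
print("found %d Packet_In messages in %d bytes" % (packet_ins, len(data)))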

If I decrease the number of flows (or packets per flow) so that the total
number of packets is < 3500, then all of them arrive at the controller.

Is it possible that OVS becomes overloaded by a high number of incoming
packets and does not generate Packet_In messages for all of them, without
logging it?

On 25 November 2013 19:08, Ben Pfaff <blp at nicira.com> wrote:

> On Mon, Nov 25, 2013 at 03:15:07PM +0100, Anton Matsiuk wrote:
> > As Open vSwitch stores only 256 Packet_Ins in its buffer
> > (OFPT_FEATURES_REPLY: n_buffers = 256), after 256 packets it sets
> > buffer_id to 0xffffffff. The controller did not check this value as
> > invalid and placed it into its responses in flow_mods and packet_outs,
> > which caused packet drops after approximately 300 Packet_Ins in a burst.
> >
> > BTW, setting miss_send_len = 65535 forces the switch to send the complete
> > Packet_In to the controller, but the switch still stores packets in the
> > buffer and adds a valid buffer_id for the first 255 of them. Is it
> > possible to turn off buffering in the switch (and to set buffer_id = -1
> > for all packets) using OpenFlow 1.0? (In OpenFlow 1.2+ it should be
> > possible by setting OFPCML_NO_BUFFER, as I understand it.)
>
> I think that we recently changed OVS to not buffer any packets sent
> with a miss_send_len of 65535.  Have you tried with OVS 2.0 or later?
>
> > Is it possible to vary the size of buffer for Packet_In in Open vSwitch?
>
> No, this is the first request I've heard for that feature.  (I've
> often thought about just removing packet buffering entirely, it's
> always been optional in OpenFlow and I'm not convinced it is useful.)
>
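For reference, the controller-side check discussed in the quoted exchange
looks roughly like this (illustrative only; the send_packet_out() helper is
hypothetical, real controllers expose their own API for this):

# 0xffffffff (OFP_NO_BUFFER) means the switch did not buffer the packet,
# so the packet_out must carry the raw frame instead of a buffer reference.
OFP_NO_BUFFER = 0xFFFFFFFF

def handle_packet_in(buffer_id, raw_frame, out_port, send_packet_out):
    if buffer_id == OFP_NO_BUFFER:
        # Unbuffered Packet_In: resend the full frame ourselves.
        send_packet_out(buffer_id=OFP_NO_BUFFER, data=raw_frame, out_port=out_port)
    else:
        # Buffered on the switch: reference the buffer, no payload needed.
        send_packet_out(buffer_id=buffer_id, data=b"", out_port=out_port)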

