[ovs-discuss] OVSK flow keeps minimal rate

Ben Pfaff blp at ovn.org
Mon Oct 9 21:57:19 UTC 2017


On Mon, Oct 09, 2017 at 11:25:05PM +0200, Mita Cokic wrote:
> I was executing the following test case: on an OVSK instance limited to a
> 30Mbps link, data was pushed as two TCP flows:
> - Flow A: 200Kbps
> - Flow B: 300Mbps
> 
> I have observed that Flow A's bandwidth drops by around 10% but still
> manages to keep a constant rate of about 180Kbps.
> 
> When this test case is repeated with Flow A's bandwidth increased to 30Mbps,
> both flows start to fluctuate, which is the expected behavior of the
> congestion avoidance algorithm (cubic, the default).
> 
> Has anyone experienced something like this? Does OVSK try to keep flows
> alive by guaranteeing some minimal throughput?

I don't know what OVSK is.  The following answer is for OVS.

Did you read the documentation and the FAQ?  Some relevant excerpts
below.

       Policing is a simple form of quality-of-service that simply drops
       packets received in excess of the configured rate. Due to its
       simplicity, policing is usually less accurate and less effective than
       egress QoS (which is configured using the QoS and Queue tables).

       Policing is currently implemented on Linux and OVS with DPDK. Both
       implementations use a simple "token bucket" approach:

              - The size of the bucket corresponds to ingress_policing_burst.
                Initially the bucket is full.

              - Whenever a packet is received, its size (converted to tokens)
                is compared to the number of tokens currently in the bucket.
                If the required number of tokens are available, they are
                removed and the packet is forwarded. Otherwise, the packet is
                dropped.

              - Whenever it is not full, the bucket is refilled with tokens
                at the rate specified by ingress_policing_rate.
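The steps above can be sketched as a small simulation. This is only an
illustration of the algorithm the manual describes; the class and parameter
names here are invented and do not correspond to OVS's actual datapath code:

```python
# Minimal token-bucket policer sketch (illustrative only, not OVS code).
# bucket_size plays the role of ingress_policing_burst (in tokens) and
# fill_rate plays the role of ingress_policing_rate (tokens per second).

class TokenBucket:
    def __init__(self, bucket_size, fill_rate):
        self.bucket_size = bucket_size
        self.fill_rate = fill_rate
        self.tokens = bucket_size   # initially the bucket is full
        self.last_time = 0.0

    def admit(self, now, packet_tokens):
        """Return True if the packet is forwarded, False if it is dropped."""
        # Refill at fill_rate, but never beyond the bucket size.
        elapsed = now - self.last_time
        self.tokens = min(self.bucket_size,
                          self.tokens + elapsed * self.fill_rate)
        self.last_time = now
        if packet_tokens <= self.tokens:
            self.tokens -= packet_tokens
            return True
        return False

tb = TokenBucket(bucket_size=3, fill_rate=1.0)
tb.admit(0.0, 2)   # forwarded: the bucket starts full
tb.admit(0.0, 2)   # dropped: only 1 token remains
tb.admit(2.0, 2)   # forwarded: 2 seconds of refill restored enough tokens
```

Note how a flow whose rate stays below fill_rate is essentially untouched,
while anything faster is clipped to roughly the refill rate, which matches
the behavior observed for the 200Kbps flow above.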

       Policing interacts badly with some network protocols, and especially
       with fragmented IP packets. Suppose that there is enough network
       activity to keep the bucket nearly empty all the time. Then this token
       bucket algorithm will forward a single packet every so often, with the
       period depending on packet size and on the configured rate. All of the
       fragments of an IP packet are normally transmitted back-to-back, as a
       group. In such a situation, therefore, only one of these fragments
       will be forwarded and the rest will be dropped. IP does not provide
       any way for the intended recipient to ask for only the remaining
       fragments. In such a case there are two likely possibilities for what
       will happen next: either all of the fragments will eventually be
       retransmitted (as TCP will do), in which case the same problem will
       recur, or the sender will not realize that its packet has been dropped
       and data will simply be lost (as some UDP-based protocols will do).
       Either way, it is possible that no forward progress will ever occur.
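The fragment pathology is easy to reproduce in a toy model: with the bucket
kept nearly empty by other traffic, a back-to-back group of fragments gets
only as many members through as the leftover tokens allow. Again, this is a
sketch with invented names, not OVS's implementation:

```python
# Toy model of policing a fragmented IP packet (illustrative only).
# Background traffic keeps the bucket nearly empty, so only a small number
# of tokens is available when the fragment group arrives, and no refill
# happens between back-to-back fragments.

def police_fragments(num_fragments, tokens_available, tokens_per_fragment):
    """Return how many fragments of a back-to-back group are forwarded."""
    forwarded = 0
    for _ in range(num_fragments):
        if tokens_available >= tokens_per_fragment:
            tokens_available -= tokens_per_fragment
            forwarded += 1
    return forwarded

# A 4-fragment packet arriving when only one fragment's worth of tokens is
# in the bucket: one fragment is forwarded and three are dropped, and IP
# gives the receiver no way to request just the missing three.
police_fragments(num_fragments=4, tokens_available=1, tokens_per_fragment=1)
```

Because the surviving fragment can never be reassembled, a retransmitting
sender hits the same wall again, which is the "no forward progress" case the
manual warns about.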

Q: How do I configure ingress policing?

    A: A policing policy can be configured on an interface to drop packets that
    arrive at a higher rate than the configured value.  For example, the
    following commands will rate-limit traffic that vif1.0 may generate to
    10Mbps:

        $ ovs-vsctl set interface vif1.0 ingress_policing_rate=10000
        $ ovs-vsctl set interface vif1.0 ingress_policing_burst=8000

    Traffic policing can interact poorly with some network protocols and can
    have surprising results.  The "Ingress Policing" section of
    ovs-vswitchd.conf.db(5) discusses the issues in greater detail.

Q: I configured QoS correctly, but my measurements show that it isn't working
as well as I expect.

    A: With the Linux kernel, the Open vSwitch implementation of QoS has two
    aspects:

    - Open vSwitch configures a subset of Linux kernel QoS features, according
      to what is in OVSDB.  It is possible that this code has bugs.  If you
      believe that this is so, then you can configure the Linux traffic control
      (QoS) stack directly with the "tc" program.  If you get better results
      that way, you can send a detailed bug report to bugs at openvswitch.org.

      It is certain that Open vSwitch cannot configure every Linux kernel QoS
      feature.  If you need some feature that OVS cannot configure, then you
      can also use "tc" directly (or add that feature to OVS).

    - The Open vSwitch implementation of OpenFlow allows flows to be directed
      to particular queues.  This is pretty simple and unlikely to have serious
      bugs at this point.

    However, most problems with QoS on Linux are not bugs in Open vSwitch at
    all.  They tend to be either configuration errors (please see the earlier
    questions in this section) or issues with the traffic control (QoS) stack
    in Linux.  The Open vSwitch developers are not experts on Linux traffic
    control.  We suggest that, if you believe you are encountering a problem
    with Linux traffic control, you consult the tc manpages (e.g. tc(8),
    tc-htb(8), tc-hfsc(8)), web resources (e.g. http://lartc.org/), or mailing
    lists (e.g. http://vger.kernel.org/vger-lists.html#netdev).
