[ovs-discuss] Sending UDP traffic in openflow network

Ben Pfaff blp at nicira.com
Fri Oct 28 23:32:20 UTC 2011

You said earlier that when you turn off policing you still get packet
loss.  So I doubt that QoS or policing is the culprit.  Figure out why
you get packet loss without QoS or policing first, then try to apply
one or the other.
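To retest with QoS and policing fully out of the picture, both can be cleared before rerunning the measurement. A sketch, assuming the port and interface are named eth1 (adjust to your setup):

```shell
# Hypothetical interface/port name "eth1".
# Disable ingress policing (0 means off):
ovs-vsctl set interface eth1 ingress_policing_rate=0
ovs-vsctl set interface eth1 ingress_policing_burst=0

# Detach and delete any QoS and Queue rows so egress shaping is off too:
ovs-vsctl clear port eth1 qos
ovs-vsctl --all destroy QoS
ovs-vsctl --all destroy Queue
```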

On Fri, Oct 28, 2011 at 06:27:30PM -0500, Shan Hu wrote:
> Hi Ben,
> I just tried the Queue tables. What I used is linux-htb, which seems to also be a token-bucket approach, so when I sent UDP traffic there was still packet loss. Then I tried linux-hfsc (I simply followed the configuration cookbook for linux-htb and changed linux-htb to linux-hfsc), and there was still packet loss. Maybe my linux-hfsc configuration is wrong, since the rate limiting doesn't work, but is there a way to stop packet loss in UDP traffic?
> Shan
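For reference, a minimal linux-htb setup along the lines of the cookbook being described looks roughly like this. A sketch, not the poster's exact config; the port name eth1 and the 50 Mbps cap are assumptions, and rates in the QoS/Queue tables are in bits per second:

```shell
# Hypothetical port "eth1": cap egress at 50 Mbps via a single HTB queue.
ovs-vsctl -- set port eth1 qos=@newqos \
  -- --id=@newqos create qos type=linux-htb \
       other-config:max-rate=50000000 queues:0=@q0 \
  -- --id=@q0 create queue other-config:min-rate=50000000 \
       other-config:max-rate=50000000
```

To try HFSC instead, the same commands would use type=linux-hfsc.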
> ----- Original Message -----
> From: "Shan Hu" <shan.hu at utdallas.edu>
> To: "Ben Pfaff" <blp at nicira.com>
> Cc: discuss at openvswitch.org
> Sent: Wednesday, October 26, 2011 10:55:20 PM
> Subject: Re: [ovs-discuss] Sending UDP traffic in openflow network
> Thank you for the reply, Ben.
> Yes, I'm using policing, but even when I set policing_rate and burst to 0, there is still packet loss.
> I will try to use the Queue tables anyway.
> But I also have a question about the Queue tables QoS: will the bandwidth of one queue be 
> reserved all the time? By that I mean, say I reserve 20Mbps for queue1, and there is another queue2 I reserve
> 90Mbps for; however, the total bandwidth of this link is only 100Mbps. So when I push 20Mbps and 90Mbps
> of data to the two queues, respectively, will they back off to share the link?
> regards,
> Shan
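The oversubscription scenario asked about above, two queues whose minimum reservations (20M + 90M) exceed the 100 Mbps link, would be configured roughly like this. A sketch; the port name eth1 is an assumption:

```shell
# Hypothetical port "eth1": two queues with min-rate guarantees that
# together oversubscribe a 100 Mbps link.
ovs-vsctl -- set port eth1 qos=@qos \
  -- --id=@qos create qos type=linux-htb \
       other-config:max-rate=100000000 \
       queues:1=@q1 queues:2=@q2 \
  -- --id=@q1 create queue other-config:min-rate=20000000 \
  -- --id=@q2 create queue other-config:min-rate=90000000
```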
> ----- Original Message -----
> From: "Ben Pfaff" <blp at nicira.com>
> To: "Shan Hu" <shan.hu at utdallas.edu>
> Cc: discuss at openvswitch.org
> Sent: Wednesday, October 26, 2011 10:24:13 PM
> Subject: Re: [ovs-discuss] Sending UDP traffic in openflow network
> On Wed, Oct 26, 2011 at 10:22:18PM -0500, Shan Hu wrote:
> > I'm trying to test the QoS rate limiting of the kernel vSwitch; I use iperf
> > as my measurement tool.  Everything works fine with TCP, that is,
> > after I limit the rate of one port to, say, 50Mbps, the rate is limited
> > to 50Mbps correctly and packets are transferred 100%.  But when I turn
> > to UDP, I run into problems.  I have to limit bandwidth to at most
> > 4Mbps in order to transfer 100% of packets, and if I use more bandwidth
> > than 4Mbps, packet loss increases; the worst packet loss
> > percentage is almost 99%.
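For reference, the kind of iperf run being described is sketched below. The server address 10.0.0.2 is an assumption; -u selects UDP and -b sets the offered load:

```shell
# On the receiver:
iperf -s -u

# On the sender: UDP at the 4 Mbps load reported lossless above,
# then at a higher rate to reproduce the loss.
iperf -c 10.0.0.2 -u -b 4M
iperf -c 10.0.0.2 -u -b 50M
```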
> Are you using policing?  The documentation says a lot about problems
> with policing:
>      Ingress Policing:
>        These  settings  control  ingress policing for packets received on this
>        interface.  On a physical interface, this  limits  the  rate  at  which
>        traffic  is  allowed  into  the  system  from the outside; on a virtual
>        interface (one connected to a virtual machine), this limits the rate at
>        which the VM is able to transmit.
>        Policing is a simple form of quality-of-service that simply drops pack-
>        ets received in excess of the configured rate.  Due to its  simplicity,
>        policing  is  usually  less accurate and less effective than egress QoS
>        (which is configured using the QoS and Queue tables).
>        Policing is currently implemented only on Linux.  The Linux implementa-
>        tion uses a simple ``token bucket'' approach:
>               o      The  size  of  the  bucket  corresponds to ingress_polic-
>                      ing_burst.  Initially the bucket is full.
>               o      Whenever a packet is received,  its  size  (converted  to
>                      tokens)  is compared to the number of tokens currently in
>                      the bucket.  If the required number of tokens are  avail-
>                      able, they are removed and the packet is forwarded.  Oth-
>                      erwise, the packet is dropped.
>               o      Whenever it is not full,  the  bucket  is  refilled  with
>                      tokens at the rate specified by ingress_policing_rate.
>        Policing  interacts  badly  with some network protocols, and especially
>        with fragmented IP packets.   Suppose  that  there  is  enough  network
>        activity to keep the bucket nearly empty all the time.  Then this token
>        bucket algorithm will forward a single packet every so often, with  the
>        period depending on packet size and on the configured rate.  All of the
>        fragments of an IP packet are normally transmitted back-to-back, as a
>        group.   In  such  a  situation, therefore, only one of these fragments
>        will be forwarded and the rest will be dropped.  IP  does  not  provide
>        any  way for the intended recipient to ask for only the remaining frag-
>        ments.  In such a case there are two likely possibilities for what will
>        happen next: either all of the fragments will eventually be retransmit-
>        ted (as TCP will do), in which case the same problem will recur, or the
>        sender  will not realize that its packet has been dropped and data will
>        simply be lost (as some UDP-based protocols will do).  Either  way,  it
>        is possible that no forward progress will ever occur.
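The fragment-drop behavior described above can be sketched with a toy simulation (the numbers are made up for illustration and are not OVS internals): a nearly empty bucket refills enough for roughly one packet, so only the first fragment of a back-to-back group gets through.

```shell
#!/usr/bin/env bash
# Toy token-bucket policer: the bucket holds tokens measured in bytes.
# Four 1500-byte IP fragments arrive back-to-back while the bucket has
# only refilled to 1600 tokens -- just enough for one fragment.
bucket=1600
forwarded=0
dropped=0
for size in 1500 1500 1500 1500; do
  if [ "$bucket" -ge "$size" ]; then
    bucket=$((bucket - size))    # spend tokens, forward the fragment
    forwarded=$((forwarded + 1))
  else
    dropped=$((dropped + 1))     # not enough tokens: fragment is dropped
  fi
done
echo "forwarded=$forwarded dropped=$dropped"
```

Since IP gives the receiver no way to ask for just the missing fragments, the one forwarded fragment is useless and the whole datagram is lost, exactly the pathology the documentation describes.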
>        ingress_policing_rate: integer, at least 0
>               Maximum rate for data received on this interface, in kbps.  Data
>               received faster than this  rate  is  dropped.   Set  to  0  (the
>               default) to disable policing.
>        ingress_policing_burst: integer, at least 0
>               Maximum  burst  size for data received on this interface, in kb.
>               The default burst size if set to 0 is 1000 kb.  This  value  has
>               no effect if ingress_policing_rate is 0.
>               Specifying  a  larger burst size lets the algorithm be more for-
>               giving, which is important for protocols like TCP that react se-
>               verely  to  dropped  packets.  The burst size should be at least
>               the size of the interface's MTU.  Specifying  a  value  that  is
>               numerically  at  least  as large as 10% of ingress_policing_rate
>               helps TCP come closer to achieving the full rate.
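Applied to the 50 Mbps figure from earlier in the thread, a policing setup following the burst guidance above might look like this. The interface name is an assumption; units are kbps and kb as documented:

```shell
# Hypothetical interface "eth1": police ingress to 50 Mbps (50000 kbps)
# with a burst of 10% of the rate (5000 kb), per the guidance above.
ovs-vsctl set interface eth1 ingress_policing_rate=50000
ovs-vsctl set interface eth1 ingress_policing_burst=5000
```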
> If you're not using policing, please tell us about your configuration.
> _______________________________________________
> discuss mailing list
> discuss at openvswitch.org
> http://openvswitch.org/mailman/listinfo/discuss
