[ovs-discuss] traffic control on OVS

durga c.vijaya.durga at gmail.com
Sun Dec 11 04:16:37 UTC 2016


I see.
The idea of rate limiting the interface was to simulate congestion and
resultant packet loss as observed in a real network.

Thanks for that clarification.

Cheers!
Durga


On Fri, Dec 9, 2016 at 7:26 PM, Justin Pettit <jpettit at ovn.org> wrote:

>
> > On Dec 8, 2016, at 12:33 AM, durga <c.vijaya.durga at gmail.com> wrote:
> >
> > Hi All,
> >
> > I have configured a very basic topology in mininet with 2 switches and a
> couple of hosts.
> > I have configured tc on all the 4 ports of both the switches and set the
> tc rate set at 1Mbps. Qdisc is htb.
> >
> > Now, as I incrementally generate traffic, though I observe drops via
> iperf, I fail to observe any drops on the OVS ports when I use 'ovs-ofctl
> dump-ports sw'.
> >
> > Am I looking in the wrong places, or putting things together incorrectly?
> >
> > Can someone help me understand why I don't see any drops when using
> ovs-ofctl?
> >
> > A few logs:
> >
> > UDP client:
> > root at vd-Veriton-M200-A780:~# iperf -c 10.0.0.2 -u -b 10Mbps
> > ------------------------------------------------------------
> > Client connecting to 10.0.0.2, UDP port 5001
> > Sending 1470 byte datagrams
> > UDP buffer size:  208 KByte (default)
> > ------------------------------------------------------------
> > [ 15] local 10.0.0.1 port 53217 connected with 10.0.0.2 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [ 15]  0.0-10.0 sec  12.5 MBytes  10.5 Mbits/sec
> > [ 15] Sent 8922 datagrams
> > [ 15] Server Report:
> > [ 15]  0.0-11.5 sec  10.7 MBytes  7.78 Mbits/sec   0.503 ms 1306/ 8921
> (15%)
> > [ 15]  0.0-11.5 sec  1 datagrams received out-of-order
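[Editor's note: the 15% loss figure in the server report above can be reproduced from the raw datagram counts it prints; a quick sanity check, not part of the original thread:]

```python
# Numbers taken from the iperf server report above: 1306 of 8921
# datagrams were lost over the 11.5 s interval.
lost = 1306
sent = 8921
loss_pct = 100 * lost / sent
print(f"{loss_pct:.1f}%")  # 14.6%, which iperf rounds up to 15%
```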
> >
> >
> > OVS output:
> >
> > root at vd-Veriton-M200-A780:~# ovs-ofctl dump-ports s33
> > OFPST_PORT reply (xid=0x2): 3 ports
> >   port LOCAL: rx pkts=0, bytes=0, drop=5, errs=0, frame=0, over=0, crc=0
> >            tx pkts=0, bytes=0, drop=0, errs=0, coll=0
> >   port  1: rx pkts=885787, bytes=33619595672, drop=0, errs=0, frame=0,
> over=0, crc=0
> >            tx pkts=403525, bytes=26653335, drop=0, errs=0, coll=0
> >   port  2: rx pkts=403521, bytes=26652615, drop=0, errs=0, frame=0,
> over=0, crc=0
> >            tx pkts=884488, bytes=33617623003, drop=0, errs=0, coll=0
> >
> >
> > TC qdisc output:
> >
> > root at vd-Veriton-M200-A780:~# tc -s -d class show dev s33-eth2
> > class htb 2:1 root prio 0 quantum 100000 rate 8Mbit ceil 8Mbit linklayer
> ethernet burst 1600b/1 mpu 0b overhead 0b cburst 1600b/1 mpu 0b overhead 0b
> level 0
> >  Sent 25232342 bytes 16683 pkt (dropped 1307, overlimits 0 requeues 0)
> >  rate 0bit 0pps backlog 0b 0p requeues 0
> >  lended: 12183 borrowed: 0 giants: 0
> >  tokens: 15781 ctokens: 15781
>
> The port stats are showing the statistics on the NIC.  However, when QoS
> is configured on OVS, OVS is just configuring the tc (traffic control)
> system in the kernel.  That implements a software-based rate limiter, so
> the packets are dropped before they ever reach the NIC.  Your tc output is
> indeed showing the dropped packets.
>
> --Justin
>
>
>
>
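[Editor's note: Justin's answer can be summarized as commands. This is a sketch using the port and switch names from this thread (s33, s33-eth2) and a 1 Mbps cap matching the original setup; adjust names and rates to your topology:]

```shell
# Configure QoS on an OVS port; under the hood this programs an htb
# qdisc via the kernel tc system (software-based rate limiting).
ovs-vsctl set port s33-eth2 qos=@newqos -- \
  --id=@newqos create qos type=linux-htb other-config:max-rate=1000000

# Packets dropped by the shaper are counted here, in the tc stats...
tc -s -d class show dev s33-eth2

# ...not in the OpenFlow port stats, which reflect the NIC counters
# (the shaper drops packets before they ever reach the NIC):
ovs-ofctl dump-ports s33
```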

