[ovs-discuss] Is it possible to limit bandwidth for particular OVS port?

Tim Bagr timbgr at gmail.com
Tue Mar 31 02:19:07 UTC 2015


Thanks for the reply!

Good news for now. I've tried the same config with another (non-egress)
port of my OVS switch *br3* and now the queue is actually working.


*# ovs-vsctl set port vnet0 qos=@nqos -- --id=@nqos create qos
type=linux-htb other-config:max-rate=10000000 queues=123=@q1 --
--id=@q1 create queue other-config:max-rate=1000*
55bb4a0f-d818-4292-91cc-ba6a5c3e4d66
5858733e-d6a0-4324-89fb-af9526fe4768

vnet0 here is the port where the guest VM is connected.
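
As a sanity check, following the earlier tc suggestion, the htb classes
should now actually exist on the netdevice backing vnet0. Presumably:

*# tc class show dev vnet0*

should list the root class 1:fffe plus a child class for queue 123; if the
same convention as before holds (queue 111 showed up as class 1:70), I'd
expect queue 123 to appear as something like 1:7c.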

*# ovs-ofctl show br3*
OFPT_FEATURES_REPLY (xid=0x2): dpid:00007a42d1ca7e05
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC
SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST
ENQUEUE
 3(b2p): addr:ae:a4:9d:8b:9f:a9
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 *7(vnet0): addr:fe:54:00:ac:e1:a6*
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
 8(vnet1): addr:fe:54:00:9b:d9:a1
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
 LOCAL(br3): addr:7a:42:d1:ca:7e:05
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0


*# ovs-ofctl add-flow br3
"priority=10,ip,ip_src=10.1.0.1,ip_dst=192.168.126.177,actions=enqueue:7:123"*

Here 10.1.0.1 is the outside PC that the guest VM pings, and
192.168.126.177 is the guest VM's IP.
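
Per Gurucharan's suggestion below, I assume the kernel datapath flows can
confirm the marking while the ping is running, e.g.:

*# ovs-dpctl dump-flows | grep 192.168.126.177*

which should show an actions:set(skb_priority(...)) entry followed by the
output port, analogous to the set(skb_priority(0x10070)),2 example quoted
below.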

*[root at VM ~]# ping 10.1.0.1 -i 0.1 -s 250*
PING 10.1.0.1 (10.1.0.1) 250(278) bytes of data.
258 bytes from 10.1.0.1: icmp_seq=1 ttl=63 time=0.350 ms
258 bytes from 10.1.0.1: icmp_seq=2 ttl=63 time=0.195 ms
258 bytes from 10.1.0.1: icmp_seq=3 ttl=63 time=0.205 ms
258 bytes from 10.1.0.1: icmp_seq=4 ttl=63 time=0.192 ms
258 bytes from 10.1.0.1: icmp_seq=5 ttl=63 time=0.202 ms
258 bytes from 10.1.0.1: icmp_seq=6 ttl=63 time=0.216 ms
258 bytes from 10.1.0.1: icmp_seq=7 ttl=63 time=0.215 ms
258 bytes from 10.1.0.1: icmp_seq=8 ttl=63 time=0.187 ms
258 bytes from 10.1.0.1: icmp_seq=9 ttl=63 time=0.178 ms
258 bytes from 10.1.0.1: icmp_seq=10 ttl=63 time=0.207 ms
258 bytes from 10.1.0.1: icmp_seq=11 ttl=63 time=0.229 ms
258 bytes from 10.1.0.1: icmp_seq=12 ttl=63 time=0.189 ms
258 bytes from 10.1.0.1: icmp_seq=13 ttl=63 time=93.4 ms
258 bytes from 10.1.0.1: icmp_seq=14 ttl=63 time=188 ms
258 bytes from 10.1.0.1: icmp_seq=15 ttl=63 time=274 ms
258 bytes from 10.1.0.1: icmp_seq=16 ttl=63 time=370 ms
258 bytes from 10.1.0.1: icmp_seq=17 ttl=63 time=454 ms
258 bytes from 10.1.0.1: icmp_seq=18 ttl=63 time=547 ms
258 bytes from 10.1.0.1: icmp_seq=19 ttl=63 time=633 ms
258 bytes from 10.1.0.1: icmp_seq=20 ttl=63 time=727 ms
258 bytes from 10.1.0.1: icmp_seq=21 ttl=63 time=812 ms
258 bytes from 10.1.0.1: icmp_seq=22 ttl=63 time=907 ms
258 bytes from 10.1.0.1: icmp_seq=23 ttl=63 time=992 ms
258 bytes from 10.1.0.1: icmp_seq=24 ttl=63 time=1085 ms
258 bytes from 10.1.0.1: icmp_seq=25 ttl=63 time=1170 ms
258 bytes from 10.1.0.1: icmp_seq=26 ttl=63 time=1265 ms
258 bytes from 10.1.0.1: icmp_seq=27 ttl=63 time=1352 ms

As we can see, the flow is actually working!

*# ovs-ofctl dump-flows br3*
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=598.125s, table=0, n_packets=438,
n_bytes=119562, idle_age=9, priority=0 actions=NORMAL
 cookie=0x0, duration=53.638s, table=0, n_packets=250, n_bytes=49850,
idle_age=13, priority=10,ip,nw_src=10.1.0.1,nw_dst=192.168.126.177
actions=enqueue:7:123
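
To be sure it's not just the OpenFlow counters this time, the tc statistics
on vnet0 should show the queue itself carrying (and delaying) packets:

*# tc -s class show dev vnet0*

I'd expect a non-zero "Sent ... bytes ... pkt" counter on the class that
corresponds to queue 123, unlike the idle 1:70 class we saw earlier on br3.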



But I still wonder why it was not working when I set QoS on the *br3* port.

On Fri, Mar 27, 2015 at 6:46 PM, Gurucharan Shetty <shettyg at nicira.com>
wrote:

> On Thu, Mar 26, 2015 at 6:35 PM, Tim Bagr <timbgr at gmail.com> wrote:
> > Thanks for your reply! It helps a lot. I hadn't used tc before; I just
> > watched large ping packets, and if they were delayed progressively, I
> > concluded that QoS was actually working.
> >
> > I tried again with patch-port and you're right.
> > # tc class show dev b2p
> > had shown nothing.
> >
> > So I tried the trick with the LOCAL interface again, which has the same
> > name as the bridge, in my case "br3".
> >
> > # ovs-vsctl set port br3 qos=@nqos -- --id=@nqos create qos type=linux-htb
> > other-config:max-rate=100000 queues=111=@nq -- --id=@nq create queue
> > other-config:max-rate=13000
> >
> > And then, tried to alter max-rate for QoS object:
> > # ovs-vsctl set qos cbba20c1-9afb-4092-ad37-628d70ce2139
> > other-config:max-rate=100000
> > # ovs-vsctl set qos cbba20c1-9afb-4092-ad37-628d70ce2139
> > other-config:max-rate=10000
> > # ovs-vsctl set qos cbba20c1-9afb-4092-ad37-628d70ce2139
> > other-config:max-rate=1000000
> >
> > And after each command I sent pings from the VM to the outside network
> > and checked whether they were limited. The values actually worked: when I
> > set the rate to 10000, for example, pings from the VM looked like:
> > [root at VM ~]# ping -i 0.1 outside.host -s 500
> > PING outside.host (10.1.1.168) 500(528) bytes of data.
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=1 ttl=63 time=0.242 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=2 ttl=63 time=0.270 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=3 ttl=63 time=0.224 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=4 ttl=63 time=67.7 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=5 ttl=63 time=85.7 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=6 ttl=63 time=75.3 ms
> > ^C508 bytes from 10.1.1.168: icmp_seq=7 ttl=63 time=84.8 ms
> >
> > And when I set the rate to about max-rate=100000000, the pings went flawlessly:
> >
> > [root at localhost ~]# ping -i 0.1 outside.host -s 500
> > PING outside.host (10.1.1.168) 500(528) bytes of data.
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=1 ttl=63 time=0.270 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=2 ttl=63 time=0.201 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=3 ttl=63 time=0.256 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=4 ttl=63 time=0.249 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=5 ttl=63 time=0.203 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=6 ttl=63 time=0.224 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=7 ttl=63 time=0.216 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=8 ttl=63 time=0.217 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=9 ttl=63 time=0.226 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=10 ttl=63 time=0.204 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=11 ttl=63 time=0.227 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=12 ttl=63 time=0.220 ms
> > 508 bytes from outside.host (10.1.1.168): icmp_seq=13 ttl=63 time=0.260 ms
> >
> >
> >
> > Then I set OpenFlow rules to assign the queue number to specific traffic
> > (with src and dst addresses equal to that outside.host, which I send ping
> > requests to):
> >
> > The rules are matched, and n_packets grew when I sent pings:
> > # ovs-ofctl dump-flows br3 --sort=priority
> >  cookie=0x0, duration=155295.034s, table=0, n_packets=12692,
> > n_bytes=6188205, priority=0 actions=NORMAL
> >  cookie=0x0, duration=482.039s, table=0, n_packets=112, n_bytes=116704,
> > priority=15,ip,nw_src=10.1.1.168 actions=set_queue:111,NORMAL
> >  cookie=0x0, duration=531.879s, table=0, n_packets=259, n_bytes=269878,
> > priority=15,ip,nw_dst=10.1.1.168 actions=set_queue:111,NORMAL
> >  cookie=0x0, duration=420.512s, table=0, n_packets=640, n_bytes=751480,
> > priority=20,ip,nw_src=10.1.1.168 actions=enqueue:6:111
> >  cookie=0x0, duration=347.242s, table=0, n_packets=573, n_bytes=681666,
> > priority=20,ip,nw_dst=10.1.1.168 actions=enqueue:LOCAL:111
> >
> > But the tc command showed that the queue (I suppose it's 1:70) is not
> > active for that traffic:
> > # tc -s class show dev br3
> > class htb 1:1 parent 1:fffe prio 0 rate 12000bit ceil 1000Kbit burst 1563b cburst 1564b
> >  Sent 92820 bytes 128 pkt (dropped 0, overlimits 0 requeues 0)
> >  rate 0bit 0pps backlog 0b 0p requeues 0
> >  lended: 8 borrowed: 120 giants: 0
> >  tokens: 15854166 ctokens: 190250
> When you do the above and ping, you can look at the actual kernel
> datapath flows with 'ovs-dpctl dump-flows'. You will see a flow of
> the following form (my IP addresses are different):
>
>
> recirc_id(0),skb_priority(0),in_port(1),eth(src=00:0c:29:d6:c0:2e,dst=00:0c:29:23:ee:7f),eth_type(0x0800),ipv4(src=
> 192.168.1.1/255.255.255.255,dst=192.168.1.2/0.0.0.0,proto=1/0,tos=0/0,ttl=64/0,frag=no/0xff),
> packets:5, bytes:490, used:0.212s,
> actions:set(skb_priority(0x10070)),2
>
> The set(skb_priority(0x10070)),2 is the actual marking of packets to
> go to tc class 1:70 (which corresponds to OpenFlow queue 111, I think).
> So my take is that the packet has been marked and sent to output port
> 2 (in my case eth1). But since the queue is only in br3, I think it
> never gets hit.
>
> This is uncharted territory for me. My reading is that QoS can only
> be applied on the egress port.
>
>
>
>
>
> >
> > class htb 1:fffe root rate 1000Kbit ceil 1000Kbit burst 1500b cburst 1500b
> >  Sent 92820 bytes 128 pkt (dropped 0, overlimits 0 requeues 0)
> >  rate 0bit 0pps backlog 0b 0p requeues 0
> >  lended: 120 borrowed: 0 giants: 0
> >  tokens: 182250 ctokens: 182250
> >
> > class htb 1:70 parent 1:fffe prio 0 rate 12000bit ceil 13000bit burst 1563b cburst 1563b
> >  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> >  rate 0bit 0pps backlog 0b 0p requeues 0
> >  lended: 0 borrowed: 0 giants: 0
> >  tokens: 16291666 ctokens: 15038461
> >
> >
> > Maybe it's some undiscovered bug? Or does it work this way by design?
> >
> >
> >
> >
> >
> >
> >
> > On Thu, Mar 26, 2015 at 7:44 PM, Gurucharan Shetty <shettyg at nicira.com>
> > wrote:
> >>
> >> On Wed, Mar 25, 2015 at 7:03 PM, Tim Bagr <timbgr at gmail.com> wrote:
> >> > Hello Gurucharan,
> >> >
> >> > Thanks for the reply.
> >> >
> >> > I have no physical ports attached to that OVS bridge. My intention was
> >> > exactly to limit bandwidth on the br3 port (i.e. the OpenFlow keyword
> >> > LOCAL); another attempt was to limit bandwidth on the internal patch
> >> > port, which connects the br3 bridge with the br2 bridge. Both bridges
> >> > have VMs attached, and they may communicate with each other through the
> >> > patch port.
> >> >
> >> > # ovs-vsctl show
> >> > b11e83cb-d741-4a59-90f7-ea9693d508cf
> >> >     Bridge "br2"
> >> >         Port "b3p"
> >> >             Interface "b3p"
> >> >                 type: patch
> >> >                 options: {peer="b2p"}
> >> >         Port "vnet1"
> >> >             Interface "vnet1"
> >> >         Port "br2"
> >> >             Interface "br2"
> >> >                 type: internal
> >> >     Bridge "br3"
> >> >         Port "br3"
> >> >             Interface "br3"
> >> >                 type: internal
> >> >         Port "vnet0"
> >> >             Interface "vnet0"
> >> >         Port "b2p"
> >> >             Interface "b2p"
> >> >                 type: patch
> >> >                 options: {peer="b3p"}
> >> >
> >> > Here vnet1 is the port of the 1st VM and vnet0 is the port of the 2nd VM.
> >> > Now I just want to limit bandwidth from VM2 to VM1.
> >> > So I run:
> >> > # ovs-vsctl set Port b2p qos=@newq -- --id=@newq create qos
> >> > type=linux-htb other-config:max-rate=100000000 queues=111=@q1 --
> >> > --id=@q1 create queue other-config:min-rate=0 other-config:max-rate=10
> >> > 65c5488d-2066-4d97-b21f-ba369a8b2920
> >> > 1e3726b4-12e2-4184-b8e4-13f9c692095f
> >> I _think_ (not 100% sure) that you cannot apply QoS on a patch port.
> >> A patch port has no netdevice backing it. OVS uses Linux tc to apply
> >> QoS, and since tc cannot see the OVS patch port, I don't think anything
> >> will come out of it.
> >>
> >> The way to verify that QoS is really working is to look at the tc stats
> >> for that queue. E.g., if you had applied QoS on eth1 with 3 queues (1,
> >> 2, 3), you would get:
> >>
> >> sudo tc class show dev eth1
> >> class htb 1:fffe root rate 900000Kbit ceil 900000Kbit burst 1462b cburst
> >> 1462b
> >> class htb 1:1 parent 1:fffe prio 0 rate 720000Kbit ceil 900000Kbit
> >> burst 1530b cburst 1462b
> >> class htb 1:2 parent 1:fffe prio 0 rate 12000bit ceil 90000Kbit burst
> >> 1563b cburst 1563b
> >> class htb 1:3 parent 1:fffe prio 0 rate 12000bit ceil 90000Kbit burst
> >> 1563b cburst 1563b
> >>
> >> For statistics: (to see how many packets went through etc)
> >> tc -s class show dev eth1
> >>
> >>
> >>
> >> >
> >> > # ovs-ofctl show br3
> >> > OFPT_FEATURES_REPLY (xid=0x2): dpid:00007a42d1ca7e05
> >> > n_tables:254, n_buffers:256
> >> > capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
> >> > actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC
> >> > SET_DL_DST
> >> > SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
> >> >  3(b2p): addr:ae:a4:9d:8b:9f:a9
> >> >      config:     0
> >> >      state:      0
> >> >      speed: 0 Mbps now, 0 Mbps max
> >> >  6(vnet0): addr:fe:54:00:ac:e1:a6
> >> >      config:     0
> >> >      state:      0
> >> >      current:    10MB-FD COPPER
> >> >      speed: 10 Mbps now, 0 Mbps max
> >> >  LOCAL(br3): addr:7a:42:d1:ca:7e:05
> >> >      config:     0
> >> >      state:      0
> >> >      speed: 0 Mbps now, 0 Mbps max
> >> > OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
> >> >
> >> > # ovs-ofctl add-flow br3 priority=5,in_port=6,actions=enqueue:3:111
> >> > # ovs-ofctl dump-flows br3
> >> > NXST_FLOW reply (xid=0x4):
> >> >  cookie=0x0, duration=3.285s, table=0, n_packets=0, n_bytes=0,
> >> > idle_age=3,
> >> > priority=5,in_port=6 actions=enqueue:3:111
> >> >  cookie=0x0, duration=79191.532s, table=0, n_packets=744,
> >> > n_bytes=431875,
> >> > idle_age=7, hard_age=65534, priority=0 actions=NORMAL
> >> >
> >> > Then I start to ping6 from VM2 to VM1, and the pings go through the
> >> > patch ports b3p and b2p. As I expected, the QoS queue is in effect and
> >> > limits the bandwidth to the very small max-rate=10.
> >> How do you know that QoS is in effect above? I guess based on the
> >> n_packets of the OpenFlow flow? That only tells you that OVS marked the
> >> packet to go to a QoS queue. But if the QoS queue has not been created in
> >> the Linux kernel, nothing will really happen.
> >>
> >> I imagine that OVS should show some error in the logs, but I did not
> >> see any (during my small test). That is the reason I am not 100% sure
> >> what will actually happen.
> >
> >
>