[ovs-discuss] Openvswitch on Xen 5.6 disrupting flows on add and delete

Martin Willemsma martin.willemsma at saasplaza.com
Wed Nov 16 17:22:56 UTC 2011


Hello,

Situation:

We are experiencing performance issues with Open vSwitch on XenServer. We use the vswitch to regulate traffic between customer machines on a shared
platform: a customer can only connect to machines that 'belong' to that customer. To gather the information needed for this, we created a script for
internal use only.

Script function in a nutshell:

  * get the nodes that belong to customer X using a web call
  * add generic rules (gateways and such)
  * add flows src -- dst accordingly

Nothing fancy, it just automates the adding/deleting of flows within a datapath. During a regular run some flows get added and some get deleted, e.g.:

Added 16 flow(s)
Deleted 14 flow(s)
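
For context, a minimal sketch of what such a script boils down to (the real script is not included here; the bridge name, the inventory URL and the exact flow syntax below are illustrative assumptions, using plain ovs-ofctl):

    #!/bin/bash
    # Sketch only -- bridge, URL and node list format are assumptions.
    BRIDGE="xenbr1"     # bridge backing dp1 (assumed)
    CUSTOMER="$1"

    # 1. get the nodes that belong to customer X via a web call (hypothetical endpoint)
    NODES=$(curl -s "https://inventory.example.local/nodes?customer=${CUSTOMER}")

    # 2. add generic rules (gateways and such)
    ovs-ofctl add-flow "$BRIDGE" "priority=200,arp,actions=normal"

    # 3. add flows src -- dst for every pair of customer nodes
    for SRC in $NODES; do
        for DST in $NODES; do
            [ "$SRC" = "$DST" ] && continue
            ovs-ofctl add-flow "$BRIDGE" \
                "priority=100,ip,nw_src=${SRC},nw_dst=${DST},actions=normal"
        done
    done
    # stale entries are removed analogously with 'ovs-ofctl del-flows'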


Setup:

XenServer 5.6 Service Pack 2

ovs-vswitchd (Open vSwitch) 1.0.2

OpenFlow versions 0x1:0x1

---

ovs-dpctl show:

system@dp0:
        flows: cur:5, soft-max:8192, hard-max:1048576
        ports: cur:2, max:1024
        groups: max:16
        lookups: frags:0, hit:264762685, missed:23143102, lost:52
        queues: max-miss:100, max-action:100
        port 0: xenbr0 (internal)
        port 1: eth0
system@dp1:
        flows: cur:255, soft-max:8192, hard-max:1048576
        ports: cur:31, max:1024
        groups: max:16
        lookups: frags:0, hit:1792946153, missed:392494847, lost:1806195
        queues: max-miss:100, max-action:100
        port 0: xenbr1 (internal)
        port 1: eth1
        port 2: xapi1 (internal)
        port 3: xapi12 (internal)
        port 4: vif147.0
        port 5: xapi13 (internal)
        port 6: vif148.0
        port 7: xapi14 (internal)
        port 8: vif149.0
        port 9: xapi15 (internal)
        port 10: xapi16 (internal)
        port 11: vif151.0
        port 12: xapi26 (internal)
        port 13: xapi17 (internal)
        port 14: xapi18 (internal)
        port 15: vif152.1
        port 16: vif150.0
        port 17: xapi19 (internal)
        port 18: xapi27 (internal)
        port 19: xapi33 (internal)
        port 20: xapi28 (internal)
        port 21: xapi34 (internal)
        port 22: xapi35 (internal)
        port 23: xapi40 (internal)
        port 24: xapi41 (internal)
        port 25: xapi42 (internal)
        port 26: xapi43 (internal)
        port 27: xapi44 (internal)
        port 28: xapi45 (internal)
        port 29: xapi46 (internal)
        port 30: vif153.0
system@dp2:
        flows: cur:19, soft-max:1024, hard-max:1048576
        ports: cur:2, max:1024
        groups: max:16
        lookups: frags:0, hit:1667986286, missed:8951406, lost:13219
        queues: max-miss:100, max-action:100
        port 0: xenbr2 (internal)
        port 1: eth2
system@dp3:
        flows: cur:6, soft-max:1024, hard-max:1048576
        ports: cur:2, max:1024
        groups: max:16
        lookups: frags:0, hit:7368959, missed:8706606, lost:13304
        queues: max-miss:100, max-action:100
        port 0: xenbr3 (internal)
        port 1: eth3


Issues:

While adding or deleting rules in the datapath:

  * packets are being dropped
  * 2 noticeably high pings
  * flows in the datapath are being dropped (it seems)
  * 1 min load average on the XenServer is high, 3-5; ovs-vswitchd at 101% CPU

The 'missed' counter increases, eventually 'lost' increases as well, and the number of current flows in the datapath decreases.


ovs-dpctl show system@dp1:

system@dp1:
        flows: cur:306, soft-max:65536, hard-max:1048576
        ports: cur:30, max:1024
        groups: max:16
        lookups: frags:0, hit:992564, missed:795173, lost:532618
        queues: max-miss:100, max-action:100
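
A rough way to watch the ovs-vswitchd CPU usage and the dp1 counters side by side while the script runs (a sketch; pidof and ps are assumptions about the dom0 tooling, the ovs-dpctl call is the same one shown above):

    # print vswitchd CPU and the dp1 lookup counters once per second
    while true; do
        ps -o pid,pcpu,comm -p "$(pidof ovs-vswitchd)" | tail -n 1
        ovs-dpctl show system@dp1 | grep lookups
        sleep 1
    done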



Reproduce:

Generate flows by issuing hping to random nodes on the Xen host:

hping3 -S -L 0 -p 3389 --fast <node1>
hping3 -S -L 0 -p 3389 --fast <node2>
hping3 -S -L 0 -p 3389 --fast <node3>
hping3 -S -L 0 -p 3389 --fast <node4>

Dump the flows in system@dp1:

while true ; do ovs-dpctl dump-flows system@dp1 | grep <source host> | wc -l ; sleep 0.5 ; done

IDLE:

19
19
19

Flows in system@dp1 while adding/removing rules:

367
337
367
343
319
355
60    <- drops
90    <- drops
126   <- drops
156   <- drops
186   <- drops
216
246
282
^C


ON HOST, show datapaths:

# while true; do ovs-dpctl show | egrep 'system|flows|missed:|lost:'; sleep 1 ; echo " " ; done
system@dp0:
        flows: cur:12, soft-max:8192, hard-max:1048576
        lookups: frags:0, hit:110947, missed:55395, lost:0
system@dp1:
        flows: cur:723, soft-max:65536, hard-max:1048576
        lookups: frags:0, hit:1032970, missed:827870, lost:532618
system@dp2:
        flows: cur:4, soft-max:1024, hard-max:1048576
        lookups: frags:0, hit:3110, missed:4807, lost:0
system@dp3:
        flows: cur:26, soft-max:1024, hard-max:1048576
        lookups: frags:0, hit:2785444, missed:5148, lost:0


Has anyone seen this behavior before? Any hints on how to solve this problem?


---
Kind regards,

Martin Willemsma


