[ovs-discuss] Questions about some performance issues / bandwidth limitation inside Open vSwitch

Benoit ML ben42ml at gmail.com
Fri Jan 21 10:30:50 UTC 2011


Hello,

I'm currently working with Open vSwitch.
Our goal is to use it as a big virtual switch for hundreds of VMs.

Functionally it's perfect, it works great. Nice piece of work! :)

But I've got some performance issues, and I have some questions about them ;)

Well, I use RHEL6/KVM and Open vSwitch 1.0.3.
There is a central server running Open vSwitch, and many other Open vSwitch
hosts connected to it with GRE tunnels.


I've run many tests, and the results are quite interesting:

1/ In standalone mode (i.e. VMs on the same hypervisor):
========================================================================================
Globally I get a throughput of about 80% of the main network card:
- with a 10Gbit card, I get 8Gbit max bandwidth
- with a 1Gbit card, I get 800Mbit max bandwidth

I've run many tests (kernel optimisation, fewer or more VMs, etc.) and the
result is pretty much identical: ~80% of the main network card.
Well, that's not bad at all, but is there a hard limitation in the code of
Open vSwitch? An auto-probe of the maximum bandwidth?

In the documentation I've found this about qos/max_rate:
-------------------------------------------------------------------
Maximum rate shared by all queued traffic, in bit/s. Optional. If not
specified, for physical interfaces, the default is the link rate. For other
interfaces or if the link rate cannot be determined, the default is
currently 100 Mbps.
-------------------------------------------------------------------
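In case it's useful, here is how I've understood one can pin an explicit
max-rate on a port instead of relying on the probed link rate (just a
sketch based on the ovs-vsctl manpage; eth0 and the 10 Gbit/s value are my
assumptions, adapt them to your setup):
-------------------------------------------------------------------
# create an HTB QoS record with an explicit 10 Gbit/s ceiling and
# attach it to the physical port, overriding the probed link rate
ovs-vsctl set port eth0 qos=@newqos -- \
  --id=@newqos create qos type=linux-htb other-config:max-rate=10000000000

# later, to detach the QoS record again
ovs-vsctl clear port eth0 qos
-------------------------------------------------------------------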

Open vSwitch seems to use something called "ETHTOOL_GSET" when a
port/interface is added.
From my investigation, it seems that this function queries the link
speed of the device.
Is there a kind of relation?

But it's strange, because when I query a tap interface with ethtool
I get: "Speed: 10Mb/s".
I don't know how to change this parameter, because ethtool can't do it:
"Operation not supported".
In practice the VM throughput can go well above 10Mb/s, so I suppose that
Open vSwitch doesn't care about this value.
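For reference, this is roughly how I've been checking whether any rate
limit is actually applied to a tap port (vnet0 is an assumption, use your
own tap name):
-------------------------------------------------------------------
# what the kernel driver reports for the tap
# (this is where I see the odd "Speed: 10Mb/s")
ethtool vnet0 | grep Speed

# check whether OVS has ingress policing set on the interface
# (0 should mean no rate limit)
ovs-vsctl get interface vnet0 ingress_policing_rate

# look at the traffic-control qdisc actually installed on the tap
tc qdisc show dev vnet0
-------------------------------------------------------------------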


2/ With GRE/CAPWAP tunnels: VMs across many hypervisors
========================================================================================
In this configuration, a single VM's throughput doesn't exceed about
100-120 Mbit/s.
If I run lots of VMs, I can saturate the link (10Gbit), but it takes a
lot of VMs.
I've done some tests with a CAPWAP tunnel, and the results are pretty much
the same.

Is there a bandwidth limitation inside Open vSwitch when using a GRE/CAPWAP
tunnel?
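One thing I still want to rule out on my side is fragmentation caused by
the GRE encapsulation overhead, since the outer IP+GRE headers shrink the
usable payload. A rough check (192.168.1.10 and the guest's eth0 are
assumptions for the example):
-------------------------------------------------------------------
# how the tunnel port is created on each hypervisor
ovs-vsctl add-port br0 gre0 -- set interface gre0 \
  type=gre options:remote_ip=192.168.1.10

# from inside a VM: test whether near-full-size packets cross the
# tunnel without fragmenting (-M do forbids fragmentation)
ping -M do -s 1472 <other-vm-ip>

# if that fails, lower the guest MTU to leave room for the overhead
ip link set dev eth0 mtu 1400
-------------------------------------------------------------------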



Well, in the end, if you need more details, just ask ;)
Any help is welcome :)


Regards,

--
Benoit