[ovs-discuss] Questions about some performance issues / bandwidth limitation inside openvswitch

Benoit ML ben42ml at gmail.com
Mon Jan 24 12:55:54 UTC 2011


Hi,

Thank you for your reply.

In bridge mode performance is good, but I haven't done any deep testing yet...

Well, you pointed me to the MTU, and indeed it seems I have an issue there.
The MTU of the GRE tunnel appears to be 1524. Can you confirm that?
Do you have any suggestions for the guest MTU, and for the KVM tap device?
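
If I understand the encapsulation correctly, each guest Ethernet frame (up
to 1514 bytes with a 1500-byte guest MTU) gets wrapped in an outer IP header
(20 bytes) plus a GRE header (4 bytes), so the outer packet can reach 1538
bytes, assuming no GRE key or checksum. So I suppose I should try one of
these (vnet0 is just an example tap name):

    # lower the MTU inside the guest and on its tap device so the
    # encapsulated packet still fits in a 1500-byte physical MTU
    ip link set dev eth0 mtu 1462       # inside the guest
    ip link set dev vnet0 mtu 1462      # tap device on the hypervisor

    # or raise the MTU of the physical NIC that carries the tunnel
    ip link set dev eth0 mtu 1538

Does that sound right?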


Thanks in advance.



2011/1/21 Jesse Gross <jesse at nicira.com>

> On Fri, Jan 21, 2011 at 2:30 AM, Benoit ML <ben42ml at gmail.com> wrote:
> > Hello,
> > I'm currently working with Open vSwitch.
> > Our goal is to use it as one big virtual switch for hundreds of VMs.
> > Functionally it's perfect, works great. Nice piece of work! :)
> > But I have some performance issues, and some questions about them ;)
> > I use RHEL6/KVM and openvswitch 1.0.3.
> > There is a central server running Open vSwitch and many Open vSwitch
> > hosts connected to it with GRE tunnels.
> >
> > I've run many tests and the results are quite interesting:
> > 1/ In standalone mode (i.e. VMs on the same hypervisor):
> > ========================================================================================
> > Globally I get a throughput of about 80% of the main network card:
> > - with a 10Gbit card, I get 8Gbit max bandwidth
> > - with a 1Gbit card, I get 800Mbit max bandwidth
> > I've run many tests (kernel tuning, fewer or more VMs, etc.) and the
> > results are pretty much identical: ~80% of the main network card.
> > Well, that's not bad at all, but is there a hard limit in the code of
> > openvswitch? An auto-probe of the maximum bandwidth?
>
> No, there's no probing of bandwidth.
>
> What does the performance look like in your setup without Open
> vSwitch?  Open vSwitch can easily switch 10Gbps of traffic on modern
> hardware.  However, other components of the system (such as the
> hypervisor) can add significant overhead.
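>
> A simple baseline is to run something like iperf directly between the
> hypervisors over the physical NICs (no bridge, no tunnel), e.g.:
>
>     iperf -s                            # on one hypervisor
>     iperf -c <other-hypervisor> -t 30   # on the other
>
> and then compare that with the same test run VM to VM.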
>
> > In the documentation I found this about qos/max_rate:
> > -------------------------------------------------------------------
> > Maximum rate shared by all queued traffic, in bit/s. Optional. If not
> > specified, for physical interfaces, the default is the link rate. For
> > other interfaces or if the link rate cannot be determined, the default
> > is currently 100 Mbps.
> > -------------------------------------------------------------------
>
> If you have not enabled QoS then this should not affect anything.
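>
> (For reference, max-rate only matters if you explicitly attach a QoS
> record to a port, e.g. something along these lines, where "eth1" and the
> rate value are just placeholders:
>
>     ovs-vsctl set port eth1 qos=@newqos -- \
>       --id=@newqos create qos type=linux-htb other-config:max-rate=1000000000
>
> If you have never run anything like that, no rate limit is being applied.)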
>
> > Open vSwitch seems to use something called "ETHTOOL_GSET" when a
> > port/interface is added.
> > From my investigation, it seems that this function queries the link
> > speed of the device.
> > Is there some kind of relation?
> > But it's strange, because when I query a tap interface with ethtool
> > I get: "Speed: 10Mb/s".
> > I don't know how to change this parameter, because ethtool can't do it:
> > "Operation not supported".
> > In practice the VM throughput can go well above 10Mb, so I suppose that
> > openvswitch doesn't care.
>
> None of this has an effect on performance.
>
> >
> > 2/ With GRE/CAPWAP tunnels: VMs across many hypervisors
> > ========================================================================================
> > In this configuration VM throughput doesn't exceed about 100/120 Mbit/s.
> > If I run lots of VMs, I can saturate the link (10Gbit), but I must run
> > lots of VMs.
> > I've done some tests with a CAPWAP tunnel, and the results are pretty
> > much the same.
> > Is there a bandwidth limitation inside openvswitch when using a
> > GRE/CAPWAP tunnel?
>
> Tunneling is also capable of significantly higher performance than
> this.  Perhaps you have fragmentation or other issues on your network.
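>
> One quick check is to send full-size packets with the don't-fragment bit
> set between two VMs on different hypervisors, for example:
>
>     ping -M do -s 1472 <other VM>
>
> If those fail while smaller packets go through, the tunnel overhead is
> pushing frames past the physical MTU and every large packet is being
> fragmented or dropped.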
>