[ovs-discuss] OVS: Vlan performance on XenServer6

Justin Pettit jpettit at nicira.com
Tue Nov 15 18:26:58 UTC 2011


It was pointed out to me that I mistakenly mentioned XenServer 5.6--I meant XenServer 6.

--Justin


On Nov 15, 2011, at 2:09 AM, Justin Pettit wrote:

> Hi, Giuseppe.  Thanks for confirming that it's fixed in later builds.  The problem was that all the VLAN packets were going up to userspace.  This is caused by a bug that was fixed in the following commit:
> 
> 	http://openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commit;h=372865
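
The slow path described above shows up in the datapath statistics: packets with no matching kernel flow are counted as misses and punted to userspace, so a miss count that dwarfs the hit count is the symptom to look for. A minimal sketch of computing the miss ratio (the counter line is an invented sample for illustration; on a live host it comes from `ovs-dpctl show`):

```shell
# Sample "lookups" line as printed by `ovs-dpctl show` (values invented
# for illustration; a healthy datapath has far more hits than misses).
stats="lookups: hit:1200 missed:480000 lost:12"

# Extract the hit and missed counters.
hit=$(echo "$stats" | sed 's/.*hit:\([0-9]*\).*/\1/')
missed=$(echo "$stats" | sed 's/.*missed:\([0-9]*\).*/\1/')

# Integer percentage of lookups that took the userspace slow path.
echo "miss ratio: $(( missed * 100 / (hit + missed) ))%"
```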
> 
> Citrix has confirmed that it will be included in the first Boston (XenServer 5.6) maintenance release.
> 
> --Justin
> 
> 
> On Nov 14, 2011, at 2:59 AM, Giuseppe Civitella wrote:
> 
>> Just in case it can help someone else, I upgraded openvswitch to the
>> latest version and the problem vanished:
>> ------------------------------------------------------------
>> Client connecting to 10.12.0.3, TCP port 5001
>> TCP window size: 16.0 KByte (default)
>> ------------------------------------------------------------
>> [  3] local 10.12.0.4 port 34996 connected with 10.12.0.3 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  3]  0.0-10.0 sec  1.10 GBytes    941 Mbits/sec
>> 
>> ethernet driver details:
>> [root@xen1 ~]# ethtool -i eth0
>> driver: bnx2
>> version: 2.0.24b
>> firmware-version: 6.2.15 bc 5.2.3 NCSI 2.0.11
>> bus-info: 0000:01:00.0
>> 
>> Best regards,
>> Giuseppe
>> 
>> 
>> 
>> 2011/11/10 Giuseppe Civitella <giuseppe.civitella at gmail.com>:
>>> Hi all,
>>> 
>>> I've got an openstack setup using XenServer6 as hypervisor platform.
>>> Each XS6 server has 2 bond, one for management traffic and the other
>>> for VM traffic:
>>> 
>>> [root@xen1 ~]# ovs-appctl bond/list
>>> bridge  bond    type    slaves
>>> xapi2   bond1   balance-slb     eth3, eth2
>>> xapi1   bond0   balance-slb     eth1, eth0
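
For reference, bonds like the ones listed above are normally created by XAPI on XenServer, but the equivalent manual Open vSwitch configuration would look roughly like this (bridge, bond, and interface names taken from the listing; this is a sketch, not the XenServer-managed procedure):

```shell
# Create a two-slave bond on the VM-traffic bridge and set SLB mode.
ovs-vsctl add-bond xapi1 bond0 eth0 eth1
ovs-vsctl set port bond0 bond_mode=balance-slb
```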
>>> 
>>> While management traffic on bond1 does not require VLANs, the virtual
>>> machines' traffic on bond0 does.
>>> If I measure the traffic rate between the XS6 hosts and a network host
>>> on the management LAN, the result is near gigabit, as expected:
>>> ------------------------------------------------------------
>>> Client connecting to 10.1.1.1, TCP port 5001
>>> TCP window size: 16.0 KByte (default)
>>> ------------------------------------------------------------
>>> [  3] local 10.1.1.34 port 56030 connected with 10.1.1.1 port 5001
>>> [ ID] Interval       Transfer     Bandwidth
>>> [  3]  0.0-10.0 sec  1.10 GBytes   943 Mbits/sec
>>> 
>>> Even if I do the same test from a virtual machine to a physical host
>>> on the same DMZ, I get similar results.
>>> If I check the inter-VM traffic rate, however, the result is much
>>> different. Using iperf between virtual machines on the same VLAN but
>>> on different XS6 hosts, I got:
>>> ------------------------------------------------------------
>>> Client connecting to 10.12.0.10, TCP port 5001
>>> TCP window size: 16.0 KByte (default)
>>> ------------------------------------------------------------
>>> [  3] local 10.12.0.8 port 47465 connected with 10.12.0.10 port 5001
>>> [ ID] Interval       Transfer     Bandwidth
>>> [  3]  0.0-10.0 sec    200 MBytes    168 Mbits/sec
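
The gap between the two runs can be sanity-checked from iperf's own transfer figures, since iperf reports transfer in binary units (a "GByte" is 2^30 bytes) and bandwidth in decimal megabits. A quick check for the healthy run above:

```shell
# Convert the healthy run's transfer (1.10 GBytes in 10.0 s) to Mbits/sec:
# bytes * 8 bits / seconds / 1e6, with GBytes taken as 2^30 bytes.
awk 'BEGIN { printf "%.0f Mbits/sec\n", 1.10 * 2^30 * 8 / 10.0 / 1e6 }'
# -> 945 Mbits/sec, matching the ~941-943 Mbits/sec iperf reports
#    (iperf rounds the displayed transfer figure).
```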
>>> 
>>> Openvswitch version on all the hosts is:
>>> ovs-vswitchd (Open vSwitch) 1.0.99
>>> Compiled Aug  2 2011 11:50:44
>>> OpenFlow versions 0x1:0x1
>>> 
>>> 
>>> Does anyone have a suggestion to address this problem?
>>> 
>>> Thanks a lot
>>> Giuseppe
>>> 
>> _______________________________________________
>> discuss mailing list
>> discuss at openvswitch.org
>> http://openvswitch.org/mailman/listinfo/discuss
> 
