[ovs-discuss] OVS performance with Openstack Neutron

Wang, Baoyuan baoyuan.wang at tekcomms.com
Thu Dec 12 15:32:22 UTC 2013


I believe I have figured out the answers to my questions. Édouard's test replaced the veth pair between the Linux bridge and the OVS bridge with an OVS internal port.
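
Roughly, the two wirings look like this (a sketch only; the qbr-xxx/qvb-xxx/qvo-xxx names are illustrative placeholders, not the exact ones Neutron generates):

Default veth-based wiring between the Linux bridge and br-int:
# ip link add qvb-xxx type veth peer name qvo-xxx
# brctl addif qbr-xxx qvb-xxx
# ovs-vsctl add-port br-int qvo-xxx

Internal-port wiring (no veth pair; the OVS internal port is attached directly to the Linux bridge):
# ovs-vsctl add-port br-int qvo-xxx -- set Interface qvo-xxx type=internal
# brctl addif qbr-xxx qvo-xxx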

-----Original Message-----
From: Wang, Baoyuan 
Sent: Wednesday, December 11, 2013 9:45 AM
To: 'Édouard Thuleau'; Justin Pettit
Cc: discuss at openvswitch.org discuss; gongysh at unitedstack.com
Subject: RE: [ovs-discuss] OVS performance with Openstack Neutron

Thank you all for the valuable information.  Please educate me on some details.  This OpenStack patch uses type=internal when adding the port. Is there any relationship between type=internal and type=patch from the OVS point of view?  Is "a single datapath model" in OVS the key to this OpenStack enhancement?  Is a specific OVS version required for this?
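
For context, here is how I currently understand each type is configured (the port and bridge names below are only illustrative):

An internal port shows up as a regular network device on the host:
# ovs-vsctl add-port br-int int0 -- set Interface int0 type=internal

Patch ports come as a linked pair connecting two OVS bridges, with no host network device created:
# ovs-vsctl add-port br-int patch-tun -- set Interface patch-tun type=patch options:peer=patch-int
# ovs-vsctl add-port br-tun patch-int -- set Interface patch-int type=patch options:peer=patch-tun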

Regards,
Bao

-----Original Message-----
From: Édouard Thuleau [mailto:thuleau at gmail.com] 
Sent: Wednesday, December 11, 2013 5:15 AM
To: Justin Pettit
Cc: Wang, Baoyuan; discuss at openvswitch.org discuss; gongysh at unitedstack.com
Subject: Re: [ovs-discuss] OVS performance with Openstack Neutron

I ran some tests with the OpenStack Havana release, using the ML2 plugin and the OVS agent.
Config on the compute node:
# ovs-vsctl -V
ovs-vsctl (Open vSwitch) 2.0.0
# uname -r
3.2.0-41-generic

I started two VMs on the same compute node, each with an interface on the same network segment.
By default, a veth pair is used between the Linux bridge (qbr) and the OVS bridge (br-int) for each VM interface.
I ran a simple netperf TCP test and got a throughput of 2 Gb/s.

I repeated the test after replacing the veth interfaces with an OVS internal port.
The throughput increased to 13 Gb/s.
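
The test itself was just netperf between the two VMs, roughly as follows (<first-VM-IP> is a placeholder for the first VM's address):
# netserver                                  (in the first VM)
# netperf -H <first-VM-IP> -t TCP_STREAM     (in the second VM)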

A patch [1] has already been proposed to use an internal port instead of a veth pair.

[1] https://review.openstack.org/#/c/46911/

Édouard.

On Tue, Dec 10, 2013 at 10:15 PM, Justin Pettit <jpettit at nicira.com> wrote:
> On Dec 10, 2013, at 12:17 PM, Wang, Baoyuan <baoyuan.wang at tekcomms.com> wrote:
>
>> Thank you for your response.  I could not find much information about OVS patch ports via a Google search; most of the results only discuss how to configure them.  Do you have any information on the design/implementation other than reading the code?
>
> There's not a lot to describe on the implementation.  Before 1.10, if you created two bridges in OVS, two datapaths would be created in the kernel.  The patch port would create a port that you could send traffic to in one datapath and it would pop into the other datapath for processing.  The implementation was very simple--it would just turn the send on one end into a receive on the other.
>
> In 1.10, we went to a single datapath model where regardless of how many bridges were created, they would share a single datapath in the kernel.  With this model, we were able to optimize patch ports by having ovs-vswitchd figure out what would happen in both bridges, and then push down a single flow into the datapath.
>
>> I do have a copy of the OVS code (v1.9.3).
>
> As I mentioned before, the patch port optimization was introduced in 1.10.
>
>> It seems to me that OVS still has to work through multiple flow tables with patch ports.  It might save one loop compared with a veth pair; that is, a patch port directly uses the peer to work on the peer's flow table instead of going through the main processing loop. Please correct me, as I am not familiar with the detailed OVS design/implementation.  My code research has been spot checks; for example, I only looked at files like vport-patch.c and vport.c.  For the telecom industry, that extra processing on every compute node for every packet adds up quickly.
>
> The optimization saves an extra lookup in the kernel datapath and an extra trip to userspace to figure out what happens in the second bridge.
>
> --Justin
>
>
> _______________________________________________
> discuss mailing list
> discuss at openvswitch.org
> http://openvswitch.org/mailman/listinfo/discuss
