[ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9
Ben Pfaff
blp at ovn.org
Fri May 4 06:29:04 UTC 2018
It's mostly for historical reasons.
We do try to document in ovs-vswitchd(8) that the user should not manage
datapaths themselves:
ovs-vswitchd does all the necessary management of Open vSwitch
datapaths itself. Thus, external tools, such as ovs-dpctl(8), are
not needed for managing datapaths in conjunction with
ovs-vswitchd, and their use to modify datapaths when ovs-vswitchd
is running can interfere with its operation. (ovs-dpctl may
still be useful for diagnostics.)
I guess that the wording should be updated to reflect the "ovs-appctl"
interface too.
I sent a patch to improve the docs here:
https://patchwork.ozlabs.org/patch/908532/
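For reference, the supported way to get a userspace (dpdk-netdev) datapath is to let ovs-vswitchd create it by adding a bridge with datapath_type=netdev, rather than calling dpctl/add-dp by hand. A rough sketch (bridge, port, and PCI address are placeholders, not taken from this thread):

```shell
# Let ovs-vswitchd manage the datapath: create a userspace bridge
# instead of adding a datapath directly with "ovs-appctl dpctl/add-dp".
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

# Attach a DPDK port to that bridge (the PCI address is illustrative).
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:01:00.0

# dpctl commands remain useful for read-only diagnostics.
ovs-appctl dpctl/show
```

ovs-vswitchd creates and owns the netdev datapath behind this bridge, which is why manually added datapaths cannot be attached to bridges.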
On Thu, May 03, 2018 at 06:44:43PM +0500, alp.arslan at xflowresearch.com wrote:
> If "ovs-vswitchd" manages the data paths, why does it have a utility that
> lets me create more of them? And when I create them, I cannot use them. I am
> stuck in a loop :).
>
> -----Original Message-----
> From: Ben Pfaff [mailto:blp at ovn.org]
> Sent: Thursday, May 3, 2018 4:41 PM
> To: alp.arslan at xflowresearch.com
> Cc: discuss at openvswitch.org
> Subject: Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9
>
> On Wed, May 02, 2018 at 10:02:04PM +0500, alp.arslan at xflowresearch.com
> wrote:
> > I am trying to create multiple dpdk-netdev based data paths with OVS
> > 2.9 and DPDK 16.11 running on CentOS 7.4. I am able to create multiple
> > data paths using "ovs-appctl dpctl/add-dp netdev@netdev1", and I can
> > see the new data path with "ovs-appctl dpctl/show". However, I cannot
> > add any interfaces (dpdk or otherwise), and I cannot set this data
> > path as the datapath_type of any bridge.
>
> That's not useful or a good idea. ovs-vswitchd manages datapaths itself.
> Adding and removing them yourself will not help.
>
> > Just a recap of why I am trying to do this: I am working with a large
> > number of OVS OpenFlow rules (around 0.5 million) matching layer 3 and
> > layer 4 fields. The incoming traffic is more than 40G (4 x 10G Intel
> > X520s) and has many parallel flows (over a million IPs). Under this
> > load, OVS performance drops and each port forwards only around 250
> > Mb/s. I am using multiple RX queues (4-6); with a single RX queue it
> > drops to 70 Mb/s. Now, if I shut down three of the 10G interfaces, an
> > interesting thing happens: OVS starts forwarding over 7 Gb/s on the
> > single remaining interface. That got me thinking that maybe the reason
> > for the low performance is 40G of traffic hitting a single bridge's
> > flow tables, so how about creating multiple bridges with multiple flow
> > tables? With that setup the situation remained the same, and now the
> > only common thing between the 4 interfaces is the data path. They are
> > not sharing anything else: they are polled by dedicated vCPUs, and
> > their rules are in different tables.
> >
> > Can anyone explain this bizarre scenario: why is OVS able to forward
> > more traffic over a single interface polled by 6 vCPUs than over 4
> > interfaces polled by 24 vCPUs?
> >
> > Also, is there a way to create multiple data paths and remove this
> > dependency?
>
> You can create multiple bridges with "ovs-vsctl add-br". OVS doesn't use
> multiple datapaths.
>
> Maybe someone who understands the DPDK port better can suggest some reason
> for the performance characteristics that you see.
>
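The RX-queue and polling-vCPU setup discussed in the quoted message maps onto OVS-DPDK configuration knobs along these lines; the mask and queue count below are illustrative values, not the poster's actual configuration:

```shell
# Pin PMD (poll mode driver) threads to a set of cores; each PMD
# thread busy-polls the DPDK ports assigned to it. The mask is an
# example value covering cores 0-5.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3f

# Give a DPDK port several RX queues so multiple PMD threads can
# poll it in parallel (the poster reported using 4-6 queues per port).
ovs-vsctl set Interface dpdk0 options:n_rxq=4
```

With n_rxq > 1, the NIC spreads incoming flows across queues via RSS, so parallelism depends on the traffic containing many distinct flows, as it does here.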