[ovs-discuss] OVS-DPDK - Very poor performance when connected to namespace/container

Mooney, Sean K sean.k.mooney at intel.com
Thu Jun 15 11:32:49 UTC 2017



> -----Original Message-----
> From: Avi Cohen (A) [mailto:avi.cohen at huawei.com]
> Sent: Thursday, June 15, 2017 9:50 AM
> To: Mooney, Sean K <sean.k.mooney at intel.com>; dpdk-ovs at lists.01.org;
> users at dpdk.org; ovs-discuss at openvswitch.org
> Subject: RE: OVS-DPDK - Very poor performance when connected to
> namespace/container
> 
> 
> 
> > -----Original Message-----
> > From: Mooney, Sean K [mailto:sean.k.mooney at intel.com]
> > Sent: Thursday, 15 June, 2017 11:24 AM
> > To: Avi Cohen (A); dpdk-ovs at lists.01.org; users at dpdk.org; ovs-
> > discuss at openvswitch.org
> > Cc: Mooney, Sean K
> > Subject: RE: OVS-DPDK - Very poor performance when connected to
> > namespace/container
> >
> >
> >
> > > -----Original Message-----
> > > From: Dpdk-ovs [mailto:dpdk-ovs-bounces at lists.01.org] On Behalf Of
> > > Avi Cohen (A)
> > > Sent: Thursday, June 15, 2017 8:14 AM
> > > To: dpdk-ovs at lists.01.org; users at dpdk.org;
> > > ovs-discuss at openvswitch.org
> > > Subject: [Dpdk-ovs] OVS-DPDK - Very poor performance when connected
> > > to namespace/container
> > >
> > > Hello All,
> > > I have OVS-DPDK connected to a namespace via veth pair device.
> > >
> > > I've got very poor performance compared to normal OVS (i.e. no
> > > DPDK). For example, TCP jumbo-packet throughput: normal OVS ~10 Gbps,
> > > OVS-DPDK ~1.7 Gbps.
> > >
> > > This can be explained as follows:
> > > veth is implemented in the kernel - with OVS-DPDK, data is transferred
> > > from the veth to user space, while in normal OVS we save this transfer
> > [Mooney, Sean K] That is part of the reason. The other reason this is
> > slow, and the main limiter to scaling when adding veth pairs or OVS
> > internal ports to OVS with DPDK, is that these Linux kernel ports are
> > not processed by the DPDK PMDs. They are serviced by the ovs-vswitchd
> > main thread via a fallback to the non-DPDK-accelerated netdev
> > implementation.
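> > A quick way to confirm this (a sketch; assumes OVS 2.6+ with the
> > userspace datapath) is to check which rx queues are polled by the PMD
> > threads - kernel-backed ports (veth/internal) will not be listed:
> >
> > # list rx queues serviced by dpdk pmd threads; veth/internal ports are absent
> > ovs-appctl dpif-netdev/pmd-rxq-show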
> > >
> > > Is there any other device to connect to a namespace? Something like
> > > vhost-user? I understand that vhost-user cannot be used for a
> > > namespace
> > [Mooney, Sean K] I have been doing some experiments in this regard.
> > You should be able to use the tap, pcap, or af_packet PMD to add a
> > vdev, which will improve performance. I have seen some strange issues
> > with the tap PMD that cause packets to be dropped by the kernel on tx
> > on some ports but not others, so there may be issues with that driver.
> >
> > A previous experiment with libpcap seemed to work well with OVS 2.5,
> > but I have not tried it with OVS 2.7/master since the introduction of
> > generic vdev support at runtime. Previously, vdevs had to be allocated
> > using the DPDK args.
> >
> > I would try following the af_packet example here:
> > https://github.com/openvswitch/ovs/blob/b132189d8456f38f3ee139f126d680901a9ee9a8/Documentation/howto/dpdk.rst#vdev-support
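> >
> > For reference, before runtime vdev support you would allocate the
> > vdev via the global DPDK EAL args instead (a sketch, assuming OVS 2.6
> > with the eth_pcap driver compiled into DPDK):
> >
> > # pass the vdev to ovs-vswitchd at dpdk init time via extra eal args
> > ovs-vsctl set Open_vSwitch . other_config:dpdk-extra="--vdev=eth_pcap0,iface=tap0"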
> >
> [Avi Cohen (A)]
> Thank you, Sean.
> I already tried to connect the namespace with a tap device (see 1 & 2
> below) and got the worst performance: for some reason the packet is cut
> down to the default MTU inside OVS-DPDK, which transmits the packet to
> its peer, although the MTU of all interfaces was set to 9000.
> 
>  1. ovs-vsctl add-port $BRIDGE tap1 -- set Interface tap1 type=internal
> 
>  2. ip link set tap1 netns ns1 // attach it to namespace
[Mooney, Sean K] This is not using the DPDK tap PMD. Internal ports and
veth ports, if added to OVS, will not be accelerated by DPDK unless you
use a vdev to attach them.
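On the MTU point: setting the MTU with "ip link" only affects the kernel
side; the OVS interface also needs a matching request (a sketch, assuming
OVS 2.6+ where per-interface mtu_request is supported):

# ask ovs to configure a 9000-byte mtu on its side of the port
ovs-vsctl set Interface tap1 mtu_request=9000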
> 
> I'm looking at your link about creating a virtual PMD with vdev
> support. I see the creation of a virtual PMD device there, but I'm not
> sure how this is connected to the namespace. What device should I
> assign to the namespace?
[Mooney, Sean K] 
You would use it as follows

ip tuntap add dev tap1 mode tap

ovs-vsctl add-port br0 tap1 -- set Interface tap1 type=dpdk \
options:dpdk-devargs=eth_af_packet0,iface=tap1

ip link set tap1 netns ns1

ip netns exec ns1 ifconfig tap1 192.168.1.1/24 up
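
To sanity-check the jumbo path end to end (the peer address 192.168.1.2
is hypothetical - substitute whatever answers on the other side of br0):

# 8972 payload = 9000 mtu - 20 ip header - 8 icmp header; -M do forbids fragmentation
# 192.168.1.2 is a hypothetical peer on the same subnet
ip netns exec ns1 ping -M do -s 8972 -c 3 192.168.1.2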

In general, though, if you are using OVS-DPDK you should avoid using
network namespaces and the kernel datapath where possible, but the above
should improve your performance. One caveat: the number of vdev + physical
interfaces is limited by how DPDK is compiled - by default to 32 devices,
but it can be increased to 256 if required.
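
That limit is the build-time constant RTE_MAX_ETHPORTS. A minimal sketch
of raising it with DPDK's make-based build (paths assume you are in the
DPDK source tree):

# raise the ethdev port limit from the default 32 before rebuilding dpdk
sed -i 's/CONFIG_RTE_MAX_ETHPORTS=32/CONFIG_RTE_MAX_ETHPORTS=256/' config/common_base
make install T=x86_64-native-linuxapp-gcc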

> 
> Best Regards
> avi
> 
> > If you happen to be investigating this for use with OpenStack routers,
> > we are currently working on a way to remove the use of namespaces
> > entirely for DVR when using the default neutron agent, and SDN
> > controllers such as OVN already provide that functionality.
> > >
> > > Best Regards
> > > avi
> > > _______________________________________________
> > > Dpdk-ovs mailing list
> > > Dpdk-ovs at lists.01.org
> > > https://lists.01.org/mailman/listinfo/dpdk-ovs

