[ovs-dev] [PATCH v4 6/6] system-dpdk: Connect network namespaces via dpdkvhostuser ports

Bala Sankaran bsankara at redhat.com
Fri Sep 14 14:33:22 UTC 2018


----- Original Message -----
> From: "Tiago Lam" <tiago.lam at intel.com>
> To: "Bala Sankaran" <bsankara at redhat.com>
> Cc: dev at openvswitch.org, "Aaron Conole" <aconole at redhat.com>, "Ian Stokes" <ian.stokes at intel.com>, "Ciara Loftus"
> <ciara.loftus at intel.com>, "anatoly burakov" <anatoly.burakov at intel.com>
> Sent: Tuesday, 11 September, 2018 6:56:00 PM
> Subject: Re: [PATCH v4 6/6] system-dpdk: Connect network namespaces via dpdkvhostuser ports
> 
> On 10/09/2018 16:47, Bala Sankaran wrote:
> > Hello Tiago,
> > 
> > Here's an incremental diff of patch 6 which I tested. It appears to pass
> > the tests. I will be submitting the new version (v5) applying these
> > changes:
> >
> > diff --git a/tests/system-dpdk.at b/tests/system-dpdk.at
> > index baa74da..a9247f8 100644
> > --- a/tests/system-dpdk.at
> > +++ b/tests/system-dpdk.at
> > @@ -88,6 +88,10 @@ OVS_DPDK_PRE_CHECK()
> >  AT_SKIP_IF([! which testpmd >/dev/null 2>/dev/null])
> >  OVS_DPDK_START()
> >  
> > +dnl Find number of sockets
> > +AT_CHECK([lscpu], [], [stdout])
> > +AT_CHECK([cat stdout | grep "Socket(s)" | awk '{c=1; while (c++<$(3)) {printf "512,"}; print "512"}' > SOCKET_MEM])
> 
> Hi Bala,
Hello Tiago,

> 
> Thanks for the incremental.
> 
> Any specific reason to use the result of "Socket(s)" here? I'd use the
> same "NUMA node(s)" instead, as the OVS_DPDK_START is doing (in both
> cases the result is going to be passed to the "--socket-mem" option).
> 
> Also, I think it would be preferable if you'd use a different file to
> store the information, instead of overriding the same SOCKET_MEM file
> that OVS_DPDK_START sets.

I have submitted a v5 of the patches, including the changes you specified.
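
For reference, the check now greps for "NUMA node(s)" and writes to its own
file, roughly along these lines (a sketch only - the NUMA_SOCKET_MEM name
here is illustrative, picked so it doesn't override the SOCKET_MEM file that
OVS_DPDK_START sets):

  dnl Find number of NUMA nodes
  AT_CHECK([lscpu], [], [stdout])
  AT_CHECK([cat stdout | grep "NUMA node(s)" | awk '{c=1; while (c++<$(3)) {printf "512,"}; print "512"}' > NUMA_SOCKET_MEM])

with the testpmd invocations reading --socket-mem="$(cat NUMA_SOCKET_MEM)"
accordingly.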

> 
> > +
> >  dnl Add userspace bridge and attach it to OVS
> >  AT_CHECK([ovs-vsctl add-br br10 -- set bridge br10 datapath_type=netdev])
> >  AT_CHECK([ovs-vsctl add-port br10 dpdkvhostuser0 -- set Interface dpdkvhostuser0 \
> > @@ -111,7 +115,7 @@ ADD_VETH(tap1, ns2, br10, "172.31.110.12/24")
> >  
> >  dnl Execute testpmd in background
> >  on_exit "pkill -f -x -9 'tail -f /dev/null'"
> > -tail -f /dev/null | testpmd --socket-mem=512 \
> > +tail -f /dev/null | testpmd --socket-mem="$(cat SOCKET_MEM)" --no-pci \
> >             --vdev="net_virtio_user,path=$OVS_RUNDIR/dpdkvhostuser0" \
> >             --vdev="net_tap0,iface=tap0" --file-prefix page0 \
> >             --single-file-segments -- -a >$OVS_RUNDIR/testpmd-dpdkvhostuser0.log 2>&1 &
> > @@ -183,7 +187,7 @@ ADD_VETH(tap1, ns2, br10, "172.31.110.12/24")
> >  
> >  dnl Execute testpmd in background
> >  on_exit "pkill -f -x -9 'tail -f /dev/null'"
> > -tail -f /dev/null | testpmd --socket-mem=512 \
> > +tail -f /dev/null | testpmd --socket-mem="$(cat SOCKET_MEM)" --no-pci \
> >             --vdev="net_virtio_user,path=$OVS_RUNDIR/dpdkvhostclient0,server=1" \
> >             --vdev="net_tap0,iface=tap0" --file-prefix page0 \
> >             --single-file-segments -- -a >$OVS_RUNDIR/testpmd-dpdkvhostuserclient0.log 2>&1 &
> > 
> > 
> > Before I do so, I had a question for you.
> > 
> > Do you suggest that we keep the socket-mem option at all? It could vary
> > for each system; the latest versions of DPDK allocate socket memory
> > dynamically, while earlier versions needed the socket-mem value
> > hard-coded - which could in turn be a different value on each system
> > the tests run on.
> 
> I had the same question myself, because it would make a lot more sense
> to just drop it (the "--socket-mem" option, that is). However, that
> doesn't seem to work either. The only way I could get it to work on my
> system was to provide an argument for all available nodes (I'm not sure
> this is the intended behavior from DPDK 18.05+ though; it would require
> further investigation). Does it work on your end when running on a
> system with multiple NUMA nodes?
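
(On a two-node system that amounts to --socket-mem=512,512 - one "512" per
node - which is what the lscpu-based check above generates.)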

The tests pass on a single NUMA node system. On the multi-NUMA node system,
however, they do not pass - either due to a problem with my configuration
there or due to the testpmd application itself. Inspecting the
testpmd-dpdkvhostuser0.log file, I find a "Creation of mbuf pool for socket
0 failed: Cannot allocate memory" error.
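
In case it helps narrow that down, the free hugepages per NUMA node can be
inspected with something like (path shown for 2MB pages; adjust for the
configured hugepage size):

  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages

which should show whether socket 0 actually has hugepage memory left for
the mbuf pool.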

Could you test it on a multi-NUMA node system that you have access to and
let me know whether the tests pass at your end?

Thanks,
Bala.
 
> 
> Hope this helps,
> Tiago.
> 

