[ovs-dev] [PATCH v2 0/2] vHost Dequeue Zero Copy

Stokes, Ian ian.stokes at intel.com
Sun Oct 15 13:55:56 UTC 2017


> >
> > Hi Ciara,
> >
> > These improvements look very good. I would expect even bigger
> > improvements for big packets, as long as we don't hit some link
> > bandwidth limitations. But at least the vhost-vhost cases should
> > benefit.
> >
> > Have you also tested larger packet sizes?
> 
> Hi Jan,
> 
> Thanks for the feedback. Here are some more datapoints for the VM2VM
> topology:
> 
> 256B:	4.69 vs 5.42 Mpps (+~16%)
> 512B:	4.04 vs 4.90 Mpps (+~21%)
> 1518B:	2.51 vs 3.05 Mpps (+~22%)

Hi Ciara,

Thanks for the patchset. In testing I'm seeing similar numbers for both
the vm to vm use case and the vm to phy case, although for the vm to phy
case the results can change based on the descriptors. More comments with
regards to that in the patch itself.
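
In case it's useful to others looking at the vm to phy case, the
descriptor tuning I was varying was along the lines of the below
(interface name illustrative; the exact value will depend on the NIC):

    $ ovs-vsctl set Interface dpdk0 options:n_txq_desc=1024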

Ian

> 
> As you guessed, I hit the bandwidth limit when using a NIC & larger
> packet sizes, so can't show any benefit there.
> 
> >
> > I plan to review your patches.
> 
> Much appreciated.
> 
> Thanks,
> Ciara
> 
> >
> > Thanks, Jan
> >
> > > -----Original Message-----
> > > From: ovs-dev-bounces at openvswitch.org
> > > [mailto:ovs-dev-bounces at openvswitch.org] On Behalf Of Ciara Loftus
> > > Sent: Wednesday, 11 October, 2017 16:22
> > > To: dev at openvswitch.org
> > > Subject: [ovs-dev] [PATCH v2 0/2] vHost Dequeue Zero Copy
> > >
> > > This patch enables optional dequeue zero copy for vHost ports.
> > > This gives a performance increase for some use cases. I'm using the
> > > cover letter to report my results.
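> > >
> > > For anyone wanting to try it: the feature is disabled by default and
> > > is enabled per-port, along the lines of the below (port name
> > > illustrative; the exact option spelling is the one defined in the
> > > vswitch.xml/docs changes in this series):
> > >
> > >   $ ovs-vsctl set Interface dpdkvhostuser0 options:dq-zero-copy=true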
> > >
> > > vhost (vm1) -> vhost (vm2)
> > > Using testpmd to source (txonly) in vm1 and sink (rxonly) in vm2.
> > > 4C1Q 64B packets: 5.05Mpps -> 5.52Mpps = 9.2% improvement
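> > >
> > > Concretely, the testpmd setup in each VM was along these lines:
> > >
> > >   # in VM1 (source)
> > >   testpmd> set fwd txonly
> > >   testpmd> start
> > >
> > >   # in VM2 (sink)
> > >   testpmd> set fwd rxonly
> > >   testpmd> start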
> > >
> > > vhost (virtio_user backend 1) -> vhost (virtio_user backend 2)
> > > Using 2 instances of testpmd, each with a virtio_user backend
> > > connected to one of the two vhost ports created in OVS.
> > > 2C1Q 1518B packets: 2.59Mpps -> 3.09Mpps = 19.3% improvement
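> > >
> > > Each testpmd instance was launched roughly as below (core list,
> > > memory and socket path all illustrative):
> > >
> > >   $ testpmd -l 0-2 --socket-mem 1024 --no-pci \
> > >       --vdev=net_virtio_user0,path=/usr/local/var/run/openvswitch/vhost0 \
> > >       -- -i --rxq=1 --txq=1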
> > >
> > > vhost -> phy
> > > Using testpmd to source (txonly) and sink in the NIC.
> > > 1C1Q 64B packets: 6.81Mpps -> 7.76Mpps = 13.9% improvement
> > >
> > > phy -> vhost -> phy
> > > No improvement measured
> > >
> > > This patch is dependent on the series below which fixes issues with
> > > mempool management:
> > > https://patchwork.ozlabs.org/patch/822590/
> > >
> > > v2 changes:
> > > * Mention feature is disabled by default in the documentation
> > > * Add PHY-VM-PHY with vHost dequeue zero copy documentation guide
> > > * Line wrap link to DPDK documentation
> > > * Rename zc_enabled to dq_zc_enabled for future-proofing
> > > * Mention feature is available for both vHost port types in the docs
> > > * In practice, rebooting the VM doesn't always enable the feature if
> > > enabled post-boot, so update the documentation to suggest a shutdown
> > > rather than a reboot. The reason this doesn't work is probably
> > > because the total downtime during a reboot isn't enough to allow the
> > > vhost device to unregister & re-register with the new feature, so
> > > when the VM starts again it doesn't pick up the new device, as it
> > > hasn't been re-registered in time.
> > >
> > > Ciara Loftus (2):
> > >   netdev-dpdk: Helper function for vHost device setup
> > >   netdev-dpdk: Enable optional dequeue zero copy for vHost User
> > >
> > >  Documentation/howto/dpdk.rst             |  29 +++++
> > >  Documentation/topics/dpdk/vhost-user.rst |  35 ++++++
> > >  NEWS                                     |   3 +
> > >  lib/netdev-dpdk.c                        | 202 +++++++++++++++++++++----------
> > >  vswitchd/vswitch.xml                     |  11 ++
> > >  5 files changed, 218 insertions(+), 62 deletions(-)
> > >
> > > --
> > > 2.7.5
> > >

