[ovs-dev] [RFC PATCH] netdev-dpdk: Integrate vHost User PMD
ciara.loftus at intel.com
Fri Jun 1 13:40:31 UTC 2018
> > On Mon, May 21, 2018 at 04:44:13PM +0100, Ciara Loftus wrote:
> > > The vHost PMD brings vHost User port types ('dpdkvhostuser' and
> > > 'dpdkvhostuserclient') under control of DPDK's librte_ether API, like
> > > all other DPDK netdev types ('dpdk' and 'dpdkr'). In doing so, direct
> > > calls to DPDK's librte_vhost library are removed and replaced with
> > > librte_ether API calls, for which most of the infrastructure is
> > > already in place.
> > >
> > > This change has a number of benefits, including:
> > > * Reduced codebase (~200LOC removed)
> > > * More features automatically enabled for vHost ports eg. custom stats
> > > and additional get_status information.
> > > * OVS can be ignorant to changes in the librte_vhost API between DPDK
> > > releases potentially making upgrades easier and the OVS codebase less
> > > susceptible to change.
> > >
> > > The sum of all DPDK port types must not exceed RTE_MAX_ETHPORTS,
> > > which is set and can be modified in the DPDK configuration. Prior to
> > > this patch this only applied to 'dpdk' and 'dpdkr' ports, but it now
> > > applies to all DPDK port types including vHost User.
> > >
> > > Performance (pps) of the different topologies p2p, pvp, pvvp and vv
> > > has been measured to remain within a +/- 5% margin of existing
> > > performance.
> > Thanks for putting this together.
> > I think when this idea was discussed, at least in my head, we would
> > pretty much kill any vhost-specific info and use a standard eth API
> > instead. However, that doesn't look to be the case: we still have the
> > MTU and queue issues, special construct/destruct, send, etc., which
> > IMHO defeats the initial goal.
> I agree, I think that would be the ideal situation but it seems we're not there yet.
> I wonder if that is something that could be changed and fed back to DPDK? If
> we will always have to have the separate implementations is that reflective
> of OVS requirements or a gap in DPDK implementation of vhost PMD?
Hi Ian & Flavio,
Thank you both for your responses. I agree that right now we are not at the ideal scenario with this API, which would probably be closer to having the netdev_dpdk and netdev_dpdk_vhost* classes equivalent. However, four functions have become common (get_carrier, get_stats, get_custom_stats, get_features) and many of the remainder share some element of commonality through helper functions (send, receive, status, etc.). The hope is that going forward we can narrow the gap through both OVS and DPDK changes. I think it would be difficult to narrow that gap if we opt for an "all or nothing" approach now.
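For context, the user-facing configuration of the two vHost port types is unchanged by the switch to the PMD; ports are still added the same way, e.g. (bridge name, port names and socket path here are illustrative, not from the patch):

```shell
# Server-mode vhost-user port (OVS creates the socket):
ovs-vsctl add-port br0 vhost0 -- set Interface vhost0 type=dpdkvhostuser

# Client-mode vhost-user port (OVS connects to an existing socket):
ovs-vsctl add-port br0 vhostclient0 -- set Interface vhostclient0 \
    type=dpdkvhostuserclient \
    options:vhost-server-path=/tmp/vhostclient0.sock
```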
> > Leaving that aside for a moment, I wonder about imposed limitations if we
> > switch to the eth API too. I mean, things that we can do today because OVS
> > is managing vhost that we won't be able after the API switch.
> I've been thinking of this situation also. But one concern is by not using the
> vhost PMD will there be features that are unavailable to vhost in ovs?
> Nothing comes to mind for now, and as long as we continue to access the
> DPDK vhost library that should be ok. However it's something we should
> keep an eye on in the future (for example, we recently had a case of a
> DPDK function that could not be used with DPDK compiled for shared libs).
> It would be interesting to see where the DPDK community is trending
> with future vhost development WRT this.
Feature-wise, it appears that the development of any new DPDK vHost feature includes relevant support for that feature in the PMD; dequeue zero copy and vHost IOMMU are examples of this. So going forward I don't see any issues there.
In my development and testing of this patch I haven't come across any limitations, other than that, out of the box, one is limited to a maximum of 32 vHost ports as defined by RTE_MAX_ETHPORTS. I would be interested to hear if that would be a concern for users.
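For anyone who would hit that ceiling: in a make-based DPDK build of this era the cap is a compile-time option in config/common_base, which can be raised before building DPDK and OVS (the value shown is the default):

```shell
# Compile-time cap on the number of ethdev ports; with this patch it now
# bounds the total of all DPDK port types, vHost User included.
CONFIG_RTE_MAX_ETHPORTS=32
```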
On the other hand, there are many new things we can do with the API switch too, e.g. more information in get_status and custom statistics, and hopefully more going forward. Although I understand that preserving existing functionality is critical.
I understand this is a large patch and it might take some time to review, but I would definitely welcome any further high-level feedback, especially around the topics above, from anybody in the community interested in netdev-dpdk/vHost.
> > Thanks,
> > fbl
> > _______________________________________________
> > dev mailing list
> > dev at openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-dev