[ovs-dev] [PATCH] netdev-dpdk: Add support for DPDK 16.11

Loftus, Ciara ciara.loftus at intel.com
Thu Nov 24 16:11:17 UTC 2016


> 
> 
> On 11/24/2016 04:34 PM, Loftus, Ciara wrote:
> >>
> >>
> >> On 11/24/2016 03:59 PM, Loftus, Ciara wrote:
> >>>>
> >>>> On 11/24/2016 03:20 PM, Ciara Loftus wrote:
> >>>>> This commit announces support for DPDK 16.11. Compatibility with
> >>>>> DPDK v16.07 is not broken, since only minor code changes are
> >>>>> needed for the upgrade.
> >>>>>
> >>>>> Signed-off-by: Ciara Loftus <ciara.loftus at intel.com>
> >>>>> ---
> >>>>>  .travis/linux-build.sh    |  2 +-
> >>>>>  INSTALL.DPDK-ADVANCED.rst |  4 ++--
> >>>>>  INSTALL.DPDK.rst          | 18 +++++++++---------
> >>>>>  NEWS                      |  1 +
> >>>>>  lib/netdev-dpdk.c         |  3 ++-
> >>>>>  5 files changed, 15 insertions(+), 13 deletions(-)
> >>>> <snip />
> >>>>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> >>>>> index de78ddd..7564ad7 100644
> >>>>> --- a/lib/netdev-dpdk.c
> >>>>> +++ b/lib/netdev-dpdk.c
> >>>>> @@ -2593,7 +2593,8 @@ netdev_dpdk_vhost_class_init(void)
> >>>>>          rte_vhost_driver_callback_register(&virtio_net_device_ops);
> >>>>>          rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_HOST_TSO4
> >>>>>                                    | 1ULL << VIRTIO_NET_F_HOST_TSO6
> >>>>> -                                  | 1ULL << VIRTIO_NET_F_CSUM);
> >>>>> +                                  | 1ULL << VIRTIO_NET_F_CSUM
> >>>>> +                                  | 1ULL << VIRTIO_RING_F_INDIRECT_DESC);
> >>>>>          ovs_thread_create("vhost_thread", start_vhost_loop, NULL);
> >>>>>
> >>>>>          ovsthread_once_done(&once);
> >>>>>
> >>>> Any reason you disable indirect descs?
> >>>> Did you measure performance degradation with it?
> >>>
> >>> I measured a slight decrease ~-6% for the use case I tested.
> >> Ok, could you provide more details about this use-case?
> >> I'm interested to know which cases (don't) benefit of this feature.
> >
> > Single flow 64B packets unidirectional loopback test (dpdk -> vhost,
> > vhost -> dpdk) with testpmd forwarding between vhost ports in the VM.
> > Mergeable buffers disabled.
> Ok, thanks.
> Is testpmd in IO mode?

Correct, default mode.
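For reference, a testpmd invocation along these lines runs the default io forwarding mode inside the guest; the core list, channel count, and memory values below are illustrative placeholders, not taken from this thread:

```shell
# Hypothetical in-VM testpmd run: io mode (the default) simply
# forwards packets between the two virtio ports unmodified.
# -l / -n / --socket-mem values are placeholders.
testpmd -l 0-2 -n 4 --socket-mem 1024 -- \
    --burst=64 -i --forward-mode=io
# then, at the testpmd> prompt:
#   start
```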

> 
> Anyway, this is good to see that DPDK version upgrade has no blockers.
> With indirect disabled, how performance compare to v16.07?

I'm taking rough measurements, but 16.07 performs slightly better for this case: ~+2% with indirect descriptors disabled, ~+8% with them enabled.

Thanks,
Ciara

> 
> >
> >>
> >>> Either way, if it is to be enabled it should be enabled in a separate patch.
> >> Well, I'm not sure.
> >> I think this is not a good way to disable features, because once
> >> OVS is packaged, we cannot enable it anymore.
> >
> > My reason for not enabling it in this patch (other than the
> > degradation) is that the purpose of this patch is the DPDK upgrade.
> > We could have a second patch, "enable indirect descriptors for
> > vHost" for example, as suggested by Ilya previously:
> > https://mail.openvswitch.org/pipermail/ovs-dev/2016-October/324586.html
> Ok.
> >
> >>
> >> It should be the role of the management tool to disable it by default,
> >> by configuring the Qemu params properly if needed (indirect_desc=off).
> >> Doing this, it could still be enabled without rebuilding the package if
> >> it benefits some use-cases.
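To illustrate the point above, a management tool could negotiate the feature per device on the QEMU command line rather than having OVS disable it at build time. The fragment below is a hedged sketch: the socket path, chardev/netdev ids, and surrounding options are placeholder assumptions, only the `indirect_desc=off` property is the mechanism being discussed:

```shell
# Hypothetical vhost-user device setup; ids and paths are placeholders.
# indirect_desc=off keeps VIRTIO_RING_F_INDIRECT_DESC from being
# negotiated for this device, without rebuilding the OVS package.
qemu-system-x86_64 ... \
    -chardev socket,id=char0,path=/tmp/vhost-user0 \
    -netdev type=vhost-user,id=net0,chardev=char0 \
    -device virtio-net-pci,netdev=net0,indirect_desc=off
```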
> >>
> >> I would think this is the same for the other features that are disabled,
> >> as soon as they don't break functionality. If other features break
> >> functionality, it should be reported so that it gets fixed.
> >>
> >> Moreover, we are looking at cross-version migration for vhost
> >> currently, and rte_vhost_feature_disable is problematic, because it
> >> prevents us from reliably exporting the virtio features supported
> >> by a given DPDK version at build time.
> >
> > I see your reasoning too. Would like to hear input from other folks in the
> community before I respin.
> Yes, sure.
> And cross-version migration is currently being discussed.
> 
> Thanks,
> Maxime
> >
> > Thanks,
> > Ciara
> >
> >>
> >> Cheers,
> >> Maxime

