[ovs-dev] [PATCH 2/2] dpif-netdev: Introduce netdev array cache

Ferriter, Cian cian.ferriter at intel.com
Thu Jul 8 16:43:30 UTC 2021


Hi Gaetan, Eli and all,

Thanks for the patch and the info on how it affects performance in your case. I just wanted to post the performance numbers we are seeing.

I've posted the numbers inline. Please note that I'll be away on leave until Tuesday.
Thanks,
Cian

> -----Original Message-----
> From: Gaëtan Rivet <grive at u256.net>
> Sent: Wednesday 7 July 2021 17:36
> To: Eli Britstein <elibr at nvidia.com>; <dev at openvswitch.org> <dev at openvswitch.org>; Van Haaren, Harry
> <harry.van.haaren at intel.com>; Ferriter, Cian <cian.ferriter at intel.com>
> Cc: Majd Dibbiny <majd at nvidia.com>; Ilya Maximets <i.maximets at ovn.org>
> Subject: Re: [ovs-dev] [PATCH 2/2] dpif-netdev: Introduce netdev array cache
> 
> On Wed, Jul 7, 2021, at 17:05, Eli Britstein wrote:
> > Port numbers are usually small. Maintain an array of netdev handles
> > indexed by port numbers. This accelerates looking them up for
> > netdev_hw_miss_packet_recover().
> >
> > Reported-by: Cian Ferriter <cian.ferriter at intel.com>
> > Signed-off-by: Eli Britstein <elibr at nvidia.com>
> > Reviewed-by: Gaetan Rivet <gaetanr at nvidia.com>
> > ---

<snipped patch contents>
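For anyone skimming the thread, my rough reading of the idea (a sketch in my own words, not the patch itself; the array bound and the per-PMD cache member below are assumptions) is a small per-PMD flat array indexed by the odp port number, so dp_netdev_hw_flow() can fetch the netdev handle needed by netdev_hw_miss_packet_recover() without a per-packet hash lookup:

/* Illustrative sketch only -- not the actual patch contents. */
#define NETDEV_CACHE_SIZE 1024   /* assumed bound on "small" port numbers */

static inline struct netdev *
pmd_netdev_cache_lookup(const struct dp_netdev_pmd_thread *pmd,
                        odp_port_t port_no)
{
    uint32_t idx = odp_to_u32(port_no);

    if (OVS_LIKELY(idx < NETDEV_CACHE_SIZE)) {
        /* Fast path: one array index per packet instead of a port
         * hash-map lookup. */
        return pmd->netdev_cache[idx];
    }
    return NULL;   /* Out-of-range ports fall back to the regular lookup. */
}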

> 
> Hello,
> 
> I tested the performance impact of this patch with a partial offload setup.
> As reported by pmd-stats-show, the average cycles per packet are:
> 
> Before vxlan-decap: 525 c/p
> After vxlan-decap: 542 c/p
> After this fix: 530 c/p
> 
> Without those fixes, vxlan-decap has a 3.2% negative impact on cycles;
> with the fixes, the impact is reduced to 0.95%.
> 
> As I had to force partial offloads for our hardware, it would be good to
> have outside confirmation on a proper setup.
> 
> Kind regards,
> --
> Gaetan Rivet

I'm showing the performance relative to what we measured on OVS master directly before the VXLAN HWOL changes went in. All of the results below use the scalar DPIF and partial HWOL.

Link to "Fixup patches": http://patchwork.ozlabs.org/project/openvswitch/list/?series=252356

Master before VXLAN HWOL changes (f0e4a73)
1.000x

Latest master after VXLAN HWOL changes (b780911)
0.918x (-8.2%)

After fixup patches on OVS ML are applied (with ALLOW_EXPERIMENTAL_API=off)
0.973x (-2.7%)

After fixup patches on OVS ML are applied and with the ALLOW_EXPERIMENTAL_API '#ifdef's removed (see diff below)
0.938x (-6.2%)

I gathered the last set of results by applying the diff below. I did this because I assume the plan is to remove the ALLOW_EXPERIMENTAL_API '#ifdef's at some point?
Diff:
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index accb23a1a..0e29c609f 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -7132,7 +7132,6 @@ dp_netdev_hw_flow(const struct dp_netdev_pmd_thread *pmd,
     struct netdev *netdev OVS_UNUSED;
     uint32_t mark;

-#ifdef ALLOW_EXPERIMENTAL_API /* Packet restoration API required. */
     /* Restore the packet if HW processing was terminated before completion. */
     netdev = pmd_netdev_cache_lookup(pmd, port_no);
     if (OVS_LIKELY(netdev)) {
@@ -7143,7 +7142,6 @@ dp_netdev_hw_flow(const struct dp_netdev_pmd_thread *pmd,
             return -1;
         }
     }
-#endif

     /* If no mark, no flow to find. */
     if (!dp_packet_has_flow_mark(packet, &mark)) {

