[ovs-dev] [PATCH net-next 0/9] optimize openvswitch flow lookup

Tonghao Zhang xiangxia.m.yue at gmail.com
Tue Oct 8 01:41:09 UTC 2019


On Fri, Oct 4, 2019 at 1:09 AM William Tu <u9012063 at gmail.com> wrote:
>
> Hi Tonghao,
>
> Thanks for the patch.
>
> > On 29 Sep 2019, at 19:09, xiangxia.m.yue at gmail.com wrote:
> >
> > > From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
> > >
> > > This patch series optimizes openvswitch flow lookup.
> > >
> > > Patch 1, 2, 4: Port Pravin B Shelar's patches to
> > > upstream Linux with minor changes.
> > >
>
> I thought the idea of adding another cache before the flow-mask
> lookup was rejected before, due to the potential issues with caches,
> e.g. the cache is exploitable, and performance still suffers when
> the cache is full. See David's slides below:
> [1] http://vger.kernel.org/~davem/columbia2012.pdf
>
> Do you have a rough number about how many flows this flow mask
> cache can handle?
Now we can cache 256 flows per CPU, so with 40 CPUs, 256 * 40 flows
will be cached. The number of entries per CPU is set by the
MC_HASH_ENTRIES macro; we can change the value according to the use
case and the CPU L1d cache size.
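
For illustration, a minimal sketch of the per-CPU cache layout (names
follow patch 1 as I read it; treat this as an approximation, not the
verbatim kernel code):

    #include <stdint.h>

    #define MC_HASH_SHIFT   8
    #define MC_HASH_ENTRIES (1u << MC_HASH_SHIFT)  /* 256 entries per CPU */

    struct mask_cache_entry {
            uint32_t skb_hash;    /* skb hash identifying the cached flow */
            uint32_t mask_index;  /* mask that matched this flow last time */
    };

    /* Per-CPU footprint: 256 entries * 8 bytes = 2 KiB, small enough
     * to stay warm in the L1d cache. */

Raising MC_HASH_SHIFT grows the cache (more flows can hit it) at the
cost of more L1d pressure, which is why the value is worth tuning per
deployment.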

> > > Patch 5, 6, 7: Optimize the flow lookup and
> > > simplify the flow hash.
>
> I think this is great.
> I wonder what the performance improvement is when the
> flow-mask cache is full?
I will test that case. I think this feature should work well with RSS
and IRQ affinity.
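
About the cache-full case: patch 4 probes several slots, each indexed
by a different 8-bit segment of skb_hash, and on a miss evicts the
least valuable candidate, so a full cache degrades to the ordinary
mask-array walk instead of thrashing a single slot. A rough,
self-contained user-space model (mask_cache_probe is a name I made up
for this sketch; the kernel open-codes the equivalent in the lookup
path):

    #include <stdint.h>
    #include <stddef.h>

    #define MC_HASH_SHIFT   8
    #define MC_HASH_ENTRIES (1u << MC_HASH_SHIFT)
    #define MC_HASH_SEGS    ((sizeof(uint32_t) * 8) / MC_HASH_SHIFT)

    struct mask_cache_entry {
            uint32_t skb_hash;    /* hash of the cached flow */
            uint32_t mask_index;  /* mask-array index to try first */
    };

    /* Probe up to MC_HASH_SEGS slots, each picked by a different 8-bit
     * segment of skb_hash. Returns the cached mask index, or -1 on a
     * miss; *victim is then the slot the caller should refill after
     * walking the full mask array. */
    static int mask_cache_probe(struct mask_cache_entry *cache,
                                uint32_t skb_hash,
                                struct mask_cache_entry **victim)
    {
            uint32_t hash = skb_hash;
            size_t seg;

            *victim = NULL;
            for (seg = 0; seg < MC_HASH_SEGS; seg++) {
                    struct mask_cache_entry *e;

                    e = &cache[hash & (MC_HASH_ENTRIES - 1)];
                    if (e->skb_hash == skb_hash)
                            return (int)e->mask_index;  /* cache hit */

                    /* Remember the cheapest entry to evict on a miss. */
                    if (!*victim || e->skb_hash < (*victim)->skb_hash)
                            *victim = e;

                    hash >>= MC_HASH_SHIFT;  /* next 8-bit segment */
            }
            return -1;  /* miss: caller walks the full mask array */
    }

With RSS and IRQ affinity, packets of the same flow stay on one CPU,
so that CPU's cache keeps the entry hot even under load.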
> Thanks
> William
>
> > >
> > > Patch 8: a bugfix.
> > >
> > > The performance test is on Intel Xeon E5-2630 v4.
> > > The test topology is shown below:
> > >
> > > +-----------------------------------+
> > > |   +---------------------------+   |
> > > |   | eth0   ovs-switch    eth1 |   | Host0
> > > |   +---------------------------+   |
> > > +-----------------------------------+
> > >       ^                       |
> > >       |                       |
> > >       |                       |
> > >       |                       |
> > >       |                       v
> > > +-----+----+             +----+-----+
> > > | netperf  | Host1       | netserver| Host2
> > > +----------+             +----------+
> > >
> > > We use netperf to send 64B frames, and insert 255+ flow masks:
> > > $ ovs-dpctl add-flow ovs-switch
> > > "in_port(1),eth(dst=00:01:00:00:00:00/ff:ff:ff:ff:ff:01),eth_type(0x0800),ipv4(frag=no)"
> > > 2
> > > ...
> > > $ ovs-dpctl add-flow ovs-switch
> > > "in_port(1),eth(dst=00:ff:00:00:00:00/ff:ff:ff:ff:ff:ff),eth_type(0x0800),ipv4(frag=no)"
> > > 2
> > > $ netperf -t UDP_STREAM -H 2.2.2.200 -l 40 -- -m 18
> > >
> > > * Without this series: throughput 8.28 Mbps
> > > * With this series: throughput 46.05 Mbps
> > >
> > > Tonghao Zhang (9):
> > >   net: openvswitch: add flow-mask cache for performance
> > >   net: openvswitch: convert mask list in mask array
> > >   net: openvswitch: shrink the mask array if necessary
> > >   net: openvswitch: optimize flow mask cache hash collision
> > >   net: openvswitch: optimize flow-mask looking up
> > >   net: openvswitch: simplify the flow_hash
> > >   net: openvswitch: add likely in flow_lookup
> > >   net: openvswitch: fix possible memleak on destroy flow table
> > >   net: openvswitch: simplify the ovs_dp_cmd_new
> > >
> > >  net/openvswitch/datapath.c   |  63 +++++----
> > >  net/openvswitch/flow.h       |   1 -
> > >  net/openvswitch/flow_table.c | 318 +++++++++++++++++++++++++++++++++++++------
> > >  net/openvswitch/flow_table.h |  19 ++-
> > >  4 files changed, 330 insertions(+), 71 deletions(-)
> > >
> > > --
> > > 1.8.3.1

