[ovs-dev] [PATCH RFC] dpif-netdev: ACL+dpcls for Wildcard matching.

Fischetti, Antonio antonio.fischetti at intel.com
Fri May 20 11:07:18 UTC 2016


One question below about data on real use-cases, 
thanks.

> -----Original Message-----
> From: Jarno Rajahalme [mailto:jarno at ovn.org]
> Sent: Thursday, May 19, 2016 7:51 PM
> To: Fischetti, Antonio <antonio.fischetti at intel.com>
> Cc: Jan Scheurich <jan.scheurich at ericsson.com>; Ben Pfaff
> <blp at ovn.org>; dev at openvswitch.org
> Subject: Re: [ovs-dev] [PATCH RFC] dpif-netdev: ACL+dpcls for
> Wildcard matching.
> 
> 
> > On May 19, 2016, at 9:15 AM, Fischetti, Antonio
> > <antonio.fischetti at intel.com> wrote:
> >
> > Hi Jan, thanks for your feedback, some replies below.
> >
> > Regards,
> > Antonio
> >
> >> -----Original Message-----
> >> From: Jan Scheurich [mailto:jan.scheurich at ericsson.com]
> >> Sent: Thursday, May 19, 2016 3:55 PM
> >> To: Fischetti, Antonio <antonio.fischetti at intel.com>; Ben Pfaff
> >> <blp at ovn.org>
> >> Cc: dev at openvswitch.org
> >> Subject: RE: [ovs-dev] [PATCH RFC] dpif-netdev: ACL+dpcls for
> >> Wildcard matching.
> >>
> >> Hi,
> >>
> >>> The current ACL implementation uses rules of the form {ProtocolType,
> >>> IPsrc, IPdest, PortSrc, PortDest}, so I'm limited to playing with
> >>> just these 5 fields.
> >>>
> >>
> >> From experience with real-world OVS deployments using bonded
> >> interfaces and overlay tunnels (e.g. VXLAN), I would say that the
> >> vast majority of dpif megaflows match on packet metadata like
> >> in_port, recirc_id, hashes, tunnel headers, etc.
> >
> > [Antonio F] In general, ACL tables can collect this type of data;
> > they're not limited to the 5-tuple I'm using now.
> >
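For reference, each such rule boils down to a value/mask pair per field,
so any of the 5 fields can be wildcarded. A rough illustration is below;
the struct and field names are made up for the example and are not the
ACL library's real rule layout.

  #include <stdint.h>

  /* Illustrative sketch only, not the real ACL rule layout. */
  struct acl_5tuple_rule {
      uint8_t  proto,    proto_mask;     /* ProtocolType */
      uint32_t ip_src,   ip_src_mask;    /* IPsrc  (network byte order) */
      uint32_t ip_dst,   ip_dst_mask;    /* IPdest (network byte order) */
      uint16_t port_src, port_src_mask;  /* PortSrc  */
      uint16_t port_dst, port_dst_mask;  /* PortDest */
      void    *aux;                      /* E.g. the cached megaflow entry. */
  };
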
> >
> >>
> >> Given that, I wonder whether an ACL-based cache is the right tool
> >> to accelerate the megaflow lookup, especially considering the ACL
> >> reconfiguration times.
> >
> > [Antonio F] I agree; this solution would give no benefit if the
> > addition of new flows is 'very' frequent.
> >
> > Do you know, roughly, how often we would typically need to add new
> > flows in a real scenario? I mean, is it something that happens, say,
> > tens of times per hour? Or 1,000 times per minute?
> >
> 
> In the worst case it can be 1,000s of times per second, e.g., during
> a port scan in the presence of an ACL that matches on L4 ports.
> 

[Antonio F] Do you have any data on flow tables, measurements or
figures from real use-cases?
That would help a lot in understanding what can happen in a real scenario.


> >
> >>
> >> What we do see, however, is that there is often a strong correlation
> >> between the ingress port and the subset of masks/subtables that have
> >> hits. The entire megaflow cache typically decomposes nicely into
> >> partitions that are hit only by packets entering from equivalent
> >> ports (e.g. traffic from Phy -> VM and VM -> Phy).
> >>
> >> Since megaflows are by nature non-overlapping, the search can stop at
> >> the first match. Keeping a separate list of subtables per ingress
> >> port, sorted by frequency of hits, should reduce the average number
> >> of subtable lookups to a minimum, even if the total number of
> >> subtables gets large.
> >>
> >> Has such an idea been considered?
> >
> > [Antonio F] This approach sounds interesting.
> >
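For illustration, a per-ingress-port partition with an early-exit search
could look roughly like the sketch below. All names in it (port_cls,
hit_cnt, subtable_lookup_one) are hypothetical and do not exist in
dpif-netdev today; it only shows the idea of stopping at the first hit in
a per-port, hit-sorted subtable list.

  /* Hypothetical sketch: one partition per ingress port, holding only the
   * subtables that traffic from that port actually hits, kept sorted by
   * descending hit count. */
  struct port_cls {
      odp_port_t in_port;                /* Ingress port of this partition. */
      struct dpcls_subtable **subtables; /* Kept sorted by descending hits. */
      uint64_t *hit_cnt;                 /* Per-subtable hit counters. */
      size_t n_subtables;
  };

  static struct dpcls_rule *
  port_cls_lookup(struct port_cls *cls, const struct netdev_flow_key *key)
  {
      size_t i;

      for (i = 0; i < cls->n_subtables; i++) {
          struct dpcls_rule *rule = subtable_lookup_one(cls->subtables[i], key);

          if (rule) {
              cls->hit_cnt[i]++; /* Feeds the periodic re-sorting. */
              return rule;       /* Megaflows don't overlap: first hit wins. */
          }
      }
      return NULL;
  }

Whether such partitions would be keyed on in_port or simply kept per pmd
thread is an open question, as discussed below.
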
> 
> I had thought of sorting the subtables periodically, but never got to
> do anything about it. Actually, I'd like to see how that performs
> compared to the ACL proposal before deciding what to do.
> 
> dpcls already uses struct pvector to store the subtables. It would be
> rather easy to use the pvector API to assign a 'priority' corresponding
> to the hit count, sort the subtables accordingly, and publish the newly
> ordered pvector for the pmd thread to use for lookups. The only extra
> fast-path cost would be incrementing the subtable hit counts.
> 
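E.g., the periodic step could look roughly like the sketch below. The
pvector calls are the existing lib/pvector.h API, while the 'hit_cnt'
field on struct dpcls_subtable and the function itself are hypothetical.

  /* Sketch only: re-prioritize the subtable pvector from the hit counts
   * accumulated by the lookup path, then publish the new ordering.
   * Assumes a 'hit_cnt' counter were added to struct dpcls_subtable. */
  static void
  dpcls_resort_subtables(struct dpcls *cls)
  {
      struct dpcls_subtable *subtable;

      PVECTOR_FOR_EACH (subtable, &cls->subtables) {
          /* Higher pvector priority means the subtable is visited earlier. */
          pvector_change_priority(&cls->subtables, subtable,
                                  MIN(subtable->hit_cnt, INT_MAX));
          subtable->hit_cnt = 0; /* Start a fresh measurement period. */
      }
      pvector_publish(&cls->subtables); /* Make the new order visible. */
  }

A pmd thread iterating with PVECTOR_FOR_EACH keeps seeing the old order
until pvector_publish() swaps in the new one, so the lookup path should
not need any extra locking beyond the hit-count increment.
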
> Would the fact that each pmd thread has its own dpcls take care of
> the separation per ingress port? I.e., if performance matters, then
> maybe each port has its own pmd thread?
> 
>   Jarno



