[ovs-dev] Invitation: OVS-DPDK bi-weekly meeting @ Every 2 weeks from 5pm to 6pm on Thursday from Thu Dec 15 to Thu Jun 29, 2017 (GMT) (dev at openvswitch.org)

Gray, Mark D mark.d.gray at intel.com
Mon Jan 16 11:54:20 UTC 2017


Hi John

> -----Original Message-----
> From: ovs-dev-bounces at openvswitch.org [mailto:ovs-dev-
> bounces at openvswitch.org] On Behalf Of John Fastabend
> Sent: Wednesday, January 4, 2017 7:13 AM
> To: ktraynor at redhat.com; Giller, Robin <robin.giller at intel.com>;
> dev at openvswitch.org; Thomas Graf <tgraf at suug.ch>; Simon Horman
> <simon.horman at netronome.com>; Justin Pettit <jpettit at vmware.com>
> Subject: Re: [ovs-dev] Invitation: OVS-DPDK bi-weekly meeting @ Every 2
> weeks from 5pm to 6pm on Thursday from Thu Dec 15 to Thu Jun 29, 2017
> (GMT) (dev at openvswitch.org)
> 
> On 16-12-16 10:04 AM, Kevin Traynor wrote:
> > Thanks for the meeting notes, Robin. I've edited a bit.
> >
> 
> Hi,
> 
> Delayed significantly, but I can provide additional details on _my_ opinions
> around connection tracking and would be interested in feedback. (Warning: it
> might be a bit off-topic for a traditional dev mailing list, but it seems more
> in the spirit of the project to respond on-list in the open rather than in a
> private round of emails.) Also, I'm not an expert on the exact bits being used
> in the different variants of conntrack that OVS may call into on
> DPDK/Linux/whatever HyperV uses or plans to use.
> 
> +CC (experts) Thomas, Simon, and Justin
> 
> > 15 DEC 2016
> > ATTENDEES
> > Kevin Traynor, Robin Giller, Rashid Khan, Mark Gray, Michael Lilja,
> > Bhanuprakash Bodireddy, Rony Efraim, Sugesh Chandran, Aaron Conole,
> > Thomas Monjalon, Daniele Di Proietto, Vinod Chegu
> >
> 
> [...]
> 
> >
> > * Optimise SW Conntrack perf (Mark Gray)
> >   --Bhanu and Antonio will start looking at this at the start of 2017
> >
> > HW acceleration:
> >
> > * Share use cases where conntrack is not needed (John Fastabend)
> 
> First, let's get some basics out of the way. I tend to break connection
> tracking into at least two broad categories:
> 
>   (a) related flow identification
>   (b) the bucket list of protocol verification done in Linux conntrack,
>       e.g. TCP window enforcement
> 
> The challenge on the hardware side is that both models require some state
> that is kept in software in the conntrack logic.
> 
> To identify "related" flows, though, we can kick packets that miss in the
> hardware up to the software logic, which can instantiate related flow rules
> in the software dpif, the hardware dpif, or both. Once the related flow is
> established, all other packets will have a match in hardware and be forwarded
> correctly. I believe this breaks the current model where every packet in
> software is sent to the connection tracking engine. But if we disregard (b)
> for a moment, I do not see the need for every packet to be handled by this
> logic even in the software case. Established "related" flows _should_ be able
> to bypass the stateful logic. Could this be an optimization Mark et al. look
> at, assuming my understanding that every packet hits the conntrack logic is
> correct?

[Gray, Mark D] 
I guess this is similar to the "learn" action approach, but within the dpif. It
could populate the EMC for certain types of related flows. This should be
generally applicable to software and hardware. I'm not sure how much accounting
the conntrack module does, but it could complicate things if you are bypassing
conntrack for certain flows. Certainly, flow expirations would need to be
synchronized, and there may be other corner cases.
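
To make the bypass idea concrete, here is a rough, self-contained sketch of
caching the conntrack verdict alongside an exact-match cache entry. It is
purely illustrative -- not the actual dpif-netdev/EMC or conntrack code -- and
every name in it is invented:

/* Illustrative sketch only -- not actual OVS dpif-netdev or conntrack code.
 * It shows the idea of caching an "established" verdict from the conntrack
 * module alongside an exact-match cache entry so that subsequent packets of
 * the flow can skip the full conntrack lookup. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_SIZE 1024

struct five_tuple {                /* hypothetical flow key */
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

struct cache_entry {
    struct five_tuple key;
    bool valid;
    bool ct_established;           /* cached verdict from conntrack */
};

static struct cache_entry cache[CACHE_SIZE];

static uint32_t
hash_tuple(const struct five_tuple *t)
{
    /* Toy hash; a real datapath would use something stronger. */
    return (t->src_ip ^ t->dst_ip ^ t->src_port ^ t->dst_port ^ t->proto)
           % CACHE_SIZE;
}

static bool
key_equal(const struct five_tuple *a, const struct five_tuple *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip
        && a->src_port == b->src_port && a->dst_port == b->dst_port
        && a->proto == b->proto;
}

/* Fast path: true if the packet may bypass the conntrack engine.  Expiration
 * and accounting would still have to be kept in sync with conntrack, which is
 * the corner case mentioned above. */
static bool
ct_bypass_ok(const struct five_tuple *key)
{
    const struct cache_entry *e = &cache[hash_tuple(key)];
    return e->valid && e->ct_established && key_equal(&e->key, key);
}

/* Slow path: called once the conntrack engine has classified the flow. */
static void
ct_cache_verdict(const struct five_tuple *key, bool established)
{
    struct cache_entry *e = &cache[hash_tuple(key)];
    e->key = *key;
    e->valid = true;
    e->ct_established = established;
}

int
main(void)
{
    struct five_tuple t = { 0x0a000001, 0x0a000002, 12345, 80, 6 /* TCP */ };

    printf("bypass before verdict: %d\n", ct_bypass_ok(&t));  /* 0 */
    ct_cache_verdict(&t, true);     /* conntrack saw the flow established */
    printf("bypass after verdict:  %d\n", ct_bypass_ok(&t));  /* 1 */
    return 0;
}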

> 
> Now for (b), and possibly more controversial: how valuable is the protocol
> validation provided here? I assume protocol violations should be handled by
> the terminating protocol stack, e.g. the VM, container, etc. OVS has been
> happily deployed without this, so do we know of security issues here that are
> not easily fixed by patching the protocol stack? I googled around for some
> concrete examples or potential use cases, but all I found was some RFC
> conformance. Is this to protect against a malicious VM sending subtle,
> non-conformant TCP traffic? Other than reading the code, I found it hard to
> decipher exactly what protocol validation is being done in the Linux conntrack
> implementation. Is there any known documentation?
> 
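[Gray, Mark D] 
I'm not aware of formal documentation beyond the code either; I believe the
comments in the kernel's nf_conntrack_proto_tcp.c point at Guido van Rooij's
"Real Stateful TCP Packet Filtering in IP Filter" paper, which describes the
window-tracking algorithm. To show roughly what (b) means in practice, here is
a drastically simplified, self-contained sketch; it is not the real algorithm,
and every name in it is invented for illustration:

/* Very rough illustration of what "TCP window enforcement" means in a
 * stateful conntrack -- NOT the real kernel algorithm, which also tracks
 * both directions, window scaling, SACK, etc. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct tcp_peer_state {
    uint32_t last_ack;    /* highest sequence the peer has ACKed */
    uint32_t window;      /* receive window the peer last advertised */
};

/* True if seg_seq..seg_seq+seg_len falls inside the window the receiving
 * peer has advertised; otherwise a stateful filter would drop the segment
 * (or mark it invalid, e.g. ct_state +inv in OVS). */
static bool
tcp_seq_in_window(const struct tcp_peer_state *receiver,
                  uint32_t seg_seq, uint32_t seg_len)
{
    /* Serial-number arithmetic so sequence wraparound is handled. */
    int32_t lo = (int32_t)(seg_seq - receiver->last_ack);
    int32_t hi = (int32_t)(seg_seq + seg_len
                           - (receiver->last_ack + receiver->window));
    return lo >= 0 && hi <= 0;
}

int
main(void)
{
    struct tcp_peer_state rx = { .last_ack = 1000, .window = 65535 };

    printf("%d\n", tcp_seq_in_window(&rx, 1000, 1460));    /* 1: in window */
    printf("%d\n", tcp_seq_in_window(&rx, 900000, 1460));  /* 0: outside   */
    return 0;
}
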
> > --Would like to get a list of use cases not requiring conntrack
> > --E.g. firewall in VM, conntrack done in VM, GiLAN, mobile edge compute

[Gray, Mark D] 
I asked a few people here about that, and it seems most of them required
security groups, but I'm not sure there was a good understanding of what they
provide or what the alternatives are. I think I would need to have a more
in-depth conversation about that. It would also seem to me that in many NFV
use cases you would generally have more trust in the networks and the VNFs, or
could provide a service FW as you indicated below. I would love to hear
contrary input on this.

> >
> 
> The exact use cases I was considering are ones where we "trust" the TCP
> protocol, so (b) is not needed: either because the traffic is generated by
> local stacks, or because it has been established via some TCP proxy, in which
> case the proxy should provide any required validation. I've made the
> assumption that (a) can be handled by the setup logic.
> 
> Alternatively, the function can be provided via some form of service
> chaining, where a dedicated function takes on the role, per the "firewall in
> VM" example above.
> 
> Thanks!
> John
> (john.r.fastabend at intel.com)

