[ovs-dev] [RFC Patch] dpif-netdev: dpcls per in_port with sorted subtables

Jan Scheurich jan.scheurich at ericsson.com
Thu Jul 7 08:27:34 UTC 2016


> From: Jarno Rajahalme [mailto:jarno at ovn.org]
> Sent: Monday, 04 July, 2016 14:06
> 
> I like this RFC patch and suggest you send a non-RFC patch after taking care
> of the following:

Thanks for the review. Will implement all requested changes and submit a non-RFC PATCH v2 when ready.

> - You should analyze the optimization code path to make sure there are no
> blocking locks being taken, as we have been carefully removing any such
> blocking to eliminate the associated latency spikes. It would be best to make
> a statement about this in the commit message as well.

Everything looks fine except for the final call to pvector_publish(). Internally it calls ovsrcu_postpone() to free the old impl vector. If I read the postpone function correctly, it may need to flush the per-thread cbset onto a guarded list of flushed callbacks, i.e. it might take a mutex and block briefly. Do you consider this a blocking lock that requires changes?
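
For reference, here is a minimal self-contained sketch (not the actual OVS code, just my reading of the pattern in lib/ovs-rcu.c; names and details may be off) of why the publish step can block: postponed callbacks accumulate in a per-thread set, and only when that set fills up is it flushed onto a global list protected by a mutex, so the lock is short and normally uncontended.

    #include <pthread.h>
    #include <stdlib.h>

    #define CBSET_SIZE 16

    struct cb {
        void (*function)(void *aux);
        void *aux;
    };

    struct cbset {
        struct cb cbs[CBSET_SIZE];
        size_t n_cbs;
        struct cbset *next;
    };

    static pthread_mutex_t flushed_mutex = PTHREAD_MUTEX_INITIALIZER;
    static struct cbset *flushed_cbsets;       /* Consumed by the RCU thread. */
    static __thread struct cbset *perthread_cbset;

    static void
    flush_cbset(void)
    {
        struct cbset *set = perthread_cbset;

        perthread_cbset = NULL;
        pthread_mutex_lock(&flushed_mutex);    /* Short critical section. */
        set->next = flushed_cbsets;
        flushed_cbsets = set;
        pthread_mutex_unlock(&flushed_mutex);
    }

    void
    postpone(void (*function)(void *aux), void *aux)
    {
        struct cbset *set = perthread_cbset;

        if (!set) {
            set = perthread_cbset = calloc(1, sizeof *set);
        }
        set->cbs[set->n_cbs].function = function;
        set->cbs[set->n_cbs].aux = aux;
        if (++set->n_cbs >= CBSET_SIZE) {
            flush_cbset();                     /* May take the mutex. */
        }
    }

If that short, normally uncontended lock is acceptable, I can note it explicitly in the commit message as you suggested.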

I actually have some doubts about the thread safety of the dpcls code. My understanding is that dpcls rules are inserted only by the owning PMD thread itself as a result of processing upcalls. But dpcls rules are removed by revalidator threads as part of deleting, e.g., expired dpif flow entries. I may be missing something here, but I don't see how struct dpcls and its constituent parts (struct cmap subtables_map, struct pvector subtables, and each struct dpcls_subtable's cmap of rules) are protected against concurrent modification by the owning PMD and revalidator threads.
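
To make the concern concrete, below is the kind of writer-side serialization that, as I read the cmap.h comments, concurrent modification would require: readers (the PMD lookup path) stay lockless under RCU, but two writers such as a PMD and a revalidator must be serialized by some external mutex. The mutex name here is hypothetical, not an existing field.

    #include "cmap.h"
    #include "ovs-thread.h"

    static struct ovs_mutex dpcls_mutex = OVS_MUTEX_INITIALIZER;

    static void
    dpcls_writer_insert(struct cmap *rules, struct cmap_node *node,
                        uint32_t hash)
        OVS_EXCLUDED(dpcls_mutex)
    {
        ovs_mutex_lock(&dpcls_mutex);
        cmap_insert(rules, node, hash);   /* Safe vs. concurrent readers only. */
        ovs_mutex_unlock(&dpcls_mutex);
    }

    static void
    dpcls_writer_remove(struct cmap *rules, struct cmap_node *node,
                        uint32_t hash)
        OVS_EXCLUDED(dpcls_mutex)
    {
        ovs_mutex_lock(&dpcls_mutex);
        cmap_remove(rules, node, hash);
        ovs_mutex_unlock(&dpcls_mutex);
    }

And exactly such a mutex is what we cannot afford to take in the PMD packet processing loop.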

Since we obviously can't take locks in the PMD packet processing loop, either all modifications would have to be handled by non-PMD threads synchronized through mutexes, or all modifications would have to be done by the PMD thread itself, which means the dpif ops would have to be passed to the PMD thread for execution, e.g. through a DPDK ring (a rough sketch of that option follows below).
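
Something along these lines, where the struct and function names are hypothetical and do not exist in dpif-netdev today: revalidator/handler threads enqueue a flow-modification request on a per-PMD DPDK ring, and the PMD thread drains the ring between rx batches, so all dpcls writes happen on a single thread and the fast path stays lock-free.

    #include <rte_ring.h>

    struct dpcls_op {
        enum { DPCLS_OP_INSERT, DPCLS_OP_REMOVE } type;
        void *rule;                     /* e.g. struct dpcls_rule *. */
    };

    /* Created once per PMD, e.g. with
     * rte_ring_create("pmd-ops", 256, socket_id, RING_F_SC_DEQ). */
    struct rte_ring *pmd_op_ring;

    /* Called from a revalidator thread instead of touching dpcls directly. */
    static int
    enqueue_dpcls_op(struct dpcls_op *op)
    {
        return rte_ring_enqueue(pmd_op_ring, op);   /* Multi-producer safe. */
    }

    /* Called from the PMD main loop between packet batches. */
    static void
    pmd_drain_dpcls_ops(void)
    {
        void *op;

        while (rte_ring_dequeue(pmd_op_ring, &op) == 0) {
            /* Apply the insert/remove on this thread; no locks needed. */
            /* apply_dpcls_op(op); */
        }
    }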

Regards, Jan

