[ovs-dev] [PATCHv4 7/7] ofproto-dpif: Remove the flow_dumper thread.

Ben Pfaff blp at nicira.com
Thu Feb 27 23:23:27 UTC 2014


On Thu, Feb 27, 2014 at 02:13:11PM -0800, Joe Stringer wrote:
> From: Ethan Jackson <ethan at nicira.com>
> 
> Previously, we had a separate flow_dumper thread that fetched flows from
> the datapath to distribute to revalidator threads. This patch takes the
> logic for dumping and pushes it into the revalidator threads, resulting
> in simpler code with similar performance to the current code.
> 
> One thread, the "leader", is responsible for beginning and ending each
> flow dump, maintaining the flow_limit, and checking whether the
> revalidator threads need to exit. All revalidator threads dump,
> revalidate, delete datapath flows and garbage collect ukeys.
> 
> Co-authored-by: Joe Stringer <joestringer at nicira.com>
> Signed-off-by: Joe Stringer <joestringer at nicira.com>

I would appreciate xpthread_*() wrappers for the barrier functions.  I
know that they shouldn't normally fail, but error checking has found
real bugs for other types of pthread objects and it could find them
for barriers too.

Datapath flow dumps can sometimes repeat or omit flows, but
revalidate() appears to create a new ukey when it doesn't find an
existing one, without making sure that another thread hasn't added a
duplicate in the meantime.

That said, I guess that the common case is that a given ukey is being
accessed by only a single thread at a time.  Is that true?  If so, did
you consider locking it once in revalidate() as soon as we find it,
and then releasing it when we go on to the next flow?  (I guess that
we only need to deal with a single ukey once per dump, so we could
possibly even use trylock at the beginning and skip on to the next
flow if some other thread is already working on that one?)

Here are a few minor style bits I suggest folding in.

diff --git a/ofproto/ofproto-dpif-upcall.c b/ofproto/ofproto-dpif-upcall.c
index 231ef8a..1b42aaa 100644
--- a/ofproto/ofproto-dpif-upcall.c
+++ b/ofproto/ofproto-dpif-upcall.c
@@ -1406,10 +1406,8 @@ revalidate(struct revalidator *revalidator)
         long long int used;
         uint32_t hash;
 
-        mark = false;
         hash = hash_bytes(key, key_len, udpif->secret);
-        ukey_revalidator = &udpif->revalidators[hash
-            % udpif->n_revalidators];
+        ukey_revalidator = &udpif->revalidators[hash % udpif->n_revalidators];
         ukey = ukey_lookup(ukey_revalidator, key, key_len, hash);
 
         used = stats->used;
@@ -1442,19 +1440,17 @@ revalidate(struct revalidator *revalidator)
         }
 
         if (!mark) {
-            struct dump_op *op = &ops[n_ops++];
-
-            dump_op_init(op, key, key_len, ukey);
+            dump_op_init(&ops[n_ops++], key, key_len, ukey);
         }
 
-        if (dpif_flow_dump_next_may_destroy_keys(&udpif->dump, state)) {
-            /* Only update "now" immediately before 'buffer' will be updated.
-             * This gives us the current time relative to the time the datapath
-             * will write into 'stats'. */
-            may_destroy = true;
+        may_destroy = dpif_flow_dump_next_may_destroy_keys(&udpif->dump,
+                                                           state);
+
+        /* Only update "now" immediately before 'buffer' will be updated.
+         * This gives us the current time relative to the time the datapath
+         * will write into 'stats'. */
+        if (may_destroy) {
             now = time_msec();
-        } else {
-            may_destroy = false;
         }
 
         /* Only do a dpif_operate when we've hit our maximum batch, or when our
@@ -1483,7 +1479,7 @@ revalidator_sweep__(struct revalidator *revalidator, bool purge)
 
     n_ops = 0;
 
-    /* During garbage collection, this revalidator completely owns it's ukeys
+    /* During garbage collection, this revalidator completely owns its ukeys
      * map, and therefore doesn't need to do any locking. */
     HMAP_FOR_EACH_SAFE (ukey, next, hmap_node, &revalidator->ukeys) {
         if (!purge && ukey->mark) {


