[ovs-dev] [PATCH v2 1/5] ovs-atomic: Use explicit memory order for ovs_refcount.

Jarno Rajahalme jrajahalme at nicira.com
Fri Jul 4 14:21:15 UTC 2014


Use explicit variants of atomic operations for the ovs_refcount to
avoid the overhead of the default memory_order_seq_cst.

Adding a reference requires no memory ordering, because the calling
thread is assumed to already have protected access to the object being
reference counted.  Hence, memory_order_relaxed is used for
ovs_refcount_ref().  ovs_refcount_read() does not change the reference
count, so it can also use memory_order_relaxed.
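
For illustration only (not part of this patch), a minimal sketch of the
same relaxed-increment pattern in plain C11 <stdatomic.h>; 'struct obj'
and obj_ref() are hypothetical names:

    #include <assert.h>
    #include <stdatomic.h>

    struct obj {
        atomic_uint ref_cnt;    /* Initialized to 1 by the creator. */
        /* ...payload... */
    };

    /* The caller must already hold a reference to 'o', so no other
     * thread can drop the count to zero concurrently; a relaxed
     * increment is therefore sufficient. */
    static struct obj *
    obj_ref(struct obj *o)
    {
        unsigned int old = atomic_fetch_add_explicit(&o->ref_cnt, 1,
                                                     memory_order_relaxed);
        assert(old > 0);
        return o;
    }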

Unreferencing an object needs a release barrier, so that none of the
calling thread's prior accesses to the protected object can be
reordered after the atomic decrement operation.  Additionally, when the
last reference is dropped, an explicit acquire barrier is needed before
the object is recycled, to keep the subsequent accesses to the object's
memory from being reordered before the atomic decrement operation.
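
As a minimal sketch (again not part of this patch, with hypothetical
names), the equivalent C11 pattern for the final unref looks like this:

    #include <assert.h>
    #include <stdatomic.h>
    #include <stdlib.h>

    struct obj {
        atomic_uint ref_cnt;
        /* ...payload... */
    };

    static void
    obj_unref(struct obj *o)
    {
        /* Release: this thread's earlier accesses to '*o' cannot be
         * reordered after the decrement. */
        unsigned int old = atomic_fetch_sub_explicit(&o->ref_cnt, 1,
                                                     memory_order_release);
        assert(old > 0);
        if (old == 1) {
            /* Acquire: keep the destruction below from being reordered
             * before the decrement, so it cannot race with other
             * threads' final accesses to the object. */
            atomic_thread_fence(memory_order_acquire);
            free(o);
        }
    }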

This patch follows the memory ordering and reasoning discussed
here:

http://www.chaoticmind.net/~hcb/projects/boost.atomic/doc/atomic/usage_examples.html

Signed-off-by: Jarno Rajahalme <jrajahalme at nicira.com>
---
v2: Use release and acquire barriers in ovs_refcount_unref() as discussed
    in the reference.

 lib/ovs-atomic.h |   30 ++++++++++++++++++++++++------
 1 file changed, 24 insertions(+), 6 deletions(-)

diff --git a/lib/ovs-atomic.h b/lib/ovs-atomic.h
index 2452846..c1d9fcf 100644
--- a/lib/ovs-atomic.h
+++ b/lib/ovs-atomic.h
@@ -314,13 +314,17 @@ ovs_refcount_init(struct ovs_refcount *refcount)
     atomic_init(&refcount->count, 1);
 }
 
-/* Increments 'refcount'. */
+/* Increments 'refcount'.
+ *
+ * Does not provide a memory barrier, as the calling thread must have
+ * protected access to the object already. */
 static inline void
 ovs_refcount_ref(struct ovs_refcount *refcount)
 {
     unsigned int old_refcount;
 
-    atomic_add(&refcount->count, 1, &old_refcount);
+    atomic_add_explicit(&refcount->count, 1, &old_refcount,
+                        memory_order_relaxed);
     ovs_assert(old_refcount > 0);
 }
 
@@ -331,18 +335,32 @@ ovs_refcount_ref(struct ovs_refcount *refcount)
  *     // ...uninitialize object...
  *     free(object);
  * }
- */
+ *
+ * Provides a release barrier so that the preceding loads and stores are not
+ * reordered after the unref. */
 static inline unsigned int
 ovs_refcount_unref(struct ovs_refcount *refcount)
 {
     unsigned int old_refcount;
 
-    atomic_sub(&refcount->count, 1, &old_refcount);
+    atomic_sub_explicit(&refcount->count, 1, &old_refcount,
+                        memory_order_release);
     ovs_assert(old_refcount > 0);
+    if (old_refcount == 1) {
+        /* 'memory_order_release' above means that there are no (reordered)
+         * accesses to the protected object from any other thread at this
+         * point.
+         * An acquire barrier is needed to keep all subsequent accesses to the
+         * object's memory from being reordered before the atomic operation
+         * above. */
+        atomic_thread_fence(memory_order_acquire);
+    }
     return old_refcount;
 }
 
-/* Reads and returns 'ref_count_''s current reference count.
+/* Reads and returns 'refcount_''s current reference count.
+ *
+ * Does not provide a memory barrier.
  *
  * Rarely useful. */
 static inline unsigned int
@@ -352,7 +370,7 @@ ovs_refcount_read(const struct ovs_refcount *refcount_)
         = CONST_CAST(struct ovs_refcount *, refcount_);
     unsigned int count;
 
-    atomic_read(&refcount->count, &count);
+    atomic_read_explicit(&refcount->count, &count, memory_order_relaxed);
     return count;
 }
 
-- 
1.7.10.4