[ovs-dev] [PATCH 4/5] lib/cmap: Add more hmap-like functionality.

Jarno Rajahalme jrajahalme at nicira.com
Wed May 28 23:48:41 UTC 2014


On May 28, 2014, at 4:14 PM, Jarno Rajahalme <jrajahalme at nicira.com> wrote:

> 
> On May 23, 2014, at 10:23 AM, Ben Pfaff <blp at nicira.com> wrote:
> 
>> On Thu, May 22, 2014 at 05:37:41PM -0700, Jarno Rajahalme wrote:
>>> Add cmap_replace() and cmap_first(), as well as CMAP_FOR_EACH_SAFE and
>>> CMAP_FOR_EACH_CONTINUE, to make porting existing hmap-based code a bit
>>> easier.
>>> 
>>> CMAP_FOR_EACH_SAFE is useful in RCU postponed destructors, when it is
>>> known that additional postponing is not needed.
>>> 
>>> Signed-off-by: Jarno Rajahalme <jrajahalme at nicira.com>
>> 
>> I am not sure that ovsrcu_init() may be used in cmap_replace__() here.
>> I believe that we need to make sure that the store to replacement->next
>> becomes visible before the store to b->nodes[slot].  I am not sure that
>> a single release fence is sufficient, because I believe that the write
>> to replacement->next could still get reordered after the write to
>> b->nodes[slot].  After all, both are on the same side of the release
>> fence:
>> 
>>   /* The pointer to 'node' is changed to point to 'replacement',
>>    * which is the next node if no replacement node is given. */
>>   if (!replacement) {
>>       replacement = cmap_node_next_protected(node);
>>   } else {
>>       /* 'replacement' takes the position of 'node' in the list. */
>>       ovsrcu_init(&replacement->next, cmap_node_next_protected(node));
>>   }
>> 
>>   if (b->nodes[slot] == node) {
>>       b->nodes[slot] = replacement;
>>       atomic_thread_fence(memory_order_release);
>> 
> 
> It seems to me that moving the fence up one line would solve the problem, no?
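To make that concrete, here is a minimal C11-style sketch of the ordering the
suggestion aims for (an illustration only, not the actual cmap code; the
'bucket', 'slot' and 'replace_head' names are made up for the example): the
plain store to the replacement's 'next' pointer happens first, the release
fence comes next, and only then is the new pointer published, so the two
stores end up on opposite sides of the fence.

    #include <stdatomic.h>

    struct node {
        struct node *next;
        int data;
    };

    struct bucket {
        _Atomic(struct node *) slot;   /* Stands in for b->nodes[slot]. */
    };

    static void
    replace_head(struct bucket *b, struct node *replacement, struct node *old)
    {
        /* Plain store: initialize the replacement's successor first. */
        replacement->next = old->next;

        /* The release fence orders the store above before the store below.
         * Placed after the publishing store it would not help, because both
         * stores would then be on the same side of the fence. */
        atomic_thread_fence(memory_order_release);

        /* Publish: readers that observe 'replacement' through an acquire or
         * dependency-ordered load will also see its 'next' pointer. */
        atomic_store_explicit(&b->slot, replacement, memory_order_relaxed);
    }
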

I just re-realized that this will go away with patch 5/5 anyway. Do you want me to merge patches 4 and 5, instead?

  Jarno