[ovs-dev] [PATCH v2] ovs-atomics: Add atomic support Windows.
Gurucharan Shetty
shettyg at nicira.com
Thu Sep 4 17:50:00 UTC 2014
On Thu, Sep 4, 2014 at 8:42 AM, Jarno Rajahalme <jrajahalme at nicira.com> wrote:
> Small nits below, otherwise looks good to me (even though I’m not versed in MSVC),
>
> Signed-off-by: Jarno Rajahalme <jrajahalme at nicira.com>
>
> On Sep 3, 2014, at 1:08 PM, Gurucharan Shetty <shettyg at nicira.com> wrote:
>> +
>> +/* 64 bit writes are atomic on i586 if 64 bit aligned. */
>> +#define atomic_store64(DST, SRC, ORDER)                           \
>> +    if (((size_t) (DST) & (sizeof *(DST) - 1)) == 0) {            \
>> +        if (ORDER == memory_order_seq_cst) {                      \
>> +            InterlockedExchange64((int64_t volatile *) (DST),     \
>> +                                  (int64_t) (SRC));               \
>> +        } else {                                                  \
>> +            *(DST) = (SRC);                                       \
>> +        }                                                         \
>> +    } else {                                                      \
>> +        abort();                                                  \
>> +    }
>> +
>
> How about this instead:
>
> /* 64 bit writes are atomic on i586 if 64 bit aligned. */
> #define atomic_store64(DST, SRC, ORDER)                           \
>     if (((size_t) (DST) & (sizeof *(DST) - 1))                    \
>         || ORDER == memory_order_seq_cst) {                       \
>         InterlockedExchange64((int64_t volatile *) (DST),         \
>                               (int64_t) (SRC));                   \
>     } else {                                                      \
>         *(DST) = (SRC);                                           \
>     }                                                             \
>
> However, you should probably keep the version in your patch if you know that the alignment check can be computed at compile time, and/or if there is some documentation that MSVC will not place 64-bit units at 4-byte alignment.
Ben pointed out in a reply that MSVC will not place 64-bit units at
4-byte alignment for x86. (Thanks, Ben. I had looked for that page but
hadn't found it.)
So your suggestion would indeed work, since it falls back to
InterlockedExchange64() whenever the data is not 64-bit aligned. I will
use your version. (I assume I have understood the implications
correctly?)
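To illustrate the alignment test the macro relies on, here is a small
standalone sketch. The helper names (IS_NATURALLY_ALIGNED,
can_store64_plain) are mine, not from the patch; the expression itself
mirrors the patch's check that decides between a plain assignment and
InterlockedExchange64().

```c
#include <stddef.h>
#include <stdint.h>

/* Nonzero when PTR is aligned to the size of the object it points to,
 * i.e. when a plain 64-bit store through it would be atomic on x86. */
#define IS_NATURALLY_ALIGNED(PTR) \
    ((((size_t) (PTR)) & (sizeof *(PTR) - 1)) == 0)

/* Returns 1 if a non-seq_cst 64-bit store through 'p' could use a plain
 * assignment, 0 if it would have to go through InterlockedExchange64(). */
static inline int
can_store64_plain(int64_t volatile *p)
{
    return IS_NATURALLY_ALIGNED(p);
}
```

Note the check is on the pointer value, so it is a run-time test in
general; the compile-time question above is whether MSVC can ever
produce a misaligned int64_t in the first place.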
>> +/* Arithmetic subtraction calls. */
>> +
>> +#define atomic_sub(RMW, ARG, ORIG) \
>> + atomic_add_explicit(RMW, (0 - ARG), ORIG, memory_order_seq_cst)
>
> ARG should be in parentheses: -(ARG). For example, if ARG were “x - 5”, we should add -x + 5, not -x - 5.
Oops.
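The pitfall Jarno is pointing out can be shown in a couple of lines.
The macro names below are illustrative only; the first variant is the
patch's (0 - ARG) form, the second is the parenthesized fix.

```c
/* Without parentheses, an argument like "x - 5" expands to
 * (0 - x - 5), which is -x - 5, not the intended -(x - 5). */
#define NEGATE_BAD(ARG)  (0 - ARG)

/* Parenthesizing ARG keeps the whole expression negated. */
#define NEGATE_GOOD(ARG) (-(ARG))
```

With x = 10, NEGATE_BAD(x - 5) yields -15 while NEGATE_GOOD(x - 5)
yields the intended -5.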
>
>> +
>> +#define atomic_sub_explicit(RMW, ARG, ORIG, ORDER) \
>> + atomic_add_explicit(RMW, (0 - ARG), ORIG, ORDER)
>
> Same here.
Thanks. I will make the change.