[ovs-dev] [PATCH] dpif-netdev: Fix unsafe accesses to pmd polling lists.

Ilya Maximets i.maximets at samsung.com
Wed Feb 13 10:40:04 UTC 2019


On 13.02.2019 1:24, Ben Pfaff wrote:
> On Tue, Feb 12, 2019 at 10:40:30AM +0300, Ilya Maximets wrote:
>> On 11.02.2019 22:15, Aaron Conole wrote:
>>> Ilya Maximets <i.maximets at samsung.com> writes:
>>>
>>>> All accesses to 'pmd->poll_list' should be guarded by
>>>> 'pmd->port_mutex'. Additionally, fixed an inappropriate usage
>>>> of 'HMAP_FOR_EACH_SAFE' (the hmap doesn't change in the loop)
>>>> and dropped the unneeded local variable 'proc_cycles'.
>>>>
>>>> CC: Nitin Katiyar <nitin.katiyar at ericsson.com>
>>>> Fixes: 5bf84282482a ("Adding support for PMD auto load balancing")
>>>> Signed-off-by: Ilya Maximets <i.maximets at samsung.com>
>>>> ---
>>>>  lib/dpif-netdev.c | 14 +++++++++-----
>>>>  1 file changed, 9 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
>>>> index 4d5bc576a..914b2bb8b 100644
>>>> --- a/lib/dpif-netdev.c
>>>> +++ b/lib/dpif-netdev.c
>>>> @@ -3776,9 +3776,12 @@ set_pmd_auto_lb(struct dp_netdev *dp)
>>>>              continue;
>>>>          }
>>>>  
>>>> +        ovs_mutex_lock(&pmd->port_mutex);
>>>>          if (hmap_count(&pmd->poll_list) > 1) {
>>>>              multi_rxq = true;
>>>>          }
>>>> +        ovs_mutex_unlock(&pmd->port_mutex);
>>>
>>> What does this mutex provide here?  I ask because as soon as we unlock,
>>> the result is no longer assured, and we've used it to inform the
>>> multi_rxq value.
>>>
>>> Are you afraid that the read is non-atomic so you'll end up with the
>>> multi_rxq condition true, even when it should not be?  That can happen
>>> anyway - as soon as we unlock the count can change.
>>>
>>> Maybe there's a better change to support that, where hmap_count does an
>>> atomic read, or there's a version that can guarantee a full load with
>>> no intervening writes.  But I don't see one.
>>>
>>> Maybe the pmd object lifetime goes away... but that wouldn't make sense,
>>> because then the port_mutex itself would be invalid.
>>>
>>> I'm confused about what adding a lock here does for either the safety or
>>> the logic.  I probably missed something.  Or maybe it's just trying to
>>> be safe in case the hmap implementation changes in the future?  I
>>> think if that's the case, there might be an alternative way to implement it?
>>
>> The mutex here is mostly for code consistency. We're marking 'poll_list'
>> as OVS_GUARDED, which means we should guard it. Unfortunately, unlike C++,
>> in C we cannot use OVS_GUARDED_BY(port_mutex) inside the structure, and
>> clang cannot track which mutex should protect the variable. I agree that
>> we don't really need to protect this. Furthermore, the only writer of
>> 'poll_list' is the main thread, and 'set_pmd_auto_lb' is also called only
>> in the main thread, so we don't need to protect any read accesses.
>> However, for safety's sake, we shouldn't rely on such details: that would
>> leave code snippets that could be copied to other (less safe) places and
>> produce issues there, especially since clang can't protect us from this
>> kind of mistake. The code architecture could also change in the future.
>> As a developer, I'd like to always assume that any access to an hmap is
>> not thread safe, regardless of its internal implementation, just as I
>> assume that a cmap is safe for readers.
>> This is affordable since we're not on a hot path.
> 
> Maybe a comment would be helpful.
> 
> /* We don't really need this lock but it suppresses a Clang warning. */

The problem is that it's not true: clang is not able to track this kind of guarding
in C, so the only warning I want to suppress is the one in my head. I'd like to keep
the locking here so that I don't have to think about the safety of every operation
on 'poll_list' in the future. (Maybe someday clang will be smart enough to warn
about guarded structure members in C.)
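
To make that concrete, here is a sketch of what I mean (hypothetical declarations
standing in for the real pmd struct, not the actual code from dpif-netdev.c):

#include "openvswitch/compiler.h"
#include "openvswitch/hmap.h"
#include "ovs-thread.h"

struct pmd_sketch {                 /* Hypothetical, mirrors the pmd struct. */
    struct ovs_mutex port_mutex;

    /* What we would like to write, and what clang's -Wthread-safety
     * could actually check for a class member in C++:
     *
     *     struct hmap poll_list OVS_GUARDED_BY(port_mutex);
     *
     * In C, clang cannot resolve the sibling member 'port_mutex' as the
     * guarding capability, so the best we can do is the bare marker
     * below, which documents intent but produces no warnings. */
    struct hmap poll_list OVS_GUARDED;
};

So the annotation is documentation for humans, and the discipline of taking
port_mutex around every access is what actually keeps the code honest.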

Aaron, as I have no real technical reason for these locks, I could drop this part
of the patch. Do you think I should?
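
For the record, the rule of thumb I'm applying looks roughly like this, spelled
out against the usual OVS primitives (a sketch assuming we're inside dpif-netdev.c
where the pmd struct is visible; the function names are made up for illustration):

#include "cmap.h"
#include "openvswitch/hmap.h"
#include "ovs-thread.h"

/* hmap: assume even a plain read needs the guard, since a concurrent
 * writer may be inserting, removing, or expanding the map under us. */
static size_t
polled_rxq_count(struct dp_netdev_pmd_thread *pmd)
{
    size_t n;

    ovs_mutex_lock(&pmd->port_mutex);
    n = hmap_count(&pmd->poll_list);
    ovs_mutex_unlock(&pmd->port_mutex);
    return n;
}

/* cmap: designed to be safe for concurrent readers, so no lock here. */
static bool
has_flows(const struct cmap *flows)
{
    return cmap_count(flows) > 0;
}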

Best regards, Ilya Maximets.

