[ovs-dev] [PATCH] stopwatch: Explicitly ignore write() return value.

Aaron Conole aconole at redhat.com
Fri Apr 6 22:37:16 UTC 2018

Ben Pfaff <blp at ovn.org> writes:

> On Fri, Apr 06, 2018 at 04:13:34PM -0500, Mark Michelson wrote:
>> On 04/06/2018 02:50 PM, Ben Pfaff wrote:
>> >On Fri, Apr 06, 2018 at 02:27:16PM -0500, Mark Michelson wrote:
>> >>On 04/06/2018 02:18 PM, Mark Michelson wrote:
>> >>>On 04/06/2018 12:28 PM, Ben Pfaff wrote:
>> >>>>On Fri, Apr 06, 2018 at 10:16:24AM -0700, Justin Pettit wrote:
>> >>>>>In some environments, builds would fail with the following error:
>> >>>>>
>> >>>>>    lib/stopwatch.c: In function ‘stopwatch_exit’:
>> >>>>>    lib/stopwatch.c:448:5: error: ignoring return value of ‘write’,
>> >>>>>declared
>> >>>>>    with attribute warn_unused_result [-Werror=unused-result]
>> >>>>>        write(stopwatch_pipe[1], &pkt, sizeof pkt);
>> >>>>>
>> >>>>>This patch explicitly ignores the return value of write().
>> >>>>>
>> >>>>>This also fixes some minor coding style issues.
>> >>>>>
>> >>>>>Signed-off-by: Justin Pettit <jpettit at ovn.org>
>> >>>>
>> >>>>Obviously I didn't review this as carefully as I should have.  That's on
>> >>>>me.  I apologize.
>> >>>>
>> >>>>I believe that we already have a proper abstraction for what's going on
>> >>>>here: a latch.  The latch implementation also hides differences between
>> >>>>Unix and Windows (which this inline implementation isn't doing).  Would
>> >>>>you mind making this use latch.h instead of raw pipes?
>> >>>>
>> >>>>Would you mind breaking the style issues into a separate commit?
>> >>>>
>> >>>>Thanks,
>> >>>>
>> >>>>Ben.
>> >>>
>> >>>Hi Ben,
>> >>>
>> >>>Thanks for pointing out the latch library. I'll post the patch with this
>> >>>change.
>> >>>
>> >>>Mark!
>> >>
>> >>I just had a look, and unfortunately, the current code doesn't translate
>> >>directly to a latch. The reason is that you can't control the data that is
>> >>sent to and read from the latch. It's essentially a boolean of "set" or "not
>> >>set". In the current stopwatch implementation, data packets are written to
>> >>the pipe and that data is then read by another thread. The actual data is
>> >>important in this case.
>> >>
>> >>You're 100% right that this needs to handle the Windows case. One idea I
>> >>have is to create a cross-platform "pipe" library. Then the latch library
>> >>and the stopwatch library can be built on top of the pipe library. What do
>> >>you think about that?
>> >
>> >I see that I still didn't read the code carefully.
>> >
>> >Let's get the code compiling with Justin's patches or an equivalent, and
>> >then improve it from there.
>> >
>> >Possibly a pipe isn't needed.  One possibility for passing data between
>> >threads is to use a guarded_list from guarded-list.h along with a seq
>> >from seq.h.
>> >
>> This jogged my memory a bit. In version 2 of the stopwatch API change, I
>> used an expanding vector protected by a mutex. This attracted a finding from
>> Aaron Conole.
>> The concerns were
>> 1) Waiting on a global lock can cause the actual real work of the threads
>> using stopwatches to be delayed.
>> 2) Waiting on the lock when ending a sample could lead to a skewed time
>> being reported.

Problem 2 was my biggest concern.  I didn't want skew in the samples.
Problem 1 is really a 'usage scope' issue - if we were to put these probes
in the fast path, we would certainly introduce latency and jitter.

>> Problem 2 is no longer an issue since the stopwatch API now requires the
>> caller to supply the ending timestamp when stopping the stopwatch. However,
>> problem 1 would still be present if I switched to a guarded list and seq,
>> due to the added locking. I switched to the pipe implementation because it
>> eliminated (userspace) locking.
>> So the question here is: what is the best way to go here? Go back to
>> locking, or use a pipe and be sure to support Windows?
> Writing to a pipe can also block of course.


> Expanding a vector is relatively expensive since it does a memory
> allocation and copy (in the worst case).  None of the guarded_list cases
> is that expensive, they're just a few pointer operations.  So it might
> be better.

I took a look at guarded-list.  I think either approach will work, and
both have tradeoffs.  The average cost is probably not worth worrying
about, but the worst-case cost could be (if contention happens a lot).

I guess pipes might be less prone to worst-case stalls (depending on how
often the buffer is cleared), but that's a guess - I have no experience
or data to back that up.

> If it's a real concern we could invent a cmpxchg based singly linked
> list I guess.

I think you are referring to something like a lock-free list?

If so, agreed.  That probably gives the most 'consistent' performance (if
we do think that's a concern).  The message-passing FIFO likely doesn't
even need to be a linked list (a "lock-free" ring buffer might be a
little easier to implement in a provable way).

There are lots of ways to skin this cat.  I'd probably be more apt to
support the latch/seq approach (because it mirrors the existing
implementation), and if we find it isn't good enough - improve it.

Anyway, just my $.02
