[ovs-dev] [PATCH V3 2/2] bridge: Remove the 'Instant' stats.

Alex Wang alexw at nicira.com
Thu Apr 17 20:42:32 UTC 2014


>
>  @@ -2481,8 +2436,17 @@ bridge_wait(void)
>>>          poll_timer_wait_until(iface_stats_timer);
>>>      }
>>>
>>> +    /* If the status database transaction is "TXN_INCOMPLETE" in this run,
>>> +     * register a timeout in "STATUS_CHECK_AGAIN_MSEC".  Else, wait on the
>>> +     * global connectivity sequence number.  Note, this also helps batch
>>> +     * multiple status changes into one transaction. */
>>> +    if (status_txn) {
>>> +        poll_timer_wait_until(time_msec() + STATUS_CHECK_AGAIN_MSEC);
>>> +    } else {
>>> +        seq_wait(connectivity_seq_get(), connectivity_seqno);
>>> +    }
>>>
>>
>> So, this is like a backoff?  If there is so much database update activity
>> that we can't immediately transact (I equate this with TXN_INCOMPLETE),
>> then don't try again for another 500ms?
>> Otherwise, respond immediately to any changes to connectivity?
>>
>
>
> Yes, this is like a backoff.  We do the same backoff in the current
> master code using 'INSTANT_INTERVAL_MSEC'.
>
> Upon further thought, I've decided to go back to using the 100ms backoff
> interval.
>
> The reasons are:
>
> 1. The main thread will always be woken up immediately after a
>    connectivity change (since ofproto_wait() always waits on it).  So,
>    when the connectivity seq is changing fast, the backoff is of no use
>    and the main thread will keep checking the 'TXN_INCOMPLETE'
>    transaction.
> 2. The backoff is only useful when the previous status update is
>    'TXN_INCOMPLETE' and connectivity changes are now infrequent; we need
>    to wake up periodically and check for completion of the previous
>    update.
>
> So, the backoff interval should not make a big difference.
>
> My previous experiment showed a slight reduction of the backlog when the
> interval is 500ms (10K tunnels, flapping the forwarding flag every
> 0.3 sec).  I'll experiment again to confirm it.
>
> For my next version, I'll only include the refactoring changes.
>


Hey Joe,

With more experiments (10K BFD sessions, flapping the forwarding flag of
all sessions every 0.3 sec) I have the following observations:
1. A slight backlog can be observed via memory growth in the 'top' output,
on both master and my patch.
2. Master's backlog grows faster, since some netdev-related entries are
updated in instant_stats_run().
3. There is no observable change in the backlog growth rate between the
100ms backoff and the 500ms backoff with my patch.

So, I dropped the backoff-interval change and kept using 100ms in my recent
V4 patch (http://openvswitch.org/pipermail/dev/2014-April/039054.html).
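
To make the run/wait interaction concrete, here is a simplified sketch of
the run side that pairs with the bridge_wait() hunk quoted above.  This is
only an illustration, not the literal patch: the function name, 'idl', and
the status-refresh step are placeholders.

    static struct ovsdb_idl_txn *status_txn;  /* Non-NULL while a commit is
                                               * still TXN_INCOMPLETE. */
    static uint64_t connectivity_seqno;

    static void
    run_status_update(struct ovsdb_idl *idl)
    {
        if (!status_txn) {
            uint64_t seq = seq_read(connectivity_seq_get());

            if (seq != connectivity_seqno) {
                connectivity_seqno = seq;
                status_txn = ovsdb_idl_txn_create(idl);
                /* ... write refreshed status columns into 'status_txn' ... */
            }
        }

        if (status_txn
            && ovsdb_idl_txn_commit(status_txn) != TXN_INCOMPLETE) {
            /* Commit finished (success or failure), so drop the transaction;
             * bridge_wait() then goes back to sleeping on the connectivity
             * seq instead of the 100ms timer. */
            ovsdb_idl_txn_destroy(status_txn);
            status_txn = NULL;
        }
    }

The point is that status_txn stays non-NULL across main-loop iterations
while the commit is TXN_INCOMPLETE, which is exactly the case where the
100ms timer in bridge_wait() matters.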

As for the slight memory backlog, I think my patch makes it harder to
trigger (all 10K tunnels need to flap) than master does (flapping just one
tunnel is enough).  So, I'll leave it as is for now.