[ovs-dev] [bug15171 3/4] timeval: Increase accuracy of cached time 4X, from 100 ms to 25 ms.

Ben Pfaff blp at nicira.com
Wed Mar 6 00:28:21 UTC 2013


CFM and other tunnel monitoring protocols benefit from a fairly precise
view of the current time, so reduce the interval between refreshes of the
cached time from 100 ms to 25 ms.  My measurements don't show this change
increasing CPU use.  (In fact it appears to repeatably reduce CPU use
slightly, from about 22% to about 20% with 1000 CFM instances, although
it's not obvious why.)

Signed-off-by: Ben Pfaff <blp at nicira.com>
---
 lib/timeval.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/lib/timeval.h b/lib/timeval.h
index d5c12f0..72cf498 100644
--- a/lib/timeval.h
+++ b/lib/timeval.h
@@ -43,7 +43,7 @@ BUILD_ASSERT_DECL(TYPE_IS_SIGNED(time_t));
 /* Interval between updates to the reported time, in ms.  This should not be
  * adjusted much below 10 ms or so with the current implementation, or too
  * much time will be wasted in signal handlers and calls to clock_gettime(). */
-#define TIME_UPDATE_INTERVAL 100
+#define TIME_UPDATE_INTERVAL 25
 
 /* True on systems that support a monotonic clock.  Compared to just getting
  * the value of a variable, clock_gettime() is somewhat expensive, even on
-- 
1.7.2.5
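
For illustration only (this is not the patch and not OVS's actual
lib/timeval.c), the caching pattern that TIME_UPDATE_INTERVAL controls can
be sketched as follows: a periodic SIGALRM marks the cached value stale,
and callers only pay for a clock_gettime() call when that flag is set, so
the reported time is at most roughly one interval old.  The helper names
(refresh_time, time_msec_cached) are hypothetical.

    #define _XOPEN_SOURCE 700           /* for sigaction(), setitimer() */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <time.h>

    #define TIME_UPDATE_INTERVAL 25     /* ms between refreshes of the cache */

    static volatile sig_atomic_t tick;  /* set by SIGALRM, cleared on refresh */
    static long long cached_msec;       /* last value read from the real clock */

    static void
    sigalrm_handler(int sig)
    {
        (void) sig;
        tick = 1;                       /* cheap: just mark the cache stale */
    }

    static void
    refresh_time(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        cached_msec = (long long) ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
        tick = 0;
    }

    /* Returns a monotonic time in ms that is at most roughly
     * TIME_UPDATE_INTERVAL stale; most calls return the cached value
     * without touching the clock. */
    static long long
    time_msec_cached(void)
    {
        if (tick) {
            refresh_time();
        }
        return cached_msec;
    }

    int
    main(void)
    {
        struct sigaction sa;
        struct itimerval it = {
            .it_interval = { .tv_usec = TIME_UPDATE_INTERVAL * 1000 },
            .it_value    = { .tv_usec = TIME_UPDATE_INTERVAL * 1000 },
        };
        struct timespec pause = { .tv_nsec = 50 * 1000 * 1000 };

        memset(&sa, 0, sizeof sa);
        sa.sa_handler = sigalrm_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);
        setitimer(ITIMER_REAL, &it, NULL);

        refresh_time();
        for (int i = 0; i < 5; i++) {
            printf("cached time: %lld ms\n", time_msec_cached());
            nanosleep(&pause, NULL);    /* may return early on SIGALRM */
        }
        return 0;
    }

Lowering TIME_UPDATE_INTERVAL in this kind of scheme trades more frequent
signal deliveries and clock_gettime() calls for a fresher cached value,
which is why the header comment warns against going much below 10 ms.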



