[ovs-dev] [PATCH v3 5/5] dpif-netdev: Catch reloads faster.
David Marchand
david.marchand at redhat.com
Thu Jul 4 11:59:38 UTC 2019
Looking at the reload flag only every 1024 loops can mean a long delay
under load, since we might be handling 32 packets per rxq, per iteration,
which means up to poll_cnt * 32 * 1024 packets between checks.
Look at the flag every loop instead; no major performance impact seen.
Signed-off-by: David Marchand <david.marchand at redhat.com>
Acked-by: Eelco Chaudron <echaudro at redhat.com>
Acked-by: Ian Stokes <ian.stokes at intel.com>
---
Changelog since v2:
- rebased on master
Changelog since v1:
- added acks, no change
Changelog since RFC v2:
- fixed commitlog on the number of packets
---
lib/dpif-netdev.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 6e22e02..b74a4df 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -5478,7 +5478,6 @@ reload:
poll_block();
}
}
- lc = UINT_MAX;
}
pmd->intrvl_tsc_prev = 0;
@@ -5527,11 +5526,6 @@ reload:
emc_cache_slow_sweep(&((pmd->flow_cache).emc_cache));
}
- atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
- if (reload) {
- break;
- }
-
for (i = 0; i < poll_cnt; i++) {
uint64_t current_seq =
netdev_get_change_seq(poll_list[i].rxq->port->netdev);
@@ -5542,6 +5536,12 @@ reload:
}
}
}
+
+ atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
+ if (OVS_UNLIKELY(reload)) {
+ break;
+ }
+
pmd_perf_end_iteration(s, rx_packets, tx_packets,
pmd_perf_metrics_enabled(pmd));
}
--
1.8.3.1