[ovs-dev] [PATCH v2 2/5] dpif-netdev: Trigger parallel pmd reloads.
David Marchand
david.marchand at redhat.com
Wed Jun 26 14:19:27 UTC 2019
On Wed, Jun 26, 2019 at 2:43 PM Ilya Maximets <i.maximets at samsung.com> wrote:
> On 26.06.2019 12:08, David Marchand wrote:
> > pmd reloads are currently serialised in each step calling
> > reload_affected_pmds.
> > Any pmd processing packets, waiting on a mutex, etc. will make the
> > other pmd threads wait for a delay that can be nondeterministic when
> > syscalls add up.
> >
> > Switch to a little busy loop on the control thread using the existing
> > per-pmd reload boolean.
> >
> > The memory order on this atomic is rel-acq to provide explicit
> > synchronisation between the pmd threads and the control thread.
> >
> > Signed-off-by: David Marchand <david.marchand at redhat.com>
> > ---
> > Changelog since v1:
> > - removed the introduced reloading_pmds atomic and reuse the existing
> > pmd->reload boolean as a single synchronisation point (Ilya)
> >
> > Changelog since RFC v1:
> > - added memory ordering on 'reloading_pmds' atomic to serve as a
> > synchronisation point between pmd threads and control thread
> >
> > ---
> > lib/dpif-netdev.c | 32 ++++++++++++++++++--------------
> > 1 file changed, 18 insertions(+), 14 deletions(-)
> >
> > diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> > index b7f8117..b71fcbe 100644
> > --- a/lib/dpif-netdev.c
> > +++ b/lib/dpif-netdev.c
> > @@ -647,9 +647,6 @@ struct dp_netdev_pmd_thread {
> > struct ovs_refcount ref_cnt; /* Every reference must be refcount'ed. */
> > struct cmap_node node; /* In 'dp->poll_threads'. */
> >
> > - pthread_cond_t cond; /* For synchronizing pmd thread reload. */
> > - struct ovs_mutex cond_mutex; /* Mutex for condition variable. */
> > -
> > /* Per thread exact-match cache. Note, the instance for cpu core
> > * NON_PMD_CORE_ID can be accessed by multiple threads, and thusly
> > * need to be protected by 'non_pmd_mutex'. Every other instance
> > @@ -1754,11 +1751,8 @@ dp_netdev_reload_pmd__(struct dp_netdev_pmd_thread *pmd)
> > return;
> > }
> >
> > - ovs_mutex_lock(&pmd->cond_mutex);
> > seq_change(pmd->reload_seq);
> > atomic_store_explicit(&pmd->reload, true, memory_order_release);
> > - ovs_mutex_cond_wait(&pmd->cond, &pmd->cond_mutex);
> > - ovs_mutex_unlock(&pmd->cond_mutex);
> > }
> >
> > static uint32_t
> > @@ -4643,6 +4637,16 @@ rxq_scheduling(struct dp_netdev *dp, bool pinned) OVS_REQUIRES(dp->port_mutex)
> > }
> >
> > static void
> > +wait_reloading_pmd(struct dp_netdev_pmd_thread *pmd)
> > +{
> > + bool reload;
> > +
> > + do {
> > + atomic_read_explicit(&pmd->reload, &reload, memory_order_acquire);
> > + } while (reload);
> > +}
> > +
> > +static void
> > reload_affected_pmds(struct dp_netdev *dp)
> > {
> > struct dp_netdev_pmd_thread *pmd;
> > @@ -4651,6 +4655,12 @@ reload_affected_pmds(struct dp_netdev *dp)
> > if (pmd->need_reload) {
> > flow_mark_flush(pmd);
> > dp_netdev_reload_pmd__(pmd);
> > + }
> > + }
> > +
> > + CMAP_FOR_EACH (pmd, node, &dp->poll_threads) {
> > + if (pmd->need_reload) {
> > + wait_reloading_pmd(pmd);
> > pmd->need_reload = false;
> > }
> > }
> > @@ -5816,11 +5826,8 @@ dpif_netdev_enable_upcall(struct dpif *dpif)
> > static void
> > dp_netdev_pmd_reload_done(struct dp_netdev_pmd_thread *pmd)
> > {
> > - ovs_mutex_lock(&pmd->cond_mutex);
> > - atomic_store_relaxed(&pmd->reload, false);
> > pmd->last_reload_seq = seq_read(pmd->reload_seq);
> > - xpthread_cond_signal(&pmd->cond);
> > - ovs_mutex_unlock(&pmd->cond_mutex);
> > + atomic_store_explicit(&pmd->reload, false, memory_order_release);
> > }
> >
> > /* Finds and refs the dp_netdev_pmd_thread on core 'core_id'. Returns
> > @@ -5905,8 +5912,6 @@ dp_netdev_configure_pmd(struct dp_netdev_pmd_thread *pmd, struct dp_netdev *dp,
> > pmd->reload_seq = seq_create();
> > pmd->last_reload_seq = seq_read(pmd->reload_seq);
> > atomic_init(&pmd->reload, false);
> > - xpthread_cond_init(&pmd->cond, NULL);
> > - ovs_mutex_init(&pmd->cond_mutex);
> > ovs_mutex_init(&pmd->flow_mutex);
> > ovs_mutex_init(&pmd->port_mutex);
> > cmap_init(&pmd->flow_table);
> > @@ -5949,8 +5954,6 @@ dp_netdev_destroy_pmd(struct dp_netdev_pmd_thread *pmd)
> > cmap_destroy(&pmd->flow_table);
> > ovs_mutex_destroy(&pmd->flow_mutex);
> > seq_destroy(pmd->reload_seq);
> > - xpthread_cond_destroy(&pmd->cond);
> > - ovs_mutex_destroy(&pmd->cond_mutex);
> > ovs_mutex_destroy(&pmd->port_mutex);
> > free(pmd);
> > }
> > @@ -5971,6 +5974,7 @@ dp_netdev_del_pmd(struct dp_netdev *dp, struct dp_netdev_pmd_thread *pmd)
> > } else {
> > atomic_store_relaxed(&pmd->exit, true);
> > dp_netdev_reload_pmd__(pmd);
> > + wait_reloading_pmd(pmd);
>
> Join will wait for the thread to exit. We don't need to wait for the reload here.
>
Indeed, and then I can move the wait_reloading_pmd() code directly into
reload_affected_pmds().
--
David Marchand