[ovs-dev] [PATCH v3] datapath: Simplify datapath locking.

Pravin Shelar pshelar at nicira.com
Wed Apr 10 18:40:38 UTC 2013


On Wed, Apr 10, 2013 at 9:01 AM, Jesse Gross <jesse at nicira.com> wrote:

> On Tue, Apr 9, 2013 at 9:04 PM, Pravin Shelar <pshelar at nicira.com> wrote:
> > On Tue, Apr 9, 2013 at 6:41 PM, Jesse Gross <jesse at nicira.com> wrote:
> >> On Tue, Apr 9, 2013 at 3:03 PM, Pravin B Shelar <pshelar at nicira.com>
> >> wrote:
> >> > diff --git a/datapath/datapath.h b/datapath/datapath.h
> >> > index 9bc98fb..58d7169 100644
> >> > --- a/datapath/datapath.h
> >> > +++ b/datapath/datapath.h
> >> > -static inline struct vport *ovs_vport_rtnl_rcu(const struct datapath *dp, int port_no)
> >> > +static inline struct vport *ovs_vport_ovsl_rcu(const struct datapath *dp, int port_no)
> >> >  {
> >> > -       WARN_ON_ONCE(!rcu_read_lock_held() && !rtnl_is_locked());
> >> > +       WARN_ON_ONCE(!rcu_read_lock_held() && !ovs_is_locked());
> >> >         return ovs_lookup_vport(dp, port_no);
> >> >  }
> >> >
> >> > -static inline struct vport *ovs_vport_rtnl(const struct datapath *dp, int port_no)
> >> > +static inline struct vport *ovs_vport_ovsl(const struct datapath *dp, int port_no)
> >> >  {
> >> > -       ASSERT_RTNL();
> >> > +       ASSERT_OVSL();
> >>
> >> For these two functions it seems like it might be better to use the
> >> lockdep version of the check rather than unconditionally doing it.
> >>
> > do you mean lockdep_assert_held() ?
>
> Yes, I think we should create an ovsl wrapper around
> lockdep_assert_held() and then use it here.

ok.
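Something along these lines, then (a rough sketch only; ovsl_assert_held() is a placeholder name and the final macro may differ). lockdep_assert_held() compiles down to a no-op when lockdep is disabled, so the check costs nothing on production builds:

extern struct mutex ovs_mutex;

#define ovsl_assert_held()	lockdep_assert_held(&ovs_mutex)

static inline struct vport *ovs_vport_ovsl(const struct datapath *dp,
					   int port_no)
{
	ovsl_assert_held();
	return ovs_lookup_vport(dp, port_no);
}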


> >> > diff --git a/datapath/dp_notify.c b/datapath/dp_notify.c
> >> > index c9eeafe..1e19125 100644
> >> > --- a/datapath/dp_notify.c
> >> > +++ b/datapath/dp_notify.c
> >> I think there is potentially another race condition here: if a
> >> notification comes after we retrieve the value of count but before we
> >> exit the workqueue. The counter will be incremented but another work
> >> instance won't be scheduled since one is already running. Essentially
> >> we're trying to emulate a semaphore here but actually doing that
> >> directly would require introducing a new thread, which I really don't
> >> want to do.
> >>
> > Pending flag is cleared before actual work execution is started. So I
> > think this race is not possible.
>
> You're right, that's great. We should also double check that our
> compatibility code for workqueues is sufficient for everything we are
> trying to do.

ok.
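For reference, the pattern we are relying on looks roughly like this (a sketch only, not the actual dp_notify.c code; the counter and function names are illustrative). schedule_work() is a no-op while the work is still pending, but the pending bit is cleared before the handler starts running, so a notification that arrives during execution re-queues the work instead of being lost:

static atomic_t dp_notify_count = ATOMIC_INIT(0);
static void dp_notify_worker(struct work_struct *work);
static DECLARE_WORK(dp_notify_work, dp_notify_worker);

static int dp_device_event(struct notifier_block *nb, unsigned long event,
			   void *ptr)
{
	if (event == NETDEV_UNREGISTER) {
		atomic_inc(&dp_notify_count);
		/* No-op if the work is already pending. */
		schedule_work(&dp_notify_work);
	}
	return NOTIFY_DONE;
}

static void dp_notify_worker(struct work_struct *work)
{
	/* The pending bit was cleared before this handler started, so any
	 * event from here on re-queues the work rather than being missed. */
	int count = atomic_xchg(&dp_notify_count, 0);

	/* ... take ovs_mutex and clean up 'count' unregistered vports ... */
}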


> By the way, I think we may be able to make the workqueue compatibility
> code conditional on version again. The comment says that we were
> having problems with waiting on genl mutex when using workqueues but
> we shouldn't have to do that anymore.
>

I guess the softlockup issue will just move from genl_lock to ovs_mutex, since
rehash_wq needs to take ovs_mutex. Therefore we still need to use the compat
workqueue code.
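Roughly what I mean (a sketch only; the real handler is the periodic flow-table rehash work in datapath.c, and the names and interval here are approximate):

#define REHASH_INTERVAL		(10 * 60 * HZ)	/* illustrative value */

static void rehash_flow_table(struct work_struct *work);
static DECLARE_DELAYED_WORK(rehash_flow_wq, rehash_flow_table);

static void rehash_flow_table(struct work_struct *work)
{
	/* May sleep here while another task holds ovs_mutex, which is the
	 * same kind of stall we previously saw with genl_lock. */
	ovs_lock();
	/* ... walk every datapath and rehash its flow table ... */
	ovs_unlock();

	schedule_delayed_work(&rehash_flow_wq, REHASH_INTERVAL);
}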