[ovs-dev] Status of Open vSwitch with DPDK

Daniele Di Proietto diproiettod at vmware.com
Wed Aug 12 21:34:40 UTC 2015


There has been some discussion lately about the status of the Open vSwitch
port to DPDK.  While part of the code has been tested for quite some time,
I think we can agree that there are a few rough spots that prevent it from
being easily deployed and used.

I was hoping to get some feedback from the community about those rough
spots, i.e. areas where OVS+DPDK can (or needs to) improve to become more
"production ready" and user-friendly.

- PMD threads and queues management: the code has shown several bugs, and
  the netdev interfaces don't seem up to the job anymore.

  There's a lot of room for improvement: we could factor out the code from
  dpif-netdev, add configuration parameters for advanced users, and figure
  out a way to add unit tests.

  Related to this, the system should be as fast as possible out of the box,
  without requiring too much tuning.
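  To give an idea of the tuning knobs involved, recent builds expose some
  PMD/queue settings through the database.  The option names below are
  version-dependent and meant only as an illustrative sketch, not as a
  stable interface:

```shell
# Sketch only: option names vary between OVS versions.
# Pin PMD threads to specific cores via a CPU mask (here, cores 1 and 2):
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
# Set the number of rx queues per DPDK interface:
ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2
```

  Part of the "production ready" question is whether defaults can be chosen
  so that most users never need to touch these.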

- Userspace tunneling: while the code has been there for quite some time,
  it hasn't received the level of testing that the Linux kernel datapath
  tunneling has.

- Documentation: other than a step-by-step tutorial, it cannot be said
  that DPDK is a first-class citizen in the OVS documentation.  Manpages
  could be improved.

- Vhost: the code has not received the level of testing that the kernel
  vhost has.  Another doubt shared by some developers is whether we should
  keep vhost-cuse, given its relative difficulty of use and its overlap
  with the far more standard vhost-user.
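  For reference, attaching a guest through vhost-user currently looks
  roughly like the following (interface type name as of the OVS 2.4 era,
  socket path given as an example; treat this as a sketch):

```shell
# Sketch: create a vhost-user port on a userspace bridge.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 vhost-user-1 -- \
    set Interface vhost-user-1 type=dpdkvhostuser
# QEMU then connects to the resulting socket,
# e.g. /usr/local/var/run/openvswitch/vhost-user-1
```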

- Interface management and naming: interfaces must be manually unbound
  from the kernel drivers.

  We still don't have an easy way to identify them.  Ideas are welcome:
  how can we make this user-friendly?  Is there a better solution on the
  DPDK side?

  How are DPDK interfaces handled by Linux distributions?  I've heard
  about ongoing work for RHEL and Ubuntu; it would be interesting to
  coordinate.
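  Today the manual steps look roughly like this, using the binding script
  shipped with DPDK (script name and location vary across DPDK releases,
  and the PCI address below is just an example):

```shell
# Sketch: detach a NIC from its kernel driver and hand it to DPDK.
# The script has moved/been renamed across DPDK releases.
$RTE_SDK/tools/dpdk_nic_bind.py --status               # list devices
$RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 0000:01:00.0
```

  Once bound, the device disappears from the kernel's view entirely, which
  is exactly why identifying and naming these interfaces is hard.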


- Insight into the system and debuggability: nothing beats tcpdump for
  the kernel datapath.  Can something similar be done for the userspace
  datapath?

- Consistency of the tools: some commands are slightly different for the
  userspace/kernel datapath.  Ideally there shouldn't be any difference.
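  One concrete instance of this, assuming current behavior: dumping
  datapath flows goes through different front ends depending on the
  datapath, because ovs-dpctl talks to the kernel module directly while
  the userspace datapath lives inside ovs-vswitchd:

```shell
# Sketch of the inconsistency: the same operation, two commands.
ovs-dpctl dump-flows                 # kernel datapath
ovs-appctl dpctl/dump-flows          # userspace datapath, via ovs-vswitchd
```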

- Packaging: how should the distributions package DPDK and OVS?  Should
  there be only a single build to handle both the kernel and the userspace
  datapath, perhaps dynamically linked to DPDK?

- Benchmarks: we often rely on extremely simple flow tables with
  single-flow traffic to evaluate the effect of a change.  That may be OK
  during development, but OVS with the kernel datapath has been tested in
  different scenarios, with more complicated flow tables and even with
  hostile traffic patterns.

  Efforts in this direction are already underway, such as the vsperf
  project, or even the simple ovs-pipeline.py.

I would appreciate feedback on the above points, not (only) in terms of
solutions, but in terms of requirements that you feel are important for our
system to be considered ready.

Cheers,

Daniele



