[ovs-dev] [PATCH v2] netdev-dpdk: Add new 'dpdkvhostuserclient' port type

Daniele Di Proietto diproiettod at ovn.org
Mon Sep 19 21:10:47 UTC 2016


Apologies for the delay, applied to master and branch-2.6.

Thanks,

Daniele

2016-09-15 6:53 GMT-07:00 Mooney, Sean K <sean.k.mooney at intel.com>:

> Hi, I just wanted to follow up on the status of this patch.
>
> Without this patch, support for vhost-user reconnect will be blocked in
> OpenStack.
>
> At this point it is too late to include support for this feature in the
> Newton release in Q4, but I would like to enable it early in the OpenStack
> Ocata cycle if possible.
>
> Will this be in OVS 2.6?
>
> Regards,
> Sean
>
>
> > -----Original Message-----
> > From: Mooney, Sean K
> > Sent: Saturday, August 20, 2016 12:22 AM
> > To: Loftus, Ciara <ciara.loftus at intel.com>; dev at openvswitch.org
> > Cc: diproiettod at vmware.com; Mooney, Sean K <sean.k.mooney at intel.com>
> > Subject: RE: [PATCH v2] netdev-dpdk: Add new 'dpdkvhostuserclient' port
> > type
> >
> > Hi, I have updated my OpenStack changes
> > https://review.openstack.org/#/c/344997/ (neutron)
> > https://review.openstack.org/#/c/357555/ (os-vif)
> > https://review.openstack.org/#/c/334048/ (nova)
> > to work with this change and tested them with the v1 patch.
> > As far as I can tell, the only change in v2 is in INSTALL.DPDK-ADVANCED
> > and the commit message, but I can retest with v2 as well if desired.
> >
> > Time permitting, and assuming this change is accepted, I will also submit
> > patches to networking-ovn and networking-odl next week to complete enabling
> > the feature in each of the main OVS-compatible Neutron backends.
> >
> > > -----Original Message-----
> > > From: Loftus, Ciara
> > > Sent: Friday, August 19, 2016 10:23 AM
> > > To: dev at openvswitch.org
> > > Cc: diproiettod at vmware.com; Mooney, Sean K <sean.k.mooney at intel.com>;
> > > Loftus, Ciara <ciara.loftus at intel.com>
> > > Subject: [PATCH v2] netdev-dpdk: Add new 'dpdkvhostuserclient' port
> > > type
> > >
> > > The 'dpdkvhostuser' port type no longer supports both server and
> > > client mode. Instead, 'dpdkvhostuser' ports are always 'server' mode
> > > and 'dpdkvhostuserclient' ports are always 'client' mode.
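> > >
> > > For reference, a minimal usage sketch of the two modes (the bridge and
> > > port names here are placeholders, and the QEMU command line is abbreviated):
> > >
> > >     # dpdkvhostuser: OVS creates the socket (server), QEMU connects (client)
> > >     ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 \
> > >         type=dpdkvhostuser
> > >
> > >     # dpdkvhostuserclient: QEMU creates the socket (server), OVS connects (client)
> > >     ovs-vsctl add-port br0 vhost-client-1 -- set Interface vhost-client-1 \
> > >         type=dpdkvhostuserclient options:vhost-server-path=/path/to/socket
> > >
> > >     # matching QEMU chardev for the client-mode port; note the ',server' suffix
> > >     -chardev socket,id=char0,path=/path/to/socket,server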
> > >
> > > Suggested-by: Daniele Di Proietto <diproiettod at vmware.com>
> > > Signed-off-by: Ciara Loftus <ciara.loftus at intel.com>
> > > ---
> > >  INSTALL.DPDK-ADVANCED.md | 102 +++++++++++++++------------
> > >  NEWS                     |   1 +
> > >  lib/netdev-dpdk.c        | 176 ++++++++++++++++++++++++++---------------------
> > >  vswitchd/vswitch.xml     |   8 +--
> > >  4 files changed, 159 insertions(+), 128 deletions(-)
> > >
> > > diff --git a/INSTALL.DPDK-ADVANCED.md b/INSTALL.DPDK-ADVANCED.md
> > > index 857c805..d7b9873 100755
> > > --- a/INSTALL.DPDK-ADVANCED.md
> > > +++ b/INSTALL.DPDK-ADVANCED.md
> > > @@ -461,6 +461,21 @@ For users wanting to do packet forwarding using kernel stack below are the steps
> > >       ```
> > >
> > >  ## <a name="vhost"></a> 6. Vhost Walkthrough
> > > +
> > > +Two types of vHost User ports are available in OVS:
> > > +
> > > +1. vhost-user (dpdkvhostuser ports)
> > > +
> > > +2. vhost-user-client (dpdkvhostuserclient ports)
> > > +
> > > +vHost User uses a client-server model. The server
> > > +creates/manages/destroys the vHost User sockets, and the client
> > > +connects to the server. Depending on which port type you use,
> > > +dpdkvhostuser or dpdkvhostuserclient, a different configuration of
> > > +the client-server model is used.
> > > +
> > > +For vhost-user ports, OVS DPDK acts as the server and QEMU the client.
> > > +For vhost-user-client ports, OVS DPDK acts as the client and QEMU the server.
> > > +
> > >  ### 6.1 vhost-user
> > >
> > >    - Prerequisites:
> > > @@ -570,49 +585,6 @@ For users wanting to do packet forwarding using kernel stack below are the steps
> > >         where `-L`: Changes the numbers of channels of the specified network device
> > >         and `combined`: Changes the number of multi-purpose channels.
> > >
> > > -    4. OVS vHost client-mode & vHost reconnect (OPTIONAL)
> > > -
> > > -       By default, OVS DPDK acts as the vHost socket server for dpdkvhostuser
> > > -       ports and QEMU acts as the vHost client. This means OVS creates and
> > > -       manages the vHost socket and QEMU is the client which connects to the
> > > -       vHost server (OVS). In QEMU v2.7 the option is available for QEMU to act
> > > -       as the vHost server meaning the roles can be reversed and OVS can become
> > > -       the vHost client. To enable client mode for a given dpdkvhostuser port,
> > > -       one must specify a valid 'vhost-server-path' like so:
> > > -
> > > -       ```
> > > -       ovs-vsctl set Interface dpdkvhostuser0 options:vhost-server-path=/path/to/socket
> > > -       ```
> > > -
> > > -       Setting this value automatically switches the port to client mode (from
> > > -       OVS' perspective). 'vhost-server-path' reflects the full path of the
> > > -       socket that has been or will be created by QEMU for the given vHost User
> > > -       port. Once a path is specified, the port will remain in 'client' mode
> > > -       for the remainder of it's lifetime ie. it cannot be reverted back to
> > > -       server mode.
> > > -
> > > -       One must append ',server' to the 'chardev' arguments on the QEMU command
> > > -       line, to instruct QEMU to use vHost server mode for a given interface,
> > > -       like so:
> > > -
> > > -       ````
> > > -       -chardev socket,id=char0,path=/path/to/socket,server
> > > -       ````
> > > -
> > > -       If the corresponding dpdkvhostuser port has not yet been configured in
> > > -       OVS with vhost-server-path=/path/to/socket, QEMU will print a log
> > > -       similar to the following:
> > > -
> > > -       `QEMU waiting for connection on: disconnected:unix:/path/to/socket,server`
> > > -
> > > -       QEMU will wait until the port is created sucessfully in OVS to boot the
> > > -       VM.
> > > -
> > > -       One benefit of using this mode is the ability for vHost ports to
> > > -       'reconnect' in event of the switch crashing or being brought down. Once
> > > -       it is brought back up, the vHost ports will reconnect automatically and
> > > -       normal service will resume.
> > > -
> > >    - VM Configuration with libvirt
> > >
> > >      * change the user/group, access control policty and restart libvirtd.
> > > @@ -657,7 +629,49 @@ For users wanting to do packet forwarding using kernel stack below are the steps
> > >
> > >        Note: For information on libvirt and further tuning refer [libvirt].
> > >
> > > -### 6.2 DPDK backend inside VM
> > > +### 6.2 vhost-user-client
> > > +
> > > +  - Prerequisites:
> > > +
> > > +    QEMU version >= 2.7
> > > +
> > > +  - Adding vhost-user-client ports to Switch
> > > +
> > > +    ```
> > > +    ovs-vsctl add-port br0 vhost-client-1 -- set Interface vhost-client-1
> > > +    type=dpdkvhostuserclient options:vhost-server-path=/path/to/socket
> > > +    ```
> > > +
> > > +    Unlike vhost-user ports, the name given to the port does not govern the
> > > +    name of the socket device. 'vhost-server-path' reflects the full path of
> > > +    the socket that has been or will be created by QEMU for the given vHost
> > > +    User client port.
> > > +
> > > +  - Adding vhost-user-client ports to VM
> > > +
> > > +    The same QEMU parameters as vhost-user ports described in section 6.1 can
> > > +    be used, with one change necessary. One must append ',server' to the
> > > +    'chardev' arguments on the QEMU command line, to instruct QEMU to use vHost
> > > +    server mode for a given interface, like so:
> > > +
> > > +    ````
> > > +    -chardev socket,id=char0,path=/path/to/socket,server
> > > +    ````
> > > +
> > > +    If the corresponding dpdkvhostuserclient port has not yet been configured
> > > +    in OVS with vhost-server-path=/path/to/socket, QEMU will print a log
> > > +    similar to the following:
> > > +
> > > +    `QEMU waiting for connection on: disconnected:unix:/path/to/socket,server`
> > > +
> > > +    QEMU will wait until the port is created successfully in OVS to boot the VM.
> > > +
> > > +    One benefit of using this mode is the ability for vHost ports to
> > > +    'reconnect' in the event of the switch crashing or being brought down. Once
> > > +    it is brought back up, the vHost ports will reconnect automatically and
> > > +    normal service will resume.
> > > +
> > > +### 6.3 DPDK backend inside VM
> > >
> > >    Please note that additional configuration is required if you want to run
> > >    ovs-vswitchd with DPDK backend inside a QEMU virtual machine. Ovs-vswitchd
> > > diff --git a/NEWS b/NEWS
> > > index 12788b6..921887e 100644
> > > --- a/NEWS
> > > +++ b/NEWS
> > > @@ -81,6 +81,7 @@ v2.6.0 - xx xxx xxxx
> > >       * Jumbo frame support
> > >       * Remove dpdkvhostcuse port type.
> > >       * OVS client mode for vHost and vHost reconnect (Requires QEMU 2.7)
> > > +     * 'dpdkvhostuserclient' port type.
> > >     - Increase number of registers to 16.
> > >     - ovs-benchmark: This utility has been removed due to lack of use and
> > >       bitrot.
> > > diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> > > index 6d334db..81aea2d 100644
> > > --- a/lib/netdev-dpdk.c
> > > +++ b/lib/netdev-dpdk.c
> > > @@ -356,9 +356,8 @@ struct netdev_dpdk {
> > >      /* True if vHost device is 'up' and has been reconfigured at least once */
> > >      bool vhost_reconfigured;
> > >
> > > -    /* Identifiers used to distinguish vhost devices from each other. */
> > > -    char vhost_server_id[PATH_MAX];
> > > -    char vhost_client_id[PATH_MAX];
> > > +    /* Identifier used to distinguish vhost devices from each other. */
> > > +    char vhost_id[PATH_MAX];
> > > +    char vhost_id[PATH_MAX];
> > >
> > >      /* In dpdk_list. */
> > >      struct ovs_list list_node OVS_GUARDED_BY(dpdk_mutex);
> > > @@ -814,8 +813,6 @@ netdev_dpdk_init(struct netdev *netdev, unsigned int port_no,
> > >      dev->max_packet_len = MTU_TO_FRAME_LEN(dev->mtu);
> > >      ovsrcu_index_init(&dev->vid, -1);
> > >      dev->vhost_reconfigured = false;
> > > -    /* initialise vHost port in server mode */
> > > -    dev->vhost_driver_flags &= ~RTE_VHOST_USER_CLIENT;
> > >
> > >      err = netdev_dpdk_mempool_configure(dev);
> > >      if (err) {
> > > @@ -878,16 +875,6 @@ dpdk_dev_parse_name(const char dev_name[], const char prefix[],
> > >      }
> > >  }
> > >
> > > -/* Returns a pointer to the relevant vHost socket ID depending on the mode in
> > > - * use */
> > > -static char *
> > > -get_vhost_id(struct netdev_dpdk *dev)
> > > -    OVS_REQUIRES(dev->mutex)
> > > -{
> > > -    return dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT ?
> > > -           dev->vhost_client_id : dev->vhost_server_id;
> > > -}
> > > -
> > >  static int
> > >  netdev_dpdk_vhost_construct(struct netdev *netdev)
> > >  {
> > > @@ -911,27 +898,38 @@ netdev_dpdk_vhost_construct(struct netdev *netdev)
> > >
> > >      ovs_mutex_lock(&dpdk_mutex);
> > >      /* Take the name of the vhost-user port and append it to the location where
> > > -     * the socket is to be created, then register the socket. Sockets are
> > > -     * registered initially in 'server' mode.
> > > +     * the socket is to be created, then register the socket.
> > >       */
> > > -    snprintf(dev->vhost_server_id, sizeof dev->vhost_server_id, "%s/%s",
> > > +    snprintf(dev->vhost_id, sizeof dev->vhost_id, "%s/%s",
> > >               vhost_sock_dir, name);
> > >
> > > -    err = rte_vhost_driver_register(dev->vhost_server_id,
> > > -                                    dev->vhost_driver_flags);
> > > +    dev->vhost_driver_flags &= ~RTE_VHOST_USER_CLIENT;
> > > +    err = rte_vhost_driver_register(dev->vhost_id, dev->vhost_driver_flags);
> > >      if (err) {
> > >          VLOG_ERR("vhost-user socket device setup failure for socket
> > > %s\n",
> > > -                 dev->vhost_server_id);
> > > +                 dev->vhost_id);
> > >      } else {
> > > -        if (!(dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT)) {
> > > -            /* OVS server mode - add this socket to list for deletion */
> > > -            fatal_signal_add_file_to_unlink(dev->vhost_server_id);
> > > -            VLOG_INFO("Socket %s created for vhost-user port %s\n",
> > > -                      dev->vhost_server_id, name);
> > > -        }
> > > -        err = netdev_dpdk_init(netdev, -1, DPDK_DEV_VHOST);
> > > +        fatal_signal_add_file_to_unlink(dev->vhost_id);
> > > +        VLOG_INFO("Socket %s created for vhost-user port %s\n",
> > > +                  dev->vhost_id, name);
> > > +    }
> > > +    err = netdev_dpdk_init(netdev, -1, DPDK_DEV_VHOST);
> > > +
> > > +    ovs_mutex_unlock(&dpdk_mutex);
> > > +    return err;
> > > +}
> > > +
> > > +static int
> > > +netdev_dpdk_vhost_client_construct(struct netdev *netdev)
> > > +{
> > > +    int err;
> > > +
> > > +    if (rte_eal_init_ret) {
> > > +        return rte_eal_init_ret;
> > >      }
> > >
> > > +    ovs_mutex_lock(&dpdk_mutex);
> > > +    err = netdev_dpdk_init(netdev, -1, DPDK_DEV_VHOST);
> > >      ovs_mutex_unlock(&dpdk_mutex);
> > >      return err;
> > >  }
> > > @@ -1005,8 +1003,7 @@ netdev_dpdk_vhost_destruct(struct netdev *netdev)
> > >          VLOG_ERR("Removing port '%s' while vhost device still attached.",
> > >                   netdev->name);
> > >          VLOG_ERR("To restore connectivity after re-adding of port, VM on socket"
> > > -                 " '%s' must be restarted.",
> > > -                 get_vhost_id(dev));
> > > +                 " '%s' must be restarted.", dev->vhost_id);
> > >      }
> > >
> > >      free(ovsrcu_get_protected(struct ingress_policer *,
> > > @@ -1016,7 +1013,7 @@ netdev_dpdk_vhost_destruct(struct netdev *netdev)
> > >      ovs_list_remove(&dev->list_node);
> > >      dpdk_mp_put(dev->dpdk_mp);
> > >
> > > -    vhost_id = xstrdup(get_vhost_id(dev));
> > > +    vhost_id = xstrdup(dev->vhost_id);
> > >
> > >      ovs_mutex_unlock(&dev->mutex);
> > >      ovs_mutex_unlock(&dpdk_mutex);
> > > @@ -1108,15 +1105,16 @@ netdev_dpdk_ring_set_config(struct netdev *netdev, const struct smap *args)
> > >  }
> > >
> > >  static int
> > > -netdev_dpdk_vhost_set_config(struct netdev *netdev, const struct smap *args)
> > > +netdev_dpdk_vhost_client_set_config(struct netdev *netdev,
> > > +                                    const struct smap *args)
> > >  {
> > >      struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
> > >      const char *path;
> > >
> > >      if (!(dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT)) {
> > >          path = smap_get(args, "vhost-server-path");
> > > -        if (path && strcmp(path, dev->vhost_client_id)) {
> > > -            strcpy(dev->vhost_client_id, path);
> > > +        if (path && strcmp(path, dev->vhost_id)) {
> > > +            strcpy(dev->vhost_id, path);
> > >              netdev_request_reconfigure(netdev);
> > >          }
> > >      }
> > > @@ -2302,7 +2300,7 @@ netdev_dpdk_remap_txqs(struct netdev_dpdk *dev)
> > >          }
> > >      }
> > >
> > > -    VLOG_DBG("TX queue mapping for %s\n", get_vhost_id(dev));
> > > +    VLOG_DBG("TX queue mapping for %s\n", dev->vhost_id);
> > >      for (i = 0; i < total_txqs; i++) {
> > >          VLOG_DBG("%2d --> %2d", i, dev->tx_q[i].map);
> > >      }
> > > @@ -2327,7 +2325,7 @@ new_device(int vid)
> > >      /* Add device to the vhost port with the same name as that passed down. */
> > >      LIST_FOR_EACH(dev, list_node, &dpdk_list) {
> > >          ovs_mutex_lock(&dev->mutex);
> > > -        if (strncmp(ifname, get_vhost_id(dev), IF_NAME_SZ) == 0) {
> > > +        if (strncmp(ifname, dev->vhost_id, IF_NAME_SZ) == 0) {
> > >              uint32_t qp_num = rte_vhost_get_queue_num(vid);
> > >
> > >              /* Get NUMA information */
> > > @@ -2456,7 +2454,7 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
> > >      ovs_mutex_lock(&dpdk_mutex);
> > >      LIST_FOR_EACH (dev, list_node, &dpdk_list) {
> > >          ovs_mutex_lock(&dev->mutex);
> > > -        if (strncmp(ifname, get_vhost_id(dev), IF_NAME_SZ) == 0) {
> > > +        if (strncmp(ifname, dev->vhost_id, IF_NAME_SZ) == 0) {
> > >              if (enable) {
> > >                  dev->tx_q[qid].map = qid;
> > >              } else {
> > > @@ -2949,17 +2947,11 @@ out:
> > >      return err;
> > >  }
> > >
> > > -static int
> > > -netdev_dpdk_vhost_reconfigure(struct netdev *netdev)
> > > +static void
> > > +dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev)
> > >  {
> > > -    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
> > > -    int err = 0;
> > > -
> > > -    ovs_mutex_lock(&dpdk_mutex);
> > > -    ovs_mutex_lock(&dev->mutex);
> > > -
> > > -    netdev->n_txq = dev->requested_n_txq;
> > > -    netdev->n_rxq = dev->requested_n_rxq;
> > > +    dev->up.n_txq = dev->requested_n_txq;
> > > +    dev->up.n_rxq = dev->requested_n_rxq;
> > >
> > >      /* Enable TX queue 0 by default if it wasn't disabled. */
> > >      if (dev->tx_q[0].map == OVS_VHOST_QUEUE_MAP_UNKNOWN) {
> > > @@ -2971,50 +2963,61 @@ netdev_dpdk_vhost_reconfigure(struct netdev *netdev)
> > >      if (dev->requested_socket_id != dev->socket_id
> > >          || dev->requested_mtu != dev->mtu) {
> > >          if (!netdev_dpdk_mempool_configure(dev)) {
> > > -            netdev_change_seq_changed(netdev);
> > > +            netdev_change_seq_changed(&dev->up);
> > >          }
> > >      }
> > >
> > >      if (netdev_dpdk_get_vid(dev) >= 0) {
> > >          dev->vhost_reconfigured = true;
> > >      }
> > > +}
> > > +
> > > +static int
> > > +netdev_dpdk_vhost_reconfigure(struct netdev *netdev)
> > > +{
> > > +    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
> > > +
> > > +    ovs_mutex_lock(&dpdk_mutex);
> > > +    ovs_mutex_lock(&dev->mutex);
> > > +
> > > +    dpdk_vhost_reconfigure_helper(dev);
> > > +
> > > +    ovs_mutex_unlock(&dev->mutex);
> > > +    ovs_mutex_unlock(&dpdk_mutex);
> > > +
> > > +    return 0;
> > > +}
> > > +
> > > +static int
> > > +netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
> > > +{
> > > +    struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
> > > +    int err = 0;
> > > +
> > > +    ovs_mutex_lock(&dpdk_mutex);
> > > +    ovs_mutex_lock(&dev->mutex);
> > > +
> > > +    dpdk_vhost_reconfigure_helper(dev);
> > >
> > >      /* Configure vHost client mode if requested and if the following criteria
> > >       * are met:
> > > -     *  1. Device is currently in 'server' mode.
> > > -     *  2. Device is currently not active.
> > > -     *  3. A path has been specified.
> > > +     *  1. Device hasn't been registered yet.
> > > +     *  2. A path has been specified.
> > >       */
> > >      if (!(dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT)
> > > -            && !(netdev_dpdk_get_vid(dev) >= 0)
> > > -            && strlen(dev->vhost_client_id)) {
> > > -        /* Unregister server-mode device */
> > > -        char *vhost_id = xstrdup(get_vhost_id(dev));
> > > -
> > > -        ovs_mutex_unlock(&dev->mutex);
> > > -        ovs_mutex_unlock(&dpdk_mutex);
> > > -        err = dpdk_vhost_driver_unregister(dev, vhost_id);
> > > -        free(vhost_id);
> > > -        ovs_mutex_lock(&dpdk_mutex);
> > > -        ovs_mutex_lock(&dev->mutex);
> > > +            && strlen(dev->vhost_id)) {
> > > +        /* Register client-mode device */
> > > +        err = rte_vhost_driver_register(dev->vhost_id,
> > > +                                        RTE_VHOST_USER_CLIENT);
> > >          if (err) {
> > > -            VLOG_ERR("Unable to remove vhost-user socket %s",
> > > -                     get_vhost_id(dev));
> > > +            VLOG_ERR("vhost-user device setup failure for device
> > > %s\n",
> > > +                    dev->vhost_id);
> > >          } else {
> > > -            fatal_signal_remove_file_to_unlink(get_vhost_id(dev));
> > > -            /* Register client-mode device */
> > > -            err = rte_vhost_driver_register(dev->vhost_client_id,
> > > -                                            RTE_VHOST_USER_CLIENT);
> > > -            if (err) {
> > > -                VLOG_ERR("vhost-user device setup failure for device
> > > %s\n",
> > > -                        dev->vhost_client_id);
> > > -            } else {
> > > -                /* Configuration successful */
> > > -                dev->vhost_driver_flags |= RTE_VHOST_USER_CLIENT;
> > > -                VLOG_INFO("vHost User device '%s' changed to
> > 'client'
> > > mode, "
> > > -                          "using client socket '%s'",
> > > -                           dev->up.name, get_vhost_id(dev));
> > > -            }
> > > +            /* Configuration successful */
> > > +            dev->vhost_driver_flags |= RTE_VHOST_USER_CLIENT;
> > > +            VLOG_INFO("vHost User device '%s' created in 'client'
> > > mode, "
> > > +                      "using client socket '%s'",
> > > +                      dev->up.name, dev->vhost_id);
> > >          }
> > >      }
> > >
> > > @@ -3521,7 +3524,7 @@ static const struct netdev_class dpdk_vhost_class =
> > >          "dpdkvhostuser",
> > >          netdev_dpdk_vhost_construct,
> > >          netdev_dpdk_vhost_destruct,
> > > -        netdev_dpdk_vhost_set_config,
> > > +        NULL,
> > >          NULL,
> > >          netdev_dpdk_vhost_send,
> > >          netdev_dpdk_vhost_get_carrier,
> > > @@ -3530,6 +3533,20 @@ static const struct netdev_class dpdk_vhost_class =
> > >          NULL,
> > >          netdev_dpdk_vhost_reconfigure,
> > >          netdev_dpdk_vhost_rxq_recv);
> > > +static const struct netdev_class dpdk_vhost_client_class =
> > > +    NETDEV_DPDK_CLASS(
> > > +        "dpdkvhostuserclient",
> > > +        netdev_dpdk_vhost_client_construct,
> > > +        netdev_dpdk_vhost_destruct,
> > > +        netdev_dpdk_vhost_client_set_config,
> > > +        NULL,
> > > +        netdev_dpdk_vhost_send,
> > > +        netdev_dpdk_vhost_get_carrier,
> > > +        netdev_dpdk_vhost_get_stats,
> > > +        NULL,
> > > +        NULL,
> > > +        netdev_dpdk_vhost_client_reconfigure,
> > > +        netdev_dpdk_vhost_rxq_recv);
> > >
> > >  void
> > >  netdev_dpdk_register(void)
> > > @@ -3538,6 +3555,7 @@ netdev_dpdk_register(void)
> > >      netdev_register_provider(&dpdk_class);
> > >      netdev_register_provider(&dpdk_ring_class);
> > >      netdev_register_provider(&dpdk_vhost_class);
> > > +    netdev_register_provider(&dpdk_vhost_client_class);
> > >  }
> > >
> > >  void
> > > diff --git a/vswitchd/vswitch.xml b/vswitchd/vswitch.xml
> > > index 69b5592..5b9689a 100644
> > > --- a/vswitchd/vswitch.xml
> > > +++ b/vswitchd/vswitch.xml
> > > @@ -2370,11 +2370,9 @@
> > >        <column name="options" key="vhost-server-path"
> > >                type='{"type": "string"}'>
> > >          <p>
> > > -          When specified, switches the given port permanently to 'client'
> > > -          mode. The value specifies the path to the socket associated with a
> > > -          vHost User client mode device that has been or will be created by
> > > -          QEMU.
> > > -          Only supported by DPDK vHost interfaces.
> > > +          The value specifies the path to the socket associated with a vHost
> > > +          User client mode device that has been or will be created by QEMU.
> > > +          Only supported by dpdkvhostuserclient interfaces.
> > > +          Only supported by dpdkvhostuserclient interfaces.
> > >          </p>
> > >        </column>
> > >      </group>
> > > --
> > > 2.4.3
>
> _______________________________________________
> dev mailing list
> dev at openvswitch.org
> http://openvswitch.org/mailman/listinfo/dev
>


