[ovs-dev] [PATCH 1/4] netdev-dpdk: fix mempool management with vhu client.

antonio.fischetti at intel.com
Tue Sep 26 15:04:08 UTC 2017


In a PVP test where vhostuser ports are configured as
clients, OvS crashes when QEMU is launched.
This patch avoids repeated calls to netdev_change_seq_changed
after the requested mempool has already been acquired, by
reconfiguring the mempool only when the requested MTU or
socket id actually differs from the current one.

CC: Ciara Loftus <ciara.loftus at intel.com>
CC: Kevin Traynor <ktraynor at redhat.com>
CC: Aaron Conole <aconole at redhat.com>
Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each port.")
Signed-off-by: Antonio Fischetti <antonio.fischetti at intel.com>
---
To reproduce the bug scenario:

 PVP test setup
 --------------
CLIENT_SOCK_DIR=/tmp
SOCK0=dpdkvhostuser0
SOCK1=dpdkvhostuser1

Use 1 PMD.
Add 2 dpdk ports with n_rxq=1.
Add 2 vhu ports, both of type dpdkvhostuserclient, and specify their
vhost-server-path (a sketch of the full commands follows below):
 ovs-vsctl set Interface dpdkvhostuser0 options:vhost-server-path="$CLIENT_SOCK_DIR/$SOCK0"
 ovs-vsctl set Interface dpdkvhostuser1 options:vhost-server-path="$CLIENT_SOCK_DIR/$SOCK1"
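
For reference, a minimal sketch of the bridge and port setup implied
above. The PCI addresses and pmd-cpu-mask are placeholders; adjust
them to your system.

 ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x2
 ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
 ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
     options:dpdk-devargs=0000:05:00.0 options:n_rxq=1
 ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk \
     options:dpdk-devargs=0000:05:00.1 options:n_rxq=1
 ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuserclient
 ovs-vsctl add-port br0 dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuserclient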

Set port-based flow rules with ovs-ofctl: dpdk0 <--> vhu0 and dpdk1 <--> vhu1
 ovs-ofctl add-flow br0 in_port=1,action=output:3
 ovs-ofctl add-flow br0 in_port=3,action=output:1
 ovs-ofctl add-flow br0 in_port=4,action=output:2
 ovs-ofctl add-flow br0 in_port=2,action=output:4
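
To confirm the rules are installed, the standard query is:

 ovs-ofctl dump-flows br0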

 Launch QEMU
 -----------
As the OvS vhu ports are acting as clients, QEMU must open each
vhost-user socket in 'server' mode, hence the 'server' option on the
-chardev arguments in the command below.
VM_IMAGE=<path/to/your/vm/image>

 sudo -E taskset 0x3F00 $QEMU_DIR/x86_64-softmmu/qemu-system-x86_64 \
     -name us-vhost-vm1 -cpu host -enable-kvm -m 4096M \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
     -numa node,memdev=mem -mem-prealloc -smp 4 \
     -drive file=$VM_IMAGE \
     -chardev socket,id=char0,path=$CLIENT_SOCK_DIR/$SOCK0,server \
     -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
     -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off \
     -chardev socket,id=char1,path=$CLIENT_SOCK_DIR/$SOCK1,server \
     -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
     -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mrg_rxbuf=off \
     --nographic

 Expected behavior
 -----------------
With this fix applied, OvS no longer crashes when QEMU is launched.
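
To double-check the mempool handling, debug logging for the
netdev_dpdk module can be enabled; the log path below assumes a
default from-source install prefix.

 ovs-appctl vlog/set netdev_dpdk:file:dbg
 grep -i mempool /usr/local/var/log/openvswitch/ovs-vswitchd.log

The "Allocated ... mempool" debug message should appear once when a
port's mempool is created, rather than on every vhost reconfiguration.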
---
 lib/netdev-dpdk.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 648d719..2f5ec71 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -549,7 +549,7 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu)
                                           dmp->socket_id);
         if (dmp->mp) {
             VLOG_DBG("Allocated \"%s\" mempool with %u mbufs", mp_name,
-                     dmp->mp_size);
+                    dmp->mp_size);
         } else if (rte_errno == EEXIST) {
             /* A mempool with the same name already exists.  We just
              * retrieve its pointer to be returned to the caller. */
@@ -571,6 +571,7 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu)
              * initializes some OVS specific fields of dp_packet.
              */
             rte_mempool_obj_iter(dmp->mp, ovs_rte_pktmbuf_init, NULL);
+
             return dmp;
         }
     } while (!mp_exists &&
@@ -3247,11 +3248,14 @@ dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev)
 
     netdev_dpdk_remap_txqs(dev);
 
-    err = netdev_dpdk_mempool_configure(dev);
-    if (err) {
-        return err;
-    } else {
-        netdev_change_seq_changed(&dev->up);
+    if (dev->requested_socket_id != dev->socket_id
+        || dev->requested_mtu != dev->mtu) {
+        err = netdev_dpdk_mempool_configure(dev);
+        if (err) {
+            return err;
+        } else {
+            netdev_change_seq_changed(&dev->up);
+        }
     }
 
     if (netdev_dpdk_get_vid(dev) >= 0) {
-- 
2.4.11


