[ovs-dev] [PATCH v2 1/4] netdev-dpdk: fix mempool management with vhu client.

Kevin Traynor ktraynor at redhat.com
Mon Oct 2 17:38:17 UTC 2017


On 09/28/2017 03:28 PM, antonio.fischetti at intel.com wrote:
> From: Antonio Fischetti <antonio.fischetti at intel.com>
> 
> In a PVP test where vhostuser ports are configured as
> clients, OvS crashes when QEMU is launched.
> This patch avoids the repeated calls to netdev_change_seq_changed
> after the requested mempool is already acquired.
> 

Can you explain what is happening in this bug? I can't reproduce it, and
the mempool for vhost ports should only be reconfigured if the number of
queues or the socket has changed.

> CC: Kevin Traynor <ktraynor at redhat.com>
> CC: Aaron Conole <aconole at redhat.com>
> Reported-by: Ciara Loftus <ciara.loftus at intel.com>
> Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each port.")
> Signed-off-by: Antonio Fischetti <antonio.fischetti at intel.com>
> ---
> To replicate the bug scenario:
> 
>  PVP test setup
>  --------------
> CLIENT_SOCK_DIR=/tmp
> SOCK0=dpdkvhostuser0
> SOCK1=dpdkvhostuser1
> 
> 1 PMD
> Add 2 dpdk ports, n_rxq=1
> Add 2 vhu ports both of type dpdkvhostuserclient and specify vhost-server-path
>  ovs-vsctl set Interface dpdkvhostuser0 options:vhost-server-path="$CLIENT_SOCK_DIR/$SOCK0"
>  ovs-vsctl set Interface dpdkvhostuser1 options:vhost-server-path="$CLIENT_SOCK_DIR/$SOCK1"
> 
> Set port-based rules: dpdk0 <--> vhu0 and dpdk1 <--> vhu1
>  add-flow br0 in_port=1,action=output:3
>  add-flow br0 in_port=3,action=output:1
>  add-flow br0 in_port=4,action=output:2
>  add-flow br0 in_port=2,action=output:4
> 
>  Launch QEMU
>  -----------
> As OvS vhu ports are acting as clients, we must specify 'server' in the next command.
> VM_IMAGE=<path/to/your/vm/image>
> 
>  sudo -E taskset 0x3F00 $QEMU_DIR/x86_64-softmmu/qemu-system-x86_64 \
>    -name us-vhost-vm1 -cpu host -enable-kvm -m 4096M \
>    -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
>    -numa node,memdev=mem -mem-prealloc -smp 4 -drive file=$VM_IMAGE \
>    -chardev socket,id=char0,path=$CLIENT_SOCK_DIR/$SOCK0,server \
>    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
>    -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off \
>    -chardev socket,id=char1,path=$CLIENT_SOCK_DIR/$SOCK1,server \
>    -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
>    -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mrg_rxbuf=off \
>    --nographic
> 
>  Expected behavior
>  -----------------
> With this fix OvS shouldn't crash.
> ---
>  lib/netdev-dpdk.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> index c60f46f..dda3771 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -621,6 +621,10 @@ netdev_dpdk_mempool_configure(struct netdev_dpdk *dev)
>      uint32_t buf_size = dpdk_buf_size(dev->requested_mtu);
>      struct dpdk_mp *mp;
>  
> +    if (dev->requested_socket_id == dev->socket_id
> +        && dev->requested_mtu == dev->mtu) {
> +        return EEXIST;
> +    }

But you would want to get a new mempool if the number of queues has
changed, as that is part of the calculation for the size of the mempool.
MIN_NB_MBUF was added to over-provision as a safety margin, so you'd
probably get away with it, but you should be requesting a new mempool
with the correctly calculated number of mbufs.

It seems like this code is trying to add back the check that prevents
rte_pktmbuf_pool_create being called again for the same values. In the
patchset that this fixes, that check was removed and the EEXIST return
from rte_pktmbuf_pool_create is handled instead. I'm not sure that both
mechanisms are needed.
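
That is, dpdk_mp_create() already copes with a duplicate pool name by
falling back to the existing pool, roughly like this (a sketch, not
verbatim from the tree):

    pool = rte_pktmbuf_pool_create(mp_name, n_mbufs, MP_CACHE_SZ,
                                   sizeof(struct dp_packet)
                                   - sizeof(struct rte_mbuf),
                                   MBUF_SIZE(mtu)
                                   - sizeof(struct dp_packet),
                                   socket_id);
    if (!pool && rte_errno == EEXIST) {
        /* A mempool with this name already exists; reuse it. */
        pool = rte_mempool_lookup(mp_name);
    }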

>      mp = dpdk_mp_get(dev, FRAME_LEN_TO_MTU(buf_size));
>      if (!mp) {
>          VLOG_ERR("Failed to create memory pool for netdev "
> @@ -3207,7 +3211,7 @@ netdev_dpdk_reconfigure(struct netdev *netdev)
>      rte_eth_dev_stop(dev->port_id);
>  
>      err = netdev_dpdk_mempool_configure(dev);
> -    if (err) {
> +    if (err && err != EEXIST) {
>          goto out;
>      }
>  
> @@ -3247,10 +3251,10 @@ dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev)
>      netdev_dpdk_remap_txqs(dev);
>  
>      err = netdev_dpdk_mempool_configure(dev);
> -    if (err) {
> -        return err;
> -    } else {
> +    if (!err) {
>          netdev_change_seq_changed(&dev->up);
> +    } else if (err != EEXIST) {
> +        return err;
>      }
>  
>      if (netdev_dpdk_get_vid(dev) >= 0) {
> 


