[ovs-discuss] Poor performance when using OVS with DPDK

Vipul Ujawane vipul999ujawane at gmail.com
Wed Jun 24 10:56:43 UTC 2020

Dear all,

I am observing very low performance when running OVS-DPDK compared to
OVS running with the kernel datapath.
I have OvS version 2.13.90 compiled from source with the latest stable DPDK
v19.11.3, on a stable Debian system running kernel 4.19.0-9-amd64.

I have also tried the latest released OvS (2.12) with the same LTS DPDK.
As a last resort, I tried an older kernel (4.19.0-8-amd64, real version
4.19.98) to check whether the kernel itself was the problem.

I have not been able to troubleshoot the problem, and kindly request your
help regarding the same.

HW configuration
We have two identical servers (Debian stable, Intel(R) Xeon(R) Gold 6230
CPU, 96G RAM), each running a KVM virtual machine. On the hypervisor
layer, we use OvS for traffic routing. The servers are connected directly
via a Mellanox ConnectX-5 (1x100G).
The OVS forwarding tables are configured for simple port forwarding only,
to avoid any packet-processing-related issues.
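For reference, "simple port forwarding" means flows along these lines
(the exact OpenFlow port numbers here are illustrative placeholders;
"ovsbr" is the bridge used in the start config further below):

```shell
# Wipe the table and install two symmetric port-forwarding flows.
# Port numbers 1 and 2 are placeholders for the physical and vhost ports.
ovs-ofctl del-flows ovsbr
ovs-ofctl add-flow ovsbr "in_port=1,actions=output:2"
ovs-ofctl add-flow ovsbr "in_port=2,actions=output:1"
```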

When both servers are running OVS-Kernel at the hypervisor layer and VMs
are connected to it via libvirt and virtio interfaces, the
VM->Server1->Server2->VM throughput is around 16-18Gbps.
However, when using OVS-DPDK with the same setting, the throughput drops
down to 4-6Gbps.
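(The measurement tool is not shown above; a typical VM-to-VM run, assuming
iperf3 in the guests and 10.0.0.2 as a placeholder address for the
receiving VM, would look like the following.)

```shell
# On the receiving VM: start an iperf3 server in the background.
iperf3 -s &

# On the sending VM: 4 parallel TCP streams for 30 seconds
# against the receiver (10.0.0.2 is a placeholder address).
iperf3 -c 10.0.0.2 -P 4 -t 30
```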

SW/driver configurations:
In DPDK's config/common_base, besides the defaults, I have enabled the
following extra drivers/features to be compiled in.

$ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.13.90

$sudo ovs-vsctl get Open_vSwitch . dpdk_initialized

$sudo ovs-vsctl get Open_vSwitch . dpdk_version
"DPDK 19.11.3"

OS settings
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster

$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.19.0-9-amd64 root=/dev/mapper/Volume0-debian--stable
ro default_hugepagesz=1G hugepagesz=1G hugepages=16 intel_iommu=on iommu=pt
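Whether the 1G hugepages requested on the kernel command line were
actually reserved can be verified at runtime from /proc/meminfo:

```shell
# Verify that the sixteen 1G hugepages from the kernel command line
# were actually reserved; the kernel exposes the counters in /proc/meminfo.
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
```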

./usertools/dpdk-devbind.py --status
Network devices using kernel driver
0000:b3:00.0 'MT27800 Family [ConnectX-5] 1017' if=ens2 drv=mlx5_core

Due to the way Mellanox cards and their driver work, I have not bound
igb_uio to the interface; however, the uio, igb_uio and vfio-pci kernel
modules are loaded.
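Module presence can be double-checked with lsmod. With a ConnectX-5 the
mlx5 PMD runs on top of the kernel driver (mlx5_core stays bound to the
NIC), so the uio/vfio modules are loaded but unused by this port:

```shell
# Confirm the modules are loaded; mlx5_core remains bound to the NIC,
# while uio/igb_uio/vfio_pci sit idle as far as the ConnectX-5 is concerned.
lsmod | grep -E '^(uio|igb_uio|vfio_pci|mlx5_core)'
```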

Relevant part of the VM-config for Qemu/KVM
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <emulatorpin cpuset='4-5'/>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
    <topology sockets='2' cores='1' threads='1'/>
    <numa>
      <cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
    </numa>
  </cpu>
    <interface type='vhostuser'>
      <mac address='00:00:00:00:00:aa'/>
      <source type='unix' path='/usr/local/var/run/openvswitch/vhostuser'/>
      <model type='virtio'/>
      <driver queues='2'>
        <host mrg_rxbuf='on'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00'/>
    </interface>

OVS Start Config
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="4096,0"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0xff
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0e
ovs-vsctl add-port ovsbr dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl set interface dpdk0 options:n_rxq=2
ovs-vsctl add-port ovsbr vhost-vm -- set Interface vhostuser type=dpdkvhostuser
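For completeness, after the ports are up, the per-PMD load and rx-queue
placement can be inspected with the standard ovs-appctl commands, which
should show whether a single PMD thread is handling all the traffic:

```shell
# Show which PMD thread polls which rx queue, and per-PMD cycle/packet
# statistics, to spot a saturated or idle polling thread.
ovs-appctl dpif-netdev/pmd-rxq-show
ovs-appctl dpif-netdev/pmd-stats-show
```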


Is there anything I should be aware of regarding the versions and settings
I am using? Did I compile DPDK and/or OvS in a wrong way?

Thank you for your kind help ;)


Vipul Ujawane