[ovs-discuss] [dpdk-users] Poor performance when using OVS with DPDK

madhukar mythri madhukar.mythri at gmail.com
Fri Jul 3 16:45:41 UTC 2020


Hi,


1) Can you try isolating the CPUs you have assigned to the host DPDK PMD
(pmd-cpu-mask 0xe as per your configuration) by adding these three kernel
parameters: "isolcpus=1-3 nohz_full=1-3 rcu_nocbs=1-3"?

2) It is better to pin the two Rx queues to specific PMD cores, using the
following command:
]# ovs-vsctl set interface dpdk0 options:n_rxq=2 other_config:pmd-rxq-affinity="0:2,1:3"
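You can then verify which PMD core serves each Rx queue, and how busy the PMD
threads are, with:
]# ovs-appctl dpif-netdev/pmd-rxq-show
]# ovs-appctl dpif-netdev/pmd-stats-show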

3) Cross-check the NUMA node of your PCI NIC; as per the details you
mentioned, it is node '0'.
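For example (numactl is not installed by default on Debian, so treat that
part as an assumption):
]# cat /sys/bus/pci/devices/0000:b3:00.0/numa_node
]# numactl --hardware
The PMD cores in pmd-cpu-mask, the dpdk-socket-mem allocation and the VM's
vCPUs/memory should all come from that same node.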

4) In the VM XML file, it is better to reserve/allocate 1G hugepages from the
respective NUMA node, as follows:
=========
<memoryBacking>
    <hugepages>
      <page size="1" unit="G" nodeset="0"/>
    </hugepages>
   <locked/>
  </memoryBacking>
=========
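If the host has more than one NUMA node, you may also want to pin the guest
memory itself to that node with a numatune element (a sketch, adjust the
nodeset to your topology):
=========
<numatune>
    <memory mode='strict' nodeset='0'/>
</numatune>
=========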


Regards,
Madhukar.


On Tue, Jun 30, 2020 at 1:04 PM Vipul Ujawane <vipul999ujawane at gmail.com>
wrote:

> Dear all,
>
> I am observing very low performance when running OVS-DPDK compared to
> OVS running with the kernel datapath.
> I have OvS version 2.13.90 compiled from source with the latest stable DPDK
> v19.11.3 on a stable Debian system running kernel 4.19.0-9-amd64 (real
> version: 4.19.118).
>
> I have also tried the latest released OvS (2.12) with the same LTS DPDK. As
> a last resort, I have tried an older kernel (4.19.0-8-amd64, real version:
> 4.19.98) to see whether the kernel itself was the problem.
>
> I have not been able to troubleshoot the problem, and kindly request your
> help regarding the same.
>
> HW configuration
> ================
> We have two totally identical servers (Debian stable, Intel(R) Xeon(R)
> Gold 6230 CPU, 96G Mem), each running a KVM virtual machine. On the
> hypervisor layer, we have OvS for traffic routing. The servers are connected
> directly via a Mellanox ConnectX-5 (1x100G).
> The OVS forwarding tables are configured for simple port-forwarding only, to
> avoid any packet-processing-related issues.
>
> Problem
> =======
> When both servers run the OVS kernel datapath at the hypervisor layer and
> the VMs are connected to it via libvirt and virtio interfaces, the
> VM->Server1->Server2->VM throughput is around 16-18 Gbps.
> However, when using OVS-DPDK with the same setup, the throughput drops to
> 4-6 Gbps.
>
>
> SW/driver configurations:
> ==================
> DPDK
> ----
> In config/common_base, besides the defaults, I have enabled the following
> extra drivers/features:
> CONFIG_RTE_LIBRTE_MLX5_PMD=y
> CONFIG_RTE_LIBRTE_VHOST=y
> CONFIG_RTE_LIBRTE_VHOST_NUMA=y
> CONFIG_RTE_LIBRTE_PMD_VHOST=y
> CONFIG_RTE_VIRTIO_USER=n
> CONFIG_RTE_EAL_VFIO=y
>
>
> OVS
> ---
> $ovs-vswitchd --version
> ovs-vswitchd (Open vSwitch) 2.13.90
>
> $sudo ovs-vsctl get Open_vSwitch . dpdk_initialized
> true
>
> $sudo ovs-vsctl get Open_vSwitch . dpdk_version
> "DPDK 19.11.3"
>
> OS settings
> -----------
> $ lsb_release -a
> No LSB modules are available.
> Distributor ID: Debian
> Description: Debian GNU/Linux 10 (buster)
> Release: 10
> Codename: buster
>
>
> $ cat /proc/cmdline
> BOOT_IMAGE=/vmlinuz-4.19.0-9-amd64 root=/dev/mapper/Volume0-debian--stable
> ro default_hugepagesz=1G hugepagesz=1G hugepages=16 intel_iommu=on iommu=pt
> quiet
>
> ./usertools/dpdk-devbind.py --status
> Network devices using kernel driver
> ===================================
> 0000:b3:00.0 'MT27800 Family [ConnectX-5] 1017' if=ens2 drv=mlx5_core
> unused=igb_uio,vfio-pci
>
> Due to the way Mellanox cards and their driver work, I have not bound
> igb_uio to the interface; however, the uio, igb_uio and vfio-pci kernel
> modules are loaded.
>
>
> Relevant part of the VM-config for Qemu/KVM
> -------------------------------------------
>   <cputune>
>     <shares>4096</shares>
>     <vcpupin vcpu='0' cpuset='4'/>
>     <vcpupin vcpu='1' cpuset='5'/>
>     <emulatorpin cpuset='4-5'/>
>   </cputune>
>   <cpu mode='host-model' check='partial'>
>     <model fallback='allow'/>
>     <topology sockets='2' cores='1' threads='1'/>
>     <numa>
>       <cell id='0' cpus='0-1' memory='4194304' unit='KiB'
> memAccess='shared'/>
>     </numa>
>   </cpu>
>     <interface type='vhostuser'>
>       <mac address='00:00:00:00:00:aa'/>
>       <source type='unix' path='/usr/local/var/run/openvswitch/vhostuser'
> mo$
>       <model type='virtio'/>
>       <driver queues='2'>
>         <host mrg_rxbuf='on'/>
>       </driver>
>       <address type='pci' domain='0x0000' bus='0x07' slot='0x00'
> function='0x0'$
>     </interface>
>
> -----------------------------------
> OVS Start Config
> -----------------------------------
> ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
> ovs-vsctl --no-wait set Open_vSwitch .
> other_config:dpdk-socket-mem="4096,0"
> ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0xff
> ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0e
> ovs-vsctl add-port ovsbr dpdk0 -- set Interface dpdk0 type=dpdk
> options:dpdk-devargs=0000:b3:00.0
> ovs-vsctl set interface dpdk0 options:n_rxq=2
> ovs-vsctl add-port ovsbr vhost-vm -- set Interface vhostuser
> type=dpdkvhostuser
>
>
>
>
>
> Is there anything I should be aware of regarding the versions and settings
> I am using? Did I compile DPDK and/or OvS in the wrong way?
>
> Thank you for your kind help ;)
>
> --
>
> Vipul Ujawane
>