[ovs-discuss] OVS-DPDK vhostuserclient ports dropping packets - Ignoring NUMA assignment

Joan Vidal Joan.Vidal at omniaccess.com
Tue May 28 08:27:29 UTC 2019


Hi,

I'm trying to use OVS vhostuserclient ports connected to a libvirt guest that in turn uses DPDK to manage the interfaces.
When I try to send traffic to the guest, OVS drops the packets. So far the only strange thing I have found is an incorrect NUMA assignment of the PMD threads in the OVS logs.

The OVS interfaces vhost0 and vhost1 are connected to a KVM guest. I used other_config to pin the PMD threads on cores 10 and 26 (NUMA node 1) to the vhost0 and vhost1 interfaces, roughly as shown below.
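A minimal sketch of the pinning commands (the full per-interface configuration is listed further down):

ovs-vsctl set Interface vhost0 other_config:pmd-rxq-affinity="0:10"
ovs-vsctl set Interface vhost1 other_config:pmd-rxq-affinity="0:26"

i.e. queue 0 of each vhost port is pinned to core 10 and core 26 respectively, both on NUMA node 1.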

During guest initialization, the OVS log shows the correct PMD thread assignment to NUMA node 1, but then the interfaces are removed and re-added on the wrong node, NUMA 0:

2019-05-27T16:08:11.080Z|00662|netdev_dpdk|INFO|vHost Device '/var/run/openvswitch/vhost0' has been added on numa node 1
2019-05-27T16:08:11.081Z|00679|netdev_dpdk|INFO|vHost Device '/var/run/openvswitch/vhost1' has been added on numa node 1
2019-05-27T16:10:05.744Z|00737|netdev_dpdk|INFO|vHost Device '/var/run/openvswitch/vhost0' has been removed
2019-05-27T16:10:06.585Z|00742|netdev_dpdk|INFO|vHost Device '/var/run/openvswitch/vhost1' has been removed
2019-05-27T16:10:11.856Z|00775|netdev_dpdk|INFO|vHost Device '/var/run/openvswitch/vhost0' has been added on numa node 0
2019-05-27T16:10:11.857Z|00804|netdev_dpdk|INFO|vHost Device '/var/run/openvswitch/vhost1' has been added on numa node 0

According to ovs-appctl, cores 10 and 26 are the ones assigned to vhost0 and vhost1:

ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 1 core_id 9:
  isolated : true
  port: ens1f0            queue-id:  0  pmd usage:  0 %
  port: testdpdk0         queue-id:  0  pmd usage:  0 %
pmd thread numa_id 1 core_id 10:
  isolated : true
  port: vhost0            queue-id:  0  pmd usage:  0 %
pmd thread numa_id 1 core_id 25:
  isolated : true
  port: ens1f1            queue-id:  0  pmd usage:  0 %
  port: testdpdk1         queue-id:  0  pmd usage:  0 %
pmd thread numa_id 1 core_id 26:
  isolated : true
  port: vhost1            queue-id:  0  pmd usage:  0 %

pidstat confirms that only NUMA node 1 has PMD threads:

 pidstat -t -p `pidof ovs-vswitchd` 1 | grep -E pmd\|%CPU
   UID      TGID       TID    %usr %system  %guest    %CPU   CPU  Command
   995         -     20018  100.00    0.00    0.00  100.00     9  |__pmd6
   995         -     20019   99.00    0.00    0.00   99.00    25  |__pmd7
   995         -      6181  100.00    0.00    0.00  100.00    26  |__pmd11
   995         -      6183  100.00    0.00    0.00  100.00    10  |__pmd12
   UID      TGID       TID    %usr %system  %guest    %CPU   CPU  Command
   995         -     20018  100.00    1.00    0.00  100.00     9  |__pmd6
   995         -     20019  100.00    1.00    0.00  100.00    25  |__pmd7
   995         -      6181  100.00    0.00    0.00  100.00    26  |__pmd11
   995         -      6183  100.00    1.00    0.00  100.00    10  |__pmd12


Is this wrong assignment in the logs just a cosmetic issue, or could it be the root cause of the dropped packets?

----------

Software version:

(I tried the same setup on two KVM hosts, one running CentOS and one running Ubuntu, and got exactly the same behaviour on both.)

HOST OS: CentOS Linux release 7.6.1810 (Core)
qemu version: 2.10.0 (qemu-kvm-ev-2.10.0-21.el7_5.7.1)
libvirt version: 4.5.0, package: 10.el7_6.9
ovs-vswitchd (Open vSwitch) 2.11.1
DPDK 18.11.0

HOST OS:  Ubuntu 18.04.2 LTS (Bionic Beaver)
QEMU emulator version 2.11.1 (Debian 1:2.11+dfsg-1ubuntu7.14)
libvirtd (libvirt) 4.0.0
ovs-vswitchd (Open vSwitch) 2.9.2
DPDK 17.11.2


----------

OVS configuration:

ovs-vsctl get Open_vSwitch . other_config:
{dpdk-init="true",  dpdk-socket-mem="0,2048", pmd-cpu-mask="0x06000600"}
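These were set roughly with the following commands (a sketch; bridge and port creation are omitted here):

ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="0,2048"
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x06000600

The mask 0x06000600 selects cores 9, 10, 25 and 26 (the four PMD threads shown by pidstat above), and dpdk-socket-mem="0,2048" allocates hugepage memory only on NUMA node 1.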


ovs-vsctl list interface vhost0
_uuid               : f1a1e95f-2bb9-4760-abfa-47b5786a90f6
admin_state         : up
bfd                 : {}
bfd_status          : {}
cfm_fault           : []
cfm_fault_status    : []
cfm_flap_count      : []
cfm_health          : []
cfm_mpid            : []
cfm_remote_mpids    : []
cfm_remote_opstate  : []
duplex              : []
error               : []
external_ids        : {}
ifindex             : 6900984
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current        : []
link_resets         : 0
link_speed          : []
link_state          : up
lldp                : {}
mac                 : []
mac_in_use          : "00:00:00:00:00:00"
mtu                 : 1500
mtu_request         : []
name                : "vhost0"
ofport              : 3
ofport_request      : []
options             : {n_rxq="1,tag=951", vhost-server-path="/var/run/openvswitch/vhost0"}
other_config        : {pmd-rxq-affinity="0:10"}
statistics          : {"rx_1024_to_1522_packets"=0, "rx_128_to_255_packets"=0, "rx_1523_to_max_packets"=0, "rx_1_to_64_packets"=0, "rx_256_to_511_packets"=0, "rx_512_to_1023_packets"=0, "rx_65_to_127_packets"=0, rx_bytes=0, rx_dropped=0, rx_errors=0, rx_packets=0, tx_bytes=0, tx_dropped=25663, tx_packets=0}
status              : {features="0x00000000780067c2", mode=client, num_of_vrings="2", numa="1", socket="/var/run/openvswitch/vhost0", status=connected, "vring_0_size"="256", "vring_1_size"="256"}
type                : dpdkvhostuserclient



ovs-vsctl list interface vhost1
_uuid               : 892f29fa-7da3-4a0c-ba96-330a47497588
admin_state         : up
bfd                 : {}
bfd_status          : {}
cfm_fault           : []
cfm_fault_status    : []
cfm_flap_count      : []
cfm_health          : []
cfm_mpid            : []
cfm_remote_mpids    : []
cfm_remote_opstate  : []
duplex              : []
error               : []
external_ids        : {}
ifindex             : 11558195
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current        : []
link_resets         : 0
link_speed          : []
link_state          : up
lldp                : {}
mac                 : []
mac_in_use          : "00:00:00:00:00:00"
mtu                 : 1500
mtu_request         : []
name                : "vhost1"
ofport              : 4
ofport_request      : []
options             : {n_rxq="1,tag=952", vhost-server-path="/var/run/openvswitch/vhost1"}
other_config        : {pmd-rxq-affinity="0:26"}
statistics          : {"rx_1024_to_1522_packets"=0, "rx_128_to_255_packets"=0, "rx_1523_to_max_packets"=0, "rx_1_to_64_packets"=0, "rx_256_to_511_packets"=0, "rx_512_to_1023_packets"=0, "rx_65_to_127_packets"=0, rx_bytes=0, rx_dropped=0, rx_errors=0, rx_packets=0, tx_bytes=0, tx_dropped=25663, tx_packets=0}
status              : {features="0x00000000780067c2", mode=client, num_of_vrings="2", numa="1", socket="/var/run/openvswitch/vhost1", status=connected, "vring_0_size"="256", "vring_1_size"="256"}
type                : dpdkvhostuserclient
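For completeness, the two ports were created as dpdkvhostuserclient ports pointing at the socket paths above, roughly like this (br0 stands in for the actual bridge name, which is not shown here):

ovs-vsctl add-port br0 vhost0 -- set Interface vhost0 type=dpdkvhostuserclient options:vhost-server-path=/var/run/openvswitch/vhost0
ovs-vsctl add-port br0 vhost1 -- set Interface vhost1 type=dpdkvhostuserclient options:vhost-server-path=/var/run/openvswitch/vhost1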

-----------


Guest libvirt interfaces:

    <interface type='vhostuser'>
      <mac address='0c:c4:7a:ea:4b:b2'/>
      <source type='unix' path='/var/run/openvswitch/vhost0' mode='server'/>
      <target dev='vhost0'/>
      <model type='virtio'/>
      <driver name='vhost'>
        <host mrg_rxbuf='off'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='0c:c4:7a:ea:4b:b3'/>
      <source type='unix' path='/var/run/openvswitch/vhost1' mode='server'/>
      <target dev='vhost1'/>
      <model type='virtio'/>
      <driver name='vhost'>
        <host mrg_rxbuf='off'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
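For vhost-user the guest memory needs to be backed by shared hugepages; that part of the domain XML is not pasted above, but it typically looks like the following sketch (sizes and node ids are illustrative, not the exact values from my guest):

    <memoryBacking>
      <hugepages>
        <page size='1048576' unit='KiB' nodeset='0'/>
      </hugepages>
    </memoryBacking>
    <cpu>
      <numa>
        <cell id='0' cpus='0-3' memory='4194304' unit='KiB' memAccess='shared'/>
      </numa>
    </cpu>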
--------------------------------------------------------------



Thanks,


Joan
