[ovs-discuss] Adding DPDK VFIO NIC to OVS DPDK bridge produces error attaching device

Wittling, Mark (CCI-Atlanta) Mark.Wittling at cox.com
Wed May 6 14:41:08 UTC 2020


Greetings,
I just put a post out on the DPDK forum, but as that post has not garnered much attention, and the issue may actually be an OVS issue, I am posting here in discussions as well to see if anyone can help. Any help would be appreciated, as I appear to be stuck at this particular point.

The issue is that I cannot get Open vSwitch to add my DPDK-bound VFIO port (an e1000 DPDK-compatible NIC) to the bridge without an error.

I am supplying all of the info I know the community would typically ask me for, before I show the error at the end.



Hardware: Model: Dell Precision T-1700



CPU:

# lscpu

Architecture:         x86_64

CPU op-mode(s):       32-bit, 64-bit

Byte Order:           Little Endian

CPU(s):               4

On-line CPU(s) list:  0-3

Thread(s) per core:   1

Core(s) per socket:   4

Socket(s):            1

NUMA node(s):         1

Vendor ID:            GenuineIntel

CPU family:           6

Model:                60

Model name:           Intel(R) Core(TM) i5-4690 CPU @ 3.50GHz

Stepping:             3

CPU MHz:              1183.471

CPU max MHz:          3900.0000

CPU min MHz:          800.0000

BogoMIPS:             6983.91

Virtualization:       VT-x

L1d cache:            32K

L1i cache:            32K

L2 cache:             256K

L3 cache:             6144K

NUMA node0 CPU(s):    0-3

Flags:                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb invpcid_single ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts md_clear spec_ctrl intel_stibp flush_l1d



# numactl -H

available: 1 nodes (0)

node 0 cpus: 0 1 2 3

node 0 size: 16019 MB

node 0 free: 7554 MB

node distances:

node  0

 0: 10



NOTE: I only have a single 4-core CPU, but it is NUMA-enabled, giving me one NUMA node with 4 cores.



Memory:

# lsmem --summary

Memory block size:      128M

Total online memory:     16G

Total offline memory:     0B



# cat /proc/cmdline

BOOT_IMAGE=/vmlinuz-3.10.0-1127.el7.x86_64 root=UUID=4102ab69-f71a-4dd0-a14e-8695aa230a0d ro rhgb quiet iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G hugepages=4 transparent_hugepage=never LANG=en_US.UTF-8



A look at the kernel command line, which has iommu and hugepage directives
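For reference, here is a sketch of how those parameters might be added on a CentOS 7 box using grubby (the exact bootloader tooling is an assumption; adjust for your setup, and note a reboot is required):

```shell
# Append IOMMU passthrough and 1G hugepage settings to the default
# kernel's boot arguments (requires root; takes effect after reboot).
grubby --update-kernel="$(grubby --default-kernel)" \
       --args="iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G hugepages=4 transparent_hugepage=never"

# After rebooting, confirm the arguments are active:
cat /proc/cmdline
```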



# cat /proc/meminfo | grep Huge

AnonHugePages:        0 kB

HugePages_Total:      4

HugePages_Free:       3

HugePages_Rsvd:       0

HugePages_Surp:       0

Hugepagesize:   1048576 kB



A look at how HugePages are allocated
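As an aside, hugepages can also be requested at runtime via sysfs instead of the kernel command line (a sketch; 1G pages often must be reserved at boot anyway, since memory fragmentation can make runtime allocation fail on a long-running system):

```shell
# Ask the kernel for four 1G hugepages at runtime (requires root).
echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

# Verify the allocation:
grep Huge /proc/meminfo
```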



# lsmod | grep vfio

vfio_pci              41412 0

vfio_iommu_type1      22440 0

vfio                  32657 3 vfio_iommu_type1,vfio_pci

irqbypass             13503 2 kvm,vfio_pci



Kernel Modules successfully loaded.
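For completeness, loading these is roughly (a sketch; vfio and vfio_iommu_type1 are pulled in automatically as dependencies of vfio-pci):

```shell
# Load the VFIO PCI driver now (requires root).
modprobe vfio-pci

# Make it persistent across reboots on a systemd distro like CentOS 7.
echo "vfio-pci" > /etc/modules-load.d/vfio-pci.conf
```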



# /usr/share/dpdk/usertools/dpdk-devbind.py --status



Network devices using DPDK-compatible driver

============================================

0000:01:00.0 '82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) 105e' drv=vfio-pci unused=e1000e

0000:01:00.1 '82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) 105e' drv=vfio-pci unused=e1000e



Our DPDK-compatible NICs (p2p1 and p2p2 when not bound to vfio-pci) have been properly bound to DPDK.



Network devices using kernel driver

===================================

0000:00:19.0 'Ethernet Connection I217-LM 153a' if=em1 drv=e1000e unused=vfio-pci

0000:03:00.0 '82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) 105e' if=p1p1 drv=e1000e unused=vfio-pci

0000:03:00.1 '82571EB/82571GB Gigabit Ethernet Controller D0/D1 (copper applications) 105e' if=p1p2 drv=e1000e unused=vfio-pci



These are the devices not bound to DPDK, em1, p1p1 and p1p2.
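The binding itself was done roughly as follows (a sketch, using the PCI addresses shown above; interfaces must be down before rebinding):

```shell
# Take the kernel interfaces down, then rebind both ports from the
# e1000e kernel driver to vfio-pci (requires root).
ip link set p2p1 down
ip link set p2p2 down
/usr/share/dpdk/usertools/dpdk-devbind.py --bind=vfio-pci 0000:01:00.0 0000:01:00.1

# Confirm both ports now appear under the DPDK-compatible section.
/usr/share/dpdk/usertools/dpdk-devbind.py --status
```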



# ovs-vsctl list Open_vSwitch

_uuid              : ace95756-927f-4ceb-be27-76d0d5374461

bridges            : [2382d0c5-8955-4a27-b001-5fc606aabab8, 518c4675-a34a-43de-aedf-0eb9b44a7195, 77d5b674-2e0f-4311-9529-c7d1e2f1c344, 8ee6f680-63b1-42f0-b0ed-48807cbe49af, d0d3adca-e359-4b12-a775-b75647b98e47]

cur_cfg            : 369

datapath_types     : [netdev, system]

db_version         : "7.16.1"

dpdk_initialized   : true

dpdk_version       : "DPDK 18.11.0"

external_ids       : {hostname=maschinen, ovn-encap-ip="192.168.20.201", ovn-encap-type="geneve,vxlan", ovn-remote="tcp:192.168.20.200:6642", rundir="/var/run/openvswitch", system-id="35b95ef5-fd71-491f-8623-5ccbbc1eca6b"}

iface_types        : [dpdk, dpdkr, dpdkvhostuser, dpdkvhostuserclient, erspan, geneve, gre, internal, "ip6erspan", "ip6gre", lisp, patch, stt, system, tap, vxlan]

manager_options    : [8e30d191-ed8c-4f26-bc13-9f1087c0db25]

next_cfg           : 369

other_config       : {dpdk-init="true", dpdk-socket-limit="1024", dpdk-socket-mem="1024", pmd-cpu-mask="0x8"}

ovs_version        : "2.11.0"

ssl                : []

statistics         : {}

system_type        : centos

system_version     : "7"





Open vSwitch initialized with DPDK (dpdk-init=true) and our socket memory parameters (1 hugepage), and I even set pmd-cpu-mask to 0x8 to ensure that I am using CPU core 3 of the array [0,1,2,3].
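That configuration was applied with ovs-vsctl, roughly as follows (a sketch matching the other_config values above; the service restart step assumes the stock CentOS openvswitch unit name):

```shell
# Enable DPDK in OVS and set per-socket memory and the PMD CPU mask.
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=1024
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-limit=1024
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x8

# Restart ovs-vswitchd so the EAL is initialized with these settings.
systemctl restart openvswitch
```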



# ovs-vsctl add-br br-testdpdk -- set bridge br-testdpdk datapath_type=netdev



We add a bridge in OpenVSwitch. No issue with this.



# ovs-vsctl add-port br-testdpdk p2p1 -- set Interface p2p1 type=dpdk options:dpdk-devargs=0000:01:00.0

ovs-vsctl: Error detected while setting up 'p2p1': Error attaching device '0000:01:00.0' to DPDK. See ovs-vswitchd log for details.

ovs-vsctl: The default log directory is "/var/log/openvswitch".



Adding the PCI device 0000:01:00.0 (p2p1) to the bridge fails.

I also tried adding 0000:01:00.1 (p2p2), which fails in an identical way.



# cat ovs-vswitchd.log

2020-05-04T21:12:11.071Z|00291|dpdk|ERR|EAL: Driver cannot attach the device (0000:01:00.0)

2020-05-04T21:12:11.071Z|00292|dpdk|ERR|EAL: Failed to attach device on primary process

2020-05-04T21:12:11.071Z|00293|netdev_dpdk|WARN|Error attaching device '0000:01:00.0' to DPDK

2020-05-04T21:12:11.071Z|00294|netdev|WARN|p2p1: could not set configuration (Invalid argument)

2020-05-04T21:12:11.071Z|00295|dpdk|ERR|Invalid port_id=32



Searching the web for an answer to this has not turned up any results. Everything up to this point looked just fine.

NOTE: I did a full yum update on the box, uninstalled openvswitch, and re-installed openvswitch.

NOTE: I did *not* do a custom build of DPDK or OpenVSwitch. I used yum to install dpdk, dpdk-tools, and openvswitch.
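One way I could try to narrow this down (a diagnostic sketch, not something I have run yet) is to check whether the device attaches to DPDK at all outside of OVS, and whether the packaged binaries even contain the e1000 PMD; binary names and paths below are assumptions for the CentOS dpdk-tools/openvswitch packages:

```shell
# 1) Try the port with testpmd from dpdk-tools; -w whitelists the PCI
#    device (DPDK 18.11 syntax). If this also fails, the problem is in
#    the packaged DPDK/PMD rather than in OVS itself.
testpmd -w 0000:01:00.0 --socket-mem 1024 -- --total-num-mbufs=2048

# 2) Look for the e1000 PMD name inside the packaged ovs-vswitchd, to
#    see whether the driver was compiled in at all.
strings /usr/sbin/ovs-vswitchd | grep -i net_e1000
```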


Regards,
Mark Wittling
NFV Cloud Operations
Cox Communications Inc
CTECH A08-150D
6305-A Peachtree Dunwoody Road, Atlanta GA 30328
1-770-849-9696


