[ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

Flavio Leitner fbl at sysclose.org
Tue Nov 5 13:07:58 UTC 2019


On Mon, 4 Nov 2019 19:12:36 +0000
"Tobias Hofmann (tohofman)" <tohofman at cisco.com> wrote:

> Hi Flavio,
> 
> thanks for reaching out.
> 
> The DPDK options used in OvS are:
> 
> other_config:pmd-cpu-mask=0x202
> other_config:dpdk-socket-mem=1024
> other_config:dpdk-init=true
> 
> 
> For the dpdk port, we set:
> 
> type=dpdk
> options:dpdk-devargs=0000:08:0b.2
> external_ids:unused-drv=i40evf 
> mtu_request=9216

Looks good to me, though the CPU mask differs from what the log shows:
2019-11-02T14:51:26.940Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd
--socket-mem 1024 -c 0x00000001
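
(If memory serves, the EAL coremask (-c) is taken from
other_config:dpdk-lcore-mask and defaults to core 0, while
pmd-cpu-mask only selects the cores for the PMD threads, so the two
masks can legitimately differ. A hypothetical example, if you want
the EAL threads on a different core:

    ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x4

Pick a core that suits your layout.)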

What I see from the logs is that OvS tries to add the port, but the
port is not ready yet, so OvS continues with other tasks which also
consume memory. Unfortunately, by the time the i40 port is ready,
there is no memory left.
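
If you want to confirm that, you can watch the hugepage consumption
while OvS initializes, e.g.:

    watch -n1 grep Huge /proc/meminfo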

When you restart, the i40 port is already ready and its memory can be
allocated. However, the ring allocation then fails due to lack of
memory:

2019-11-02T14:51:27.808Z|00136|dpdk|ERR|RING: Cannot reserve memory
2019-11-02T14:51:27.974Z|00137|dpdk|ERR|RING: Cannot reserve memory

If you reduce the MTU, the minimum amount of memory required for the
DPDK port drops drastically, which explains why that works.
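
A back-of-envelope sketch (approximate; the exact constants come
from netdev-dpdk and DPDK and may differ between releases):

    mbuf data room ~= round_up(MTU + L2 overhead, 1024) + headroom
    MTU 9216: round_up(9216 + 26, 1024) + 128 = 10240 + 128 ~= 10 KB
    MTU 1500: round_up(1500 + 26, 1024) + 128 =  2048 + 128 ~=  2 KB

So the same number of mbufs needs roughly 5x the hugepage memory at
MTU 9216. Lowering the MTU would look like:

    ovs-vsctl set Interface dpdk-p0 mtu_request=1500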

Increasing the total memory to 2G also helps because then there is
enough for both the 9216-MTU mempool and the ring.
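
For example (assuming a single NUMA node; the value is in MB per
socket, and the reserved hugepages have to cover it, i.e. 1024 x 2MB
pages):

    ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=2048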

The ring seems to be related to pdump, is that the case?
I don't know off the top of my head.

In summary, it looks like 1G is not enough for a large MTU plus pdump.
HTH,
fbl

> 
> 
> Please let me know if this is what you asked for.
> 
> Thanks
> Tobias
> 	
> On 04.11.19, 15:50, "Flavio Leitner" <fbl at sysclose.org> wrote:
> 
>     
>     It would be nice if you share the DPDK options used in OvS.
>     
>     On Sat, 2 Nov 2019 15:43:18 +0000
>     "Tobias Hofmann \(tohofman\) via discuss"
> <ovs-discuss at openvswitch.org> wrote:
>     
>     > Hello community,
>     > 
>     > My team and I are observing strange behavior on our system with
>     > the creation of dpdk ports in OVS. We have a CentOS 7 system with
>     > Open vSwitch and a single port of type ‘dpdk’ attached to a
>     > bridge. The MTU size of the DPDK port is 9216 and the HugePages
>     > reserved for OVS are 512 x 2MB HugePages, i.e. 1GB of total
>     > HugePage memory.
>     > 
>     > Setting everything up works fine; however, after I reboot the
>     > box, the dpdk port is in an error state and I can observe these
>     > lines in the logs (full logs attached to the mail):
>     > 2019-11-02T14:46:16.914Z|00437|netdev_dpdk|ERR|Failed to create
>     > memory pool for netdev dpdk-p0, with MTU 9216 on socket 0:
>     > Invalid argument
>     > 2019-11-02T14:46:16.914Z|00438|dpif_netdev|ERR|Failed to set
>     > interface dpdk-p0 new configuration
>     > 
>     > I figured out that restarting the openvswitch process resolves
>     > the issue with the port and brings it back to a working state.
>     > However, as soon as I reboot the system a second time, the port
>     > comes up in an error state again. We have also observed a couple
>     > of other workarounds, though I can’t really explain why they
>     > help:
>     > 
>     >   *   When there is also a VM deployed on the system that is
>     > using ports of type ‘dpdkvhostuserclient’, we never see any
>     > issues like that. (MTU size of the VM ports is 9216 by the way)
>     >   *   When we increase the HugePage memory for OVS to 2GB, we
>     > also don’t see any issues.
>     >   *   Lowering the MTU size of the ‘dpdk’ type port to 1500 also
>     > helps to prevent this issue.
>     > 
>     > Can anyone explain this?
>     > 
>     > We’re using the following versions:
>     > Openvswitch: 2.9.3
>     > DPDK: 17.11.5
>     > 
>     > Appreciate any help!
>     > Tobias  
>     
>     
> 


