[ovs-discuss] OVS DPDK: Failed to create memory pool for netdev

Tobias Hofmann (tohofman) tohofman at cisco.com
Tue Nov 5 18:47:09 UTC 2019


Hi Flavio,

thanks for the insights! Unfortunately, I don't know about pdump or its relation to the ring.

Can you please specify where I can see that the port is not ready yet? Is it these three lines:

2019-11-02T14:14:23.094Z|00070|dpdk|ERR|EAL: Cannot find unplugged device (0000:08:0b.2)
2019-11-02T14:14:23.094Z|00071|netdev_dpdk|WARN|Error attaching device '0000:08:0b.2' to DPDK
2019-11-02T14:14:23.094Z|00072|netdev|WARN|dpdk-p0: could not set configuration (Invalid argument)

As far as I know, the ring allocation failure you mentioned isn't necessarily a bad thing: it just indicates that DPDK reduces something internally (I can't remember what exactly it was) to support a high MTU with only 1GB of memory.

I'm now wondering whether it might help to change the timing of when openvswitch is started after a system reboot, since the problem only occurs after a reboot. Do you think that approach might fix it?
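On a systemd-based CentOS 7 box, one way to experiment with that timing is a drop-in that orders the OVS unit after network device setup. This is only a sketch: the unit name (openvswitch.service) and the use of network-online.target are assumptions, so check the actual unit name on the system first.

```shell
# Sketch: delay OVS startup until the network reports online.
# The unit name below is an assumption; verify it first with
# something like: systemctl list-units | grep -i openvswitch
mkdir -p /etc/systemd/system/openvswitch.service.d
cat > /etc/systemd/system/openvswitch.service.d/10-wait-online.conf <<'EOF'
[Unit]
After=network-online.target
Wants=network-online.target
EOF
systemctl daemon-reload
```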

Thanks for your help
Tobias

On 05.11.19, 14:08, "Flavio Leitner" <fbl at sysclose.org> wrote:

    On Mon, 4 Nov 2019 19:12:36 +0000
    "Tobias Hofmann (tohofman)" <tohofman at cisco.com> wrote:
    
    > Hi Flavio,
    > 
    > thanks for reaching out.
    > 
    > The DPDK options used in OvS are:
    > 
    > other_config:pmd-cpu-mask=0x202
    > other_config:dpdk-socket-mem=1024
    > other_config:dpdk-init=true
    > 
    > 
    > For the dpdk port, we set:
    > 
    > type=dpdk
    > options:dpdk-devargs=0000:08:0b.2
    > external_ids:unused-drv=i40evf 
    > mtu_request=9216
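For reference, a configuration like the one quoted above is typically applied with ovs-vsctl along these lines; the bridge name (br0) and port name (dpdk-p0) are assumptions for the sketch:

```shell
# Global DPDK settings (matching the other_config values quoted above)
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=1024
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x202

# The dpdk port itself (bridge name br0 is assumed)
ovs-vsctl add-port br0 dpdk-p0 -- \
  set Interface dpdk-p0 type=dpdk \
  options:dpdk-devargs=0000:08:0b.2 mtu_request=9216
```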
    
    Looks good to me, though the CPU mask has changed compared to the log:
    2019-11-02T14:51:26.940Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd
    --socket-mem 1024 -c 0x00000001
    
    What I see from the logs is that OvS is trying to add a port, but the
    port is not ready yet, so it continues with other things which
    also consume memory. Unfortunately, by the time the i40 port is
    ready, there is no memory left.
    
    When you restart, the i40 is ready and the memory can be allocated.
    However, the ring allocation fails due to lack of memory:
    
    2019-11-02T14:51:27.808Z|00136|dpdk|ERR|RING: Cannot reserve memory
    2019-11-02T14:51:27.974Z|00137|dpdk|ERR|RING: Cannot reserve memory
    
    If you reduce the MTU, then the minimum amount of memory required for
    the DPDK port reduces drastically, which explains why it works.
    
    Also increasing the total memory to 2G helps because then the minimum
    amount for 9216 MTU and the ring seems to be sufficient.
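A rough back-of-envelope shows why the MTU dominates the footprint. The constants below (per-mbuf overhead, mbufs per pool) are assumptions for illustration only, not the exact numbers OVS 2.9 / DPDK 17.11 use internally:

```shell
# Illustrative mempool-size estimate per port vs MTU.
# overhead and nmbufs are assumed values, not OVS internals.
estimate() {
  mtu=$1
  overhead=1024          # per-mbuf headroom + metadata (assumed)
  nmbufs=$((32 * 1024))  # mbufs allocated per port pool (assumed)
  # round each buffer up to a 1 KiB boundary, then total the pool
  buf=$(( (mtu + overhead + 1023) / 1024 * 1024 ))
  echo "$mtu MTU -> $(( buf * nmbufs / 1024 / 1024 )) MiB"
}
estimate 1500   # -> 96 MiB
estimate 9216   # -> 320 MiB
```

Even with these made-up constants, the jumbo-MTU pool is several times larger, so a second large allocation against a 1GB budget can plausibly fail.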
    
    The ring seems to be related to pdump; is that the case?
    I don't know off the top of my head.
    
    In summary, it looks like 1G is not enough for a large MTU plus pdump.
    HTH,
    fbl
    
    > 
    > 
    > Please let me know if this is what you asked for.
    > 
    > Thanks
    > Tobias
    > 	
    > On 04.11.19, 15:50, "Flavio Leitner" <fbl at sysclose.org> wrote:
    > 
    >     
    >     It would be nice if you share the DPDK options used in OvS.
    >     
    >     On Sat, 2 Nov 2019 15:43:18 +0000
    >     "Tobias Hofmann (tohofman) via discuss"
    > <ovs-discuss at openvswitch.org> wrote:
    >     
    >     > Hello community,
    >     > 
    >     > My team and I observe a strange behavior on our system with the
    >     > creation of dpdk ports in OVS. We have a CentOS 7 system with
    >     > OpenvSwitch and only one single port of type ‘dpdk’ attached to
    >     > a bridge. The MTU size of the DPDK port is 9216 and the reserved
    >     > HugePages for OVS are 512 x 2MB-HugePages, i.e. 1GB of total
    >     > HugePage memory.
    >     > 
    >     > Setting everything up works fine, however after I reboot my
    >     > box, the dpdk port is in error state and I can observe this
    >     > line in the logs (full logs attached to the mail):
    >     > 2019-11-02T14:46:16.914Z|00437|netdev_dpdk|ERR|Failed to create
    >     > memory pool for netdev dpdk-p0, with MTU 9216 on socket 0:
    >     > Invalid argument
    >     > 2019-11-02T14:46:16.914Z|00438|dpif_netdev|ERR|Failed to set
    >     > interface dpdk-p0 new configuration
    >     > 
    >     > I figured out that by restarting the openvswitch process, the
    >     > issue with the port is resolved and it is back in a working
    >     > state. However, as soon as I reboot the system a second time,
    >     > the port comes up in error state again. Now, we have also
    >     > observed a couple of other workarounds, though I can’t really
    >     > explain why they help:
    >     > 
    >     >   *   When there is also a VM deployed on the system that is
    >     > using ports of type ‘dpdkvhostuserclient’, we never see any
    >     > issues like that. (MTU size of the VM ports is 9216 by the way)
    >     >   *   When we increase the HugePage memory for OVS to 2GB, we
    >     > also don’t see any issues.
    >     >   *   Lowering the MTU size of the ‘dpdk’ type port to 1500 also
    >     > helps to prevent this issue.
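Regarding the second workaround, the hugepage reservation can be checked and raised at runtime along these lines; this is only a sketch for 2MB pages, and the runtime write is not reboot-persistent (the kernel command line or vm.nr_hugepages would be):

```shell
# Check the current 2MB hugepage reservation and usage
grep -i huge /proc/meminfo

# Sketch: raise the pool to 1024 x 2MB pages (2GB) at runtime.
# Not persistent across reboots; may fail if memory is fragmented.
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```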
    >     > 
    >     > Can anyone explain this?
    >     > 
    >     > We’re using the following versions:
    >     > Openvswitch: 2.9.3
    >     > DPDK: 17.11.5
    >     > 
    >     > Appreciate any help!
    >     > Tobias  
    >     
    >     
    > 
    
    


