[ovs-discuss] OVS-DPDK fails after clearing buffer

Tobias Hofmann -T (tohofman - AAP3 INC at Cisco) tohofman at cisco.com
Wed Mar 20 22:37:43 UTC 2019


Hello,

I want to use Open vSwitch with DPDK enabled. To do this, I first allocate 512 HugePages of 2MB each so that 1GB of HugePage memory is available for OVS-DPDK. (I don’t set dpdk-socket-mem, so the default of 1GB is used.) Then I set dpdk-init=true. This normally works fine.
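
For reference, the working sequence is roughly the following (paraphrased from memory, so the exact invocations may differ slightly):

    echo 512 > /proc/sys/vm/nr_hugepages                        # 512 x 2MB = 1GB of HugePages
    ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true    # OVS-DPDK comes up fine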

However, I have realized that I can’t allocate HugePages from memory that is inside the buff/cache (visible through free -h). To solve this issue, I decided to clear the cache/buffer in Linux before allocating HugePages by running echo 1 > /proc/sys/vm/drop_caches.
After that, allocating the HugePages still works fine. However, when I then run ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true, ovs-vswitchd crashes and I see the following in ovs-vswitchd.log:
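
In other words, the failing sequence is just the same commands with the cache drop in between:

    echo 1 > /proc/sys/vm/drop_caches                           # clear buff/cache
    echo 512 > /proc/sys/vm/nr_hugepages                        # HugePage allocation still succeeds
    ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true    # ovs-vswitchd aborts with the log below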

ovs-vswitchd log output:
2019-03-18T13:32:41.112Z|00015|dpdk|ERR|EAL: Can only reserve 270 pages from 512 requested
Current CONFIG_RTE_MAX_MEMSEG=256 is not enough
Please either increase it or request less amount of memory.
2019-03-18T13:32:41.112Z|00016|dpdk|ERR|EAL: Cannot init memory
2019-03-18T13:32:41.128Z|00002|daemon_unix|ERR|fork child died before signaling startup (killed (Aborted))
2019-03-18T13:32:41.128Z|00003|daemon_unix|EMER|could not detach from foreground session
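
Based on the error message, I assume one option would be to rebuild DPDK with a larger build-time segment limit, e.g. in config/common_base of the DPDK 17.11 tree:

    # assumption on my side: raise the memseg limit and rebuild DPDK and OVS
    CONFIG_RTE_MAX_MEMSEG=512

But I would rather understand why the pages end up split across so many segments in the first place than just raise the limit.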

Tech Details:

  *   Open vSwitch version: 2.9.2
  *   DPDK version: 17.11
  *   System has only a single NUMA node.

This problem is consistently reproducible whenever a relatively large amount of memory (usually around 5GB) is sitting in buff/cache and I then clear it with the command above.
I found some posts online saying that this is caused by memory fragmentation, but when my memory is already fragmented I normally can’t even allocate the HugePages in the first place. In this scenario, however, allocating the HugePages works fine after clearing the caches, so why would they be fragmented?
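
In case it helps with diagnosing fragmentation, these are the counters I would compare before and after dropping the caches (assuming physical contiguity of the HugePages is what matters here):

    grep Huge /proc/meminfo    # HugePages_Total / HugePages_Free after allocation
    cat /proc/buddyinfo        # free physical pages per order, i.e. how fragmented memory is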

A workaround that I know of is a reboot.

I’d be very grateful for any opinions on this.

Thank you
Tobias