[ovs-discuss] OVS 2.5 crashes when setting n-dpdk-rxqs to 64 w/ two dpdk ports.

Daniele Di Proietto diproiettod at ovn.org
Wed Sep 28 21:58:26 UTC 2016


2016-09-28 2:57 GMT-07:00 Wojciechowicz, RobertX <robertx.wojciechowicz at intel.com>:

> > On Fri, Sep 16, 2016 at 05:11:11PM +0000, John Phillips wrote:
> > When I try to set other_config:n-dpdk-rxqs to 64 with two Intel Niantic
> > dpdk ports on a single dpdk bridge, the bridge instance will 'crash' - I
> > can't access its flows through ovs-ofctl commands. I am running an OVS
> > from the 2.5 branch, specifically commit ID
> > b3e263929a7a00c96a1329f93f1b8fce58b726e4, DPDK 16.04.
> >
>
> I investigated this issue using these software versions:
> OVS: master (commit: 05bb914854831f58c61343570d8b10d6059646b8)
> DPDK: v16.07
>
> It seems that this problem is related to the static memory pool allocation
> with a hardcoded size.
>
> In OVS, in netdev-dpdk.c, the dpdk_mp_get() function has the following
> comment:
> """
> /* XXX: this is a really rough method of provisioning memory.
>  * It's impossible to determine what the exact memory requirements are when
>  * the number of ports and rxqs that utilize a particular mempool can change
>  * dynamically at runtime. For the moment, use this rough heuristic.
>  */
> """
> and it seems that for this case there is just not enough memory allocated.
>

I agree with the analysis.  Those MAX_NB_MBUF and MIN_NB_MBUF magic numbers
do not make a lot of sense.
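
For context, the sizing being discussed boils down to roughly the following
(a simplified sketch, not the exact netdev-dpdk.c code; the pool parameters
and the MIN_NB_MBUF value are approximate):

#include <stdio.h>
#include <rte_ether.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define MAX_NB_MBUF (4096 * 64)   /* hardcoded pool size under discussion */
#define MIN_NB_MBUF (4096 * 4)    /* smaller pool for sub-standard MTUs;
                                   * value approximate */

/* Sketch: one shared mempool per (socket, mtu), sized by a hardcoded
 * constant instead of by the ports and queues that will actually use it. */
static struct rte_mempool *
sketch_mp_get(int socket_id, int mtu)
{
    char name[RTE_MEMPOOL_NAMESIZE];
    unsigned n_mbufs = mtu >= ETHER_MTU ? MAX_NB_MBUF : MIN_NB_MBUF;

    snprintf(name, sizeof name, "ovs_mp_%d_%d", mtu, socket_id);
    /* Cache size and per-mbuf priv size elided for brevity. */
    return rte_pktmbuf_pool_create(name, n_mbufs, 0, 0,
                                   RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
}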


>
> By default OVS allocates a memory pool sized for 128 queues (default queue
> size: 2048) as follows:
> #define MAX_NB_MBUF (4096 * 64)
>
> If the memory pool is exhausted, the following error appears:
>
> """
> PMD: ixgbe_alloc_rx_queue_mbufs(): RX mbuf alloc failed queue_id=62
> PMD: ixgbe_dev_rx_queue_start(): Could not alloc mbuf for queue:62
> PMD: ixgbe_dev_start(): Unable to start rxtx queues
> PMD: ixgbe_dev_start(): failure in ixgbe_dev_start(): -1
> """
>
> So my proposal to solve this problem is to make the value MAX_NB_MBUF
> (the maximum memory pool size) configurable in the OVS DB.
> In my tests, increasing this value to (4096 * 66) allowed for the creation
> of two ports with 64 rx queues each.
>
> Please let me know if such a configuration option would be useful for you,
> so that it makes sense to start working on an OVS patch.
>
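
For what it's worth, the numbers line up with that analysis (my arithmetic,
assuming the default 2048-descriptor rx rings): 2 ports * 64 rxqs * 2048
descriptors = 262144 mbufs are needed just to fill the rings, which is exactly
MAX_NB_MBUF = 4096 * 64, so nothing is left over for in-flight packets and the
last queues fail to fill.  4096 * 66 = 270336 leaves 8192 spare mbufs, which
is presumably why that value works in your test.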

I don't like exposing this to the user.

I think there's a better way of handling this now that it's possible to
delete mempools in DPDK.

Each port can have its own mempool.  We can make a "better" estimate of how
many mbufs are required for each port:

n_rxq * NIC_PORT_RX_Q_SIZE                      /* packets required to fill
                                                   the device rxqs */
+ n_txq * NIC_PORT_RX_Q_SIZE                    /* an estimate of the packets
                                                   that could be stuck on
                                                   other ports' txqs */
+ MIN(RTE_MAX_LCORE, n_rxq) * NETDEV_MAX_BURST  /* in-flight packets in the
                                                   pmd threads */
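
In code that would look roughly like this (a sketch only, not a patch; the
constants are defined locally with their usual default values so the snippet
stands alone):

#include <stdint.h>

#define NIC_PORT_RX_Q_SIZE 2048  /* default NIC rx ring size in netdev-dpdk */
#define NETDEV_MAX_BURST   32    /* max packets a pmd thread handles per burst */
#define RTE_MAX_LCORE      128   /* DPDK's default build-time lcore limit */
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Rough per-port mbuf requirement, following the estimate above. */
static uint32_t
estimate_port_mbufs(uint32_t n_rxq, uint32_t n_txq)
{
    return n_rxq * NIC_PORT_RX_Q_SIZE                /* fill the device rxqs */
         + n_txq * NIC_PORT_RX_Q_SIZE                /* packets that could be
                                                      * stuck on other ports'
                                                      * txqs */
         + MIN(RTE_MAX_LCORE, n_rxq) * NETDEV_MAX_BURST; /* in flight in the
                                                          * pmd threads */
}

With 64 rxqs and a comparable number of txqs that comes to roughly 264k mbufs
for such a port, i.e. about what the single shared MAX_NB_MBUF pool holds
today for all ports together.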

The ideal solution would be for the mempool to be resizable, but I don't
know how hard it is to change that in DPDK.

Thoughts?

Daniele


>
> Br,
> Robert
>

