[ovs-dev] [PATCH v4 2/6] netdev-dpdk: Fix mempool names to reflect socket id.

Loftus, Ciara ciara.loftus at intel.com
Mon Oct 9 10:33:29 UTC 2017


> 
> Create mempool names by also considering the NUMA socket number, so
> that a name reflects the socket the mempool is allocated on.
> This change is needed for the NUMA-awareness feature.
> 
> CC: Kevin Traynor <ktraynor at redhat.com>
> CC: Aaron Conole <aconole at redhat.com>
> Reported-by: Ciara Loftus <ciara.loftus at intel.com>
> Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each port.")
> Signed-off-by: Antonio Fischetti <antonio.fischetti at intel.com>
> ---
> Mempool names now contain the requested socket id and look like:
> "ovs_4adb057e_1_2030_20512".
> 
> Tested with DPDK 17.05.2 (from dpdk-stable branch).
> NUMA-awareness feature enabled (DPDK/config/common_base).
> 
> Created a single dpdkvhostuser port.
> OvS pmd-cpu-mask=FF00003     # enable cores on both numa nodes
> QEMU core mask = 0xFC000     # cores for qemu on numa node 1 only
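
(For reference, the PMD mask above would typically be applied with
"ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=FF00003".)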
> 
>  Before launching the VM:
>  ------------------------
> ovs-appctl dpif-netdev/pmd-rxq-show
> shows core #1 is serving the vhu port.
> 
> pmd thread numa_id 0 core_id 1:
>         isolated : false
>         port: dpdkvhostuser0    queue-id: 0
> 
>  After launching the VM:
>  -----------------------
> the vhu port is now managed by core #27
> pmd thread numa_id 1 core_id 27:
>         isolated : false
>         port: dpdkvhostuser0    queue-id: 0
> 
> and the log shows a new mempool is allocated on NUMA node 1, while
> the previous one is deleted:
> 
> 2017-10-06T14:04:55Z|00105|netdev_dpdk|DBG|Allocated "ovs_4adb057e_1_2030_20512" mempool with 20512 mbufs
> 2017-10-06T14:04:55Z|00106|netdev_dpdk|DBG|Releasing "ovs_4adb057e_0_2030_20512" mempool
> ---
>  lib/netdev-dpdk.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> index 80a6ff3..0cf47cb 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -499,8 +499,8 @@ dpdk_mp_name(struct dpdk_mp *dmp)
>  {
>      uint32_t h = hash_string(dmp->if_name, 0);
>      char *mp_name = xcalloc(RTE_MEMPOOL_NAMESIZE, sizeof *mp_name);
> -    int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%u",
> -                       h, dmp->mtu, dmp->mp_size);
> +    int ret = snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "ovs_%x_%d_%d_%u",
> +                       h, dmp->socket_id, dmp->mtu, dmp->mp_size);
>      if (ret < 0 || ret >= RTE_MEMPOOL_NAMESIZE) {
>          return NULL;
>      }
> @@ -534,9 +534,10 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu, bool *mp_exists)
>          char *mp_name = dpdk_mp_name(dmp);
> 
>          VLOG_DBG("Requesting a mempool of %u mbufs for netdev %s "
> -                 "with %d Rx and %d Tx queues.",
> +                 "with %d Rx and %d Tx queues, socket id:%d.",
>                   dmp->mp_size, dev->up.name,
> -                 dev->requested_n_rxq, dev->requested_n_txq);
> +                 dev->requested_n_rxq, dev->requested_n_txq,
> +                 dev->requested_socket_id);
> 
>          dmp->mp = rte_pktmbuf_pool_create(mp_name, dmp->mp_size,
>                                            MP_CACHE_SZ,
> --
> 2.4.11
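
For anyone following the naming change: below is a minimal, standalone
sketch of the scheme (illustrative only, not OVS code; the mp_name()
helper, NAMESIZE and the example values are made up, only the
"ovs_%x_%d_%d_%u" format comes from the hunk above). It shows why the
socket id has to be part of the name.

#include <stdio.h>

#define NAMESIZE 32  /* stands in for RTE_MEMPOOL_NAMESIZE */

/* Build a mempool name the same way the patched dpdk_mp_name() does:
 * hash of the interface name, socket id, MTU, mbuf count. */
static int
mp_name(char *buf, unsigned hash, int socket_id, int mtu, unsigned n_mbufs)
{
    int ret = snprintf(buf, NAMESIZE, "ovs_%x_%d_%d_%u",
                       hash, socket_id, mtu, n_mbufs);
    return (ret < 0 || ret >= NAMESIZE) ? -1 : 0;
}

int
main(void)
{
    char on_numa0[NAMESIZE], on_numa1[NAMESIZE];

    /* Same port hash, MTU and mbuf count, but different sockets.  Without
     * the socket id in the format string both names would be identical,
     * so a pool reallocated on the other NUMA node could not be told
     * apart from (or created alongside) the old one. */
    mp_name(on_numa0, 0x4adb057e, 0, 2030, 20512);
    mp_name(on_numa1, 0x4adb057e, 1, 2030, 20512);

    printf("%s\n%s\n", on_numa0, on_numa1);
    /* Prints:
     *   ovs_4adb057e_0_2030_20512
     *   ovs_4adb057e_1_2030_20512 */
    return 0;
}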

Thanks for this fix, Antonio. I've verified that the vHost User NUMA Awareness feature works again with this change.

Tested-by: Ciara Loftus <ciara.loftus at intel.com>

