[ovs-dev] [PATCH V2 1/1] dpdk: Add debug appctl to get malloc statistics.

Eelco Chaudron echaudro at redhat.com
Tue May 11 13:25:43 UTC 2021



On 11 May 2021, at 15:22, Eli Britstein wrote:

> On 5/11/2021 4:13 PM, Eelco Chaudron wrote:
>>
>> On 11 May 2021, at 14:53, Eli Britstein wrote:
>>
>>> On 5/11/2021 2:29 PM, Eelco Chaudron wrote:
>>>>
>>>> On 2 May 2021, at 9:13, Eli Britstein wrote:
>>>>
>>>>> New appctl 'dpdk/get-malloc-stats' implemented to get result of
>>>>> 'rte_malloc_dump_stats()' function.
>>>>>
>>>>> Could be used for debugging.
>>>> This patch looks good; however, my suggestion on the first patchset was to include the rte_malloc_get_socket_stats() output for all sockets in the system in this command. Or was there a specific reason to abandon that?
>>> Looking into it again, following your comments, I saw the exact same information from the two commands, so I abandoned the less detailed one.
>> The rte_malloc_get_socket_stats() output shows the information per NUMA node, which might be of interest on a multi-node system. I copied in Kevin/David; do you have any experience needing this information per node? I guess it would not hurt to include it.
>
> I do run on a multi-node system.
>
> Here is an example run with v1; these are the outputs:
>
> Socket = 0
> heap_totalsz_bytes = 1073741824
> heap_freesz_bytes = 1071313728
> greatest_free_size = 1071313728
> free_count = 1
> alloc_count = 227
> heap_allocsz_bytes = 2428096
> Socket = 1
> heap_totalsz_bytes = 1073741824
> heap_freesz_bytes = 277099968
> greatest_free_size = 276979584
> free_count = 34
> alloc_count = 90
> heap_allocsz_bytes = 796641856
>
> Heap id:0
>     Heap name:socket_0
>     Heap_size:1073741824,
>     Free_size:1071313728,
>     Alloc_size:2428096,
>     Greatest_free_size:1071313728,
>     Alloc_count:227,
>     Free_count:1,
> Heap id:1
>     Heap name:socket_1
>     Heap_size:1073741824,
>     Free_size:277099968,
>     Alloc_size:796641856,
>     Greatest_free_size:276979584,
>     Alloc_count:90,
>     Free_count:34,
> Heap id:2
>     Heap name:
>     Heap_size:0,
> ... (all sizes are 0), until "Heap id:31".
>
>
> So the latter (i.e. rte_malloc_dump_stats()) gives the same info as rte_malloc_get_socket_stats(), and more.
>
> This is why I abandoned the socket stats: we get them anyway, with less code on the OVS side.


Nice! Thanks for sharing the output, as I only had a single-node system! This clears it up.
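
For the archives, below is a rough, untested sketch of the per-socket variant I had been picturing, built on rte_malloc_get_socket_stats(). The handler name and the exact output formatting are only illustrative and are not part of this patch:

/* Illustrative only: a handler like this would sit in lib/dpdk.c next to
 * dpdk_unixctl_get_malloc_stats() and would additionally need
 * "openvswitch/dynamic-string.h"; <rte_malloc.h> is already included by
 * the patch below. */
static void
dpdk_unixctl_get_malloc_socket_stats(struct unixctl_conn *conn,
                                     int argc OVS_UNUSED,
                                     const char *argv[] OVS_UNUSED,
                                     void *aux OVS_UNUSED)
{
    struct rte_malloc_socket_stats stats;
    struct ds reply = DS_EMPTY_INITIALIZER;
    int socket;

    /* Walk all possible NUMA sockets; sockets that do not exist make
     * rte_malloc_get_socket_stats() return an error and are skipped. */
    for (socket = 0; socket < RTE_MAX_NUMA_NODES; socket++) {
        if (rte_malloc_get_socket_stats(socket, &stats) < 0) {
            continue;
        }
        ds_put_format(&reply, "Socket = %d\n", socket);
        ds_put_format(&reply, "heap_totalsz_bytes = %"PRIuSIZE"\n",
                      stats.heap_totalsz_bytes);
        ds_put_format(&reply, "heap_freesz_bytes = %"PRIuSIZE"\n",
                      stats.heap_freesz_bytes);
        ds_put_format(&reply, "greatest_free_size = %"PRIuSIZE"\n",
                      stats.greatest_free_size);
        ds_put_format(&reply, "free_count = %u\n", stats.free_count);
        ds_put_format(&reply, "alloc_count = %u\n", stats.alloc_count);
        ds_put_format(&reply, "heap_allocsz_bytes = %"PRIuSIZE"\n",
                      stats.heap_allocsz_bytes);
    }

    unixctl_command_reply(conn, ds_cstr(&reply));
    ds_destroy(&reply);
}

As your v1 output above shows, rte_malloc_dump_stats() already covers everything this would print, so dropping it makes sense.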

Acked-by: Eelco Chaudron <echaudro at redhat.com>

>>
>>>>> Signed-off-by: Eli Britstein <elibr at nvidia.com>
>>>>> Reviewed-by: Salem Sol <salems at nvidia.com>
>>>>> ---
>>>>>    NEWS                 |  2 ++
>>>>>    lib/dpdk-unixctl.man |  2 ++
>>>>>    lib/dpdk.c           | 30 ++++++++++++++++++++++++++++++
>>>>>    3 files changed, 34 insertions(+)
>>>>>
>>>>> diff --git a/NEWS b/NEWS
>>>>> index 95cf922aa..705baa90d 100644
>>>>> --- a/NEWS
>>>>> +++ b/NEWS
>>>>> @@ -9,6 +9,8 @@ Post-v2.15.0
>>>>>         * New option '--no-record-hostname' to disable hostname configuration
>>>>>           in ovsdb on startup.
>>>>>         * New command 'record-hostname-if-not-set' to update hostname in ovsdb.
>>>>> +   - DPDK:
>>>>> +     * New debug appctl command 'dpdk/get-malloc-stats'.
>>>>>
>>>>>
>>>>>    v2.15.0 - 15 Feb 2021
>>>>> diff --git a/lib/dpdk-unixctl.man b/lib/dpdk-unixctl.man
>>>>> index 2d6d576f2..a0d1fa2ea 100644
>>>>> --- a/lib/dpdk-unixctl.man
>>>>> +++ b/lib/dpdk-unixctl.man
>>>>> @@ -10,5 +10,7 @@ list of words separated by spaces: a word can be either a logging \fBlevel\fR
>>>>>    \fBnotice\fR, \fBinfo\fR or \fBdebug\fR) or a \fBpattern\fR matching DPDK
>>>>>    components (see \fBdpdk/log-list\fR command on \fBovs\-appctl\fR(8)) separated
>>>>>    by a colon from the logging \fBlevel\fR to apply.
>>>>> +.IP "\fBdpdk/get-malloc-stats\fR"
>>>>> +Prints the heap information statistics about DPDK malloc.
>>>>>    .RE
>>>>>    .
>>>>> diff --git a/lib/dpdk.c b/lib/dpdk.c
>>>>> index 319540394..a22de66eb 100644
>>>>> --- a/lib/dpdk.c
>>>>> +++ b/lib/dpdk.c
>>>>> @@ -25,6 +25,7 @@
>>>>>    #include <rte_cpuflags.h>
>>>>>    #include <rte_errno.h>
>>>>>    #include <rte_log.h>
>>>>> +#include <rte_malloc.h>
>>>>>    #include <rte_memzone.h>
>>>>>    #include <rte_version.h>
>>>>>
>>>>> @@ -356,6 +357,33 @@ dpdk_unixctl_log_set(struct unixctl_conn *conn, int argc, const char *argv[],
>>>>>        unixctl_command_reply(conn, NULL);
>>>>>    }
>>>>>
>>>>> +static void
>>>>> +dpdk_unixctl_get_malloc_stats(struct unixctl_conn *conn,
>>>>> +                              int argc OVS_UNUSED,
>>>>> +                              const char *argv[] OVS_UNUSED,
>>>>> +                              void *aux OVS_UNUSED)
>>>>> +{
>>>>> +    char *response = NULL;
>>>>> +    FILE *stream;
>>>>> +    size_t size;
>>>>> +
>>>>> +    stream = open_memstream(&response, &size);
>>>>> +    if (!stream) {
>>>>> +        response = xasprintf("Unable to open memstream: %s.",
>>>>> +                             ovs_strerror(errno));
>>>>> +        unixctl_command_reply_error(conn, response);
>>>>> +        goto out;
>>>>> +    }
>>>>> +
>>>>> +    rte_malloc_dump_stats(stream, NULL);
>>>>> +
>>>>> +    fclose(stream);
>>>>> +
>>>>> +    unixctl_command_reply(conn, response);
>>>>> +out:
>>>>> +    free(response);
>>>>> +}
>>>>> +
>>>>>    static bool
>>>>>    dpdk_init__(const struct smap *ovs_other_config)
>>>>>    {
>>>>> @@ -525,6 +553,8 @@ dpdk_init__(const struct smap *ovs_other_config)
>>>>>                                 dpdk_unixctl_mem_stream, rte_log_dump);
>>>>>        unixctl_command_register("dpdk/log-set", "{level | pattern:level}", 0,
>>>>>                                 INT_MAX, dpdk_unixctl_log_set, NULL);
>>>>> +    unixctl_command_register("dpdk/get-malloc-stats", "", 0, 0,
>>>>> +                             dpdk_unixctl_get_malloc_stats, NULL);
>>>>>
>>>>>        /* We are called from the main thread here */
>>>>>        RTE_PER_LCORE(_lcore_id) = NON_PMD_CORE_ID;
>>>>> --
>>>>> 2.28.0.2311.g225365fb51
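
FWIW, with the patch applied the new command is invoked through ovs-appctl against the ovs-vswitchd unixctl socket, e.g.:

    ovs-appctl dpdk/get-malloc-stats

and the reply is the rte_malloc_dump_stats() heap dump in the same format as the "Heap id:" listing shown earlier in this thread.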


