[ovs-dev] [PATCH v2] Documentation: Cleanup PMD information.

Kevin Traynor ktraynor at redhat.com
Thu Aug 26 16:28:56 UTC 2021


On 24/08/2021 12:40, David Marchand wrote:
> On Tue, Aug 10, 2021 at 2:10 PM Kevin Traynor <ktraynor at redhat.com> wrote:
>>
>> The 'Port/Rx Queue Assignment to PMD Threads' section has
>> expanded over time and now includes info about stats/commands,
>> manual pinning and different options for OVS assigning Rxqs to
>> PMDs.
>>
>> Split them into different sections with sub-headings and move
>> the two similar paragraphs about stats together.
>>
>> Rename 'Automatic assignment of Port/Rx Queue to PMD Threads'
>> section to 'PMD Automatic Load Balance'.
>>
>> A few other minor cleanups as I was reading.
>>
>> Signed-off-by: Kevin Traynor <ktraynor at redhat.com>
>> Acked-by: Adrian Moreno <amorenoz at redhat.com>
>> ---
>>
>> v2:
>> - a couple of small fixes as per Adrian's comments
>> - remove duplicate PMD ALB conditions paragraph
>> ---
>>  Documentation/topics/dpdk/pmd.rst | 104 ++++++++++++++++--------------
>>  1 file changed, 54 insertions(+), 50 deletions(-)
>>
>> diff --git a/Documentation/topics/dpdk/pmd.rst b/Documentation/topics/dpdk/pmd.rst
>> index 95fa7af12..b0e2419c2 100644
>> --- a/Documentation/topics/dpdk/pmd.rst
>> +++ b/Documentation/topics/dpdk/pmd.rst
>> @@ -75,8 +75,47 @@ for enabling things like multiqueue for :ref:`physical <dpdk-phy-multiqueue>`
>>  and :ref:`vhost-user <dpdk-vhost-user>` interfaces.
>>
>> -To show port/Rx queue assignment::
>> +Rx queues will be assigned to PMD threads by OVS, or they can be manually
>> +pinned to PMD threads by the user.
>> +
>> +To see the port/Rx queue assignment and current measured usage history of PMD
>> +core cycles for each Rx queue::
>>
>>      $ ovs-appctl dpif-netdev/pmd-rxq-show
>>
>> +.. note::
>> +
>> +   A history of one minute is recorded and shown for each Rx queue to allow for
>> +   traffic pattern spikes. Any changes in the Rx queue's PMD core cycles usage,
>> +   due to traffic pattern or reconfig changes, will take one minute to be fully
>> +   reflected in the stats.
>> +
> 
> XXX see below
> 
>> +.. versionchanged:: 2.16.0
>> +
>> +   A ``overhead`` statistics is shown per PMD: it represents the number of
>> +   cycles inherently consumed by the OVS PMD processing loop.
>> +
>> +Rx queue to PMD assignment takes place whenever there are configuration changes
>> +or can be triggered by using::
>> +
>> +    $ ovs-appctl dpif-netdev/pmd-rxq-rebalance
>> +
>> +.. versionchanged:: 2.6.0
>> +
>> +   The ``pmd-rxq-show`` command was added in OVS 2.6.0.
> 
> It seems unrelated to the pmd-rxq-rebalance command itself.
> I would either move this comment next to the first reference to the
> pmd-rxq-show command (i.e. at XXX, before the overhead stat comment),
> or drop it.
> 

true - I moved it to where you suggested

> 
>> +
>> +.. versionchanged:: 2.9.0
>> +
>> +   Utilization-based allocation of Rx queues to PMDs and the
>> +   ``pmd-rxq-rebalance`` command were added in OVS 2.9.0. Prior to this,
>> +   allocation was round-robin and processing cycles were not taken into
>> +   consideration.
>> +
>> +   In addition, the output of ``pmd-rxq-show`` was modified to include
>> +   Rx queue utilization of the PMD as a percentage. Prior to this, tracking of
>> +   stats was not available.
>> +
>> +
> 
> nit: this double empty line is not consistent with the rest of the
> doc, is there a need for it?
> 

nope, removed it in v3
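
One more aside on the one-minute history note quoted further up: the doc
doesn't spell out how that history is sampled, but one way to picture the
lag it describes is a rolling mean catching up. A tiny illustration (the
sample count and class name below are invented for the example, this is
not the OVS code):

    from collections import deque

    class RxqLoadHistory:
        """Rolling mean of recent load samples for one Rx queue."""
        def __init__(self, samples_per_minute=6):   # hypothetical count
            self.window = deque(maxlen=samples_per_minute)

        def record(self, busy_pct):
            self.window.append(busy_pct)

        def reported(self):
            return sum(self.window) / len(self.window)

    hist = RxqLoadHistory()
    for _ in range(6):
        hist.record(10)        # steady 10% load for the first minute
    hist.record(90)            # traffic spikes to 90%
    print(hist.reported())     # ~23%: only one sample reflects the spike yet

Until every sample in the window post-dates the change, the reported value
trails the real load, which is why the note says it takes a minute to be
fully reflected.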

> 
>> +Port/Rx Queue assignment to PMD threads by manual pinning
>> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>  Rx queues may be manually pinned to cores. This will change the default Rx
>>  queue assignment to PMD threads::
>> @@ -117,4 +156,6 @@ If using ``pmd-rxq-assign=group`` PMD threads with *pinned* Rxqs can be
>>     a *non-isolated* PMD, that will remain *non-isolated*.
>>
>> +Automatic Port/Rx Queue assignment to PMD threads
>> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>  If ``pmd-rxq-affinity`` is not set for Rx queues, they will be assigned to PMDs
>>  (cores) automatically.
>> @@ -126,7 +167,9 @@ The algorithm used to automatically assign Rxqs to PMDs can be set by::
>>  By default, ``cycles`` assignment is used where the Rxqs will be ordered by
>>  their measured processing cycles, and then be evenly assigned in descending
>> -order to PMDs based on an up/down walk of the PMDs. For example, where there
>> -are five Rx queues and three cores - 3, 7, and 8 - available and the measured
>> -usage of core cycles per Rx queue over the last interval is seen to be:
>> +order to PMDs. The PMD that will be selected for a given Rxq will be the next
>> +one in alternating ascending/descending order based on core id. For example,
>> +where there are five Rx queues and three cores - 3, 7, and 8 - available and
>> +the measured usage of core cycles per Rx queue over the last interval is seen
>> +to be:
>>
>>  - Queue #0: 30%
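
As a side note for anyone trying to picture the walk described in that
hunk: a rough sketch of an alternating up/down deal is below. The queue
names and loads are invented, and the exact behaviour at the ends of the
core list is my assumption, not something the patch states.

    def assign_rxqs(rxq_cycles, core_ids):
        """Deal rxqs (busiest first) onto PMD cores with an up/down walk."""
        cores = sorted(core_ids)
        assignment = {c: [] for c in cores}
        # Order the rxqs by measured cycles, highest first.
        ordered = sorted(rxq_cycles, key=rxq_cycles.get, reverse=True)
        idx, step = 0, 1
        for rxq in ordered:
            assignment[cores[idx]].append(rxq)
            if idx + step < 0 or idx + step >= len(cores):
                step = -step   # turn around; the end core is used again
            else:
                idx += step
        return assignment

    # Hypothetical loads (%) for five rx queues, PMDs on cores 3, 7 and 8.
    print(assign_rxqs({"q0": 30, "q1": 80, "q2": 60, "q3": 70, "q4": 10},
                      [3, 7, 8]))
    # -> {3: ['q1'], 7: ['q3', 'q4'], 8: ['q2', 'q0']}
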
>> @@ -184,48 +227,11 @@ The Rx queues may be assigned to the cores in the following order::
>>      Core 8: P1Q0 |
>>
>> -To see the current measured usage history of PMD core cycles for each Rx
>> -queue::
>> -
>> -    $ ovs-appctl dpif-netdev/pmd-rxq-show
>> -
>> -.. note::
>> -
>> -   A history of one minute is recorded and shown for each Rx queue to allow for
>> -   traffic pattern spikes. Any changes in the Rx queue's PMD core cycles usage,
>> -   due to traffic pattern or reconfig changes, will take one minute to be fully
>> -   reflected in the stats.
>> -
>> -.. versionchanged:: 2.16.0
>> -
>> -   A ``overhead`` statistics is shown per PMD: it represents the number of
>> -   cycles inherently consumed by the OVS PMD processing loop.
>> -
>> -Rx queue to PMD assignment takes place whenever there are configuration changes
>> -or can be triggered by using::
>> -
>> -    $ ovs-appctl dpif-netdev/pmd-rxq-rebalance
>> -
>> -.. versionchanged:: 2.6.0
>> -
>> -   The ``pmd-rxq-show`` command was added in OVS 2.6.0.
>> -
>> -.. versionchanged:: 2.9.0
>> -
>> -   Utilization-based allocation of Rx queues to PMDs and the
>> -   ``pmd-rxq-rebalance`` command were added in OVS 2.9.0. Prior to this,
>> -   allocation was round-robin and processing cycles were not taken into
>> -   consideration.
>> -
>> -   In addition, the output of ``pmd-rxq-show`` was modified to include
>> -   Rx queue utilization of the PMD as a percentage. Prior to this, tracking of
>> -   stats was not available.
>> -
>> -Automatic assignment of Port/Rx Queue to PMD Threads (experimental)
>> --------------------------------------------------------------------
>> +PMD Automatic Load Balance (experimental)
>> +-----------------------------------------
>>
>>  Cycle or utilization based allocation of Rx queues to PMDs gives efficient
>>  load distribution but it is not adaptive to change in traffic pattern
>> -occurring over the time. This causes uneven load among the PMDs which results
>> -in overall lower throughput.
>> +occurring over the time. This may cause an uneven load among the PMDs which
>> +results in overall lower throughput.
>>
>>  To address this automatic load balancing of PMDs can be set by::
>> @@ -233,10 +239,8 @@ To address this automatic load balancing of PMDs can be set by::
>>      $ ovs-vsctl set open_vswitch . other_config:pmd-auto-lb="true"
>>
>> -If pmd-auto-lb is set to true AND cycle based assignment is enabled then auto
>> -load balancing of PMDs is enabled provided there are 2 or more non-isolated
>> -PMDs and at least one of these PMDs is polling more than one RX queue. So,
>> -following conditions need to be met to have Auto Load balancing enabled:
>> +The following conditions need to be met to have Auto Load balancing
>> +enabled:
>>
>> -1. cycle based assignment of RX queues to PMD is enabled.
>> +1. cycle or group based assignment of RX queues to PMD is enabled.
>>  2. pmd-auto-lb is set to true.
>>  3. There are two or more non-isolated PMDs present.
>> --
>> 2.31.1
>>
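
Putting the enable conditions from the last hunk together, a reader could
think of them roughly as below. This is illustrative only, not OVS code,
and the argument names are invented; the last check comes from the
paragraph the hunk replaces ("at least one of these PMDs is polling more
than one RX queue").

    def alb_conditions_met(pmd_rxq_assign, pmd_auto_lb, rxqs_per_noniso_pmd):
        """rxqs_per_noniso_pmd: number of rx queues polled by each
        non-isolated PMD, one entry per PMD."""
        return (pmd_rxq_assign in ("cycles", "group")         # condition 1
                and pmd_auto_lb                               # condition 2
                and len(rxqs_per_noniso_pmd) >= 2             # condition 3
                and any(n > 1 for n in rxqs_per_noniso_pmd))  # >1 rxq on a PMD

    print(alb_conditions_met("cycles", True, [3, 1]))       # True
    print(alb_conditions_met("roundrobin", True, [3, 1]))   # False
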
> 
> The rest lgtm.
> I would vote for backporting to 2.16, wdyt?
> 
> 

Yeah, it's just a cleanup, but might as well backport to 2.16 as it's
still fresh and people may want to read those sections because of the
new functionality that was added.

Thanks for the review,
Kevin.


