[ovs-dev] [ovs-dev, 4/6] dpif-netdev: Flush the packets in intermediate queue.
Bodireddy, Bhanuprakash
bhanuprakash.bodireddy at intel.com
Thu Jun 29 10:26:14 UTC 2017
>I found another issue while testing w.r.t. vhostuser ports.
>
>When a non-PMD thread tries to send packets to a vHostUser port, the call
>path is:
>    dp_execute_cb()
>      netdev_dpdk_vhost_send()        [(may_steal == true) and (pkt src type ==
>                                       DPBUF_MALLOC)]
>        dpdk_do_tx_copy()
>          __netdev_dpdk_vhost_send()  [pkts are queued until the cnt
>                                       reaches 32]
>
>The packets aren't flushed, and the ping can fail in these cases. To verify
>that it works, I invoked 'vhost_tx_burst()' in dpdk_do_tx_copy().
>But you mentioned in the other mail that this may pose a problem w.r.t.
>flood traffic having higher priority. What is the best place to flush the
>non-PMD thread queues?
A better solution may be to flush in dpif_netdev_run(), after
dp_netdev_process_rxq_port(), for non-PMD threads:

----------------------dpif_netdev_run()--------------------------------------
    for (i = 0; i < port->n_rxq; i++) {
        dp_netdev_process_rxq_port(non_pmd, port->rxqs[i].rx,
                                   port->port_no);
        dp_netdev_flush_txq_ports(non_pmd);
    }
Bhanuprakash.
>
>>
>>>At least, you have to flush non-PMD threads too.
>>
>>In the case of non-PMD threads we don't have to flush, as the packets
>>aren't queued but are burst immediately. The call path on the transmit side is:
>>
>>Vswitchd thread:
>>
>> dp_execute_cb()
>> netdev_send()
>> netdev_dpdk_send__()
>> dpdk_do_tx_copy()
>> netdev_dpdk_eth_tx_burst(). [ Burst packets immediately]
>>
>>- Bhanuprakash.
>>
>>>
>>>On 18.06.2017 22:56, Bhanuprakash Bodireddy wrote:
>>>> Under low-rate traffic conditions, there can be two issues.
>>>> (1) Packets can potentially get stuck in the intermediate queue.
>>>> (2) Latency of the packets can increase significantly due to
>>>> buffering in the intermediate queue.
>>>>
>>>> This commit handles issue (1) by flushing the tx port queues
>>>> from the PMD processing loop. It also addresses issue (2) by
>>>> flushing the tx queues after every rxq port processing. This reduces
>>>> the latency without impacting the forwarding throughput.
>>>>
>>>> MASTER
>>>> ------
>>>> Pkt size   min(ns)   avg(ns)   max(ns)
>>>>   512       4,631     5,022    309,914
>>>>  1024       5,545     5,749    104,294
>>>>  1280       5,978     6,159     45,306
>>>>  1518       6,419     6,774    946,850
>>>>
>>>> MASTER + COMMIT
>>>> ---------------
>>>> Pkt size   min(ns)   avg(ns)   max(ns)
>>>>   512       4,711     5,064    182,477
>>>>  1024       5,601     5,888    701,654
>>>>  1280       6,018     6,491    533,037
>>>>  1518       6,467     6,734    312,471
>>>>
>>>> PMDs can be torn down and spawned at runtime, and so the rxq and
>>>> txq mapping of the PMD threads can change. In a few cases packets can
>>>> get stuck in the queue due to reconfiguration, and this commit helps
>>>> flush the queues.
>>>>
>>>> Suggested-by: Eelco Chaudron <echaudro at redhat.com>
>>>> Reported-at: https://mail.openvswitch.org/pipermail/ovs-dev/2017-April/331039.html
>>>> Signed-off-by: Bhanuprakash Bodireddy
>>>> <bhanuprakash.bodireddy at intel.com>
>>>> Signed-off-by: Antonio Fischetti <antonio.fischetti at intel.com>
>>>> Co-authored-by: Antonio Fischetti <antonio.fischetti at intel.com>
>>>> Signed-off-by: Markus Magnusson <markus.magnusson at ericsson.com>
>>>> Co-authored-by: Markus Magnusson <markus.magnusson at ericsson.com>
>>>> Acked-by: Eelco Chaudron <echaudro at redhat.com>
>>>> ---
>>>> lib/dpif-netdev.c | 5 +++++
>>>> 1 file changed, 5 insertions(+)
>>>>
>>>> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
>>>> index d59208e..dfd88aa 100644
>>>> --- a/lib/dpif-netdev.c
>>>> +++ b/lib/dpif-netdev.c
>>>> @@ -3761,6 +3761,8 @@ reload:
>>>> for (i = 0; i < poll_cnt; i++) {
>>>> dp_netdev_process_rxq_port(pmd, poll_list[i].rx,
>>>> poll_list[i].port_no);
>>>> +
>>>> + dp_netdev_flush_txq_ports(pmd);
>>>> }
>>>>
>>>> if (lc++ > 1024) {
>>>> @@ -3781,6 +3783,9 @@ reload:
>>>> }
>>>> }
>>>>
>>>> + /* Flush the queues as part of reconfiguration logic. */
>>>> + dp_netdev_flush_txq_ports(pmd);
>>>> +
>>>> poll_cnt = pmd_load_queues_and_ports(pmd, &poll_list);
>>>> exiting = latch_is_set(&pmd->exit_latch);
>>>> /* Signal here to make sure the pmd finishes
>>>>