[ovs-discuss] ovs-dpdk performance not stable

michael me 1michaelmesguich at gmail.com
Wed Apr 18 13:22:58 UTC 2018


Hi Ian,

In the deployment I do have vhost-user ports; below is the full output of
the ovs-appctl dpif-netdev/pmd-rxq-show command.
root at W:/# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 1:
        isolated : false
        port: dpdk1     queue-id: 0 1 2 3 4 5 6 7
        port: dpdk0     queue-id: 0 1 2 3 4 5 6 7
        port: vhu1cbd23fd-82    queue-id: 0
        port: vhu018b3f01-39    queue-id: 0

What is strange to me, and what I don't understand, is why I have only one
queue on the vhost side and eight on the DPDK side. I understood that QEMU
automatically negotiated the same amount. However, I am using only one core
for the VM and one core for the PMD.
In this setting I have eight cores in the system; is that the reason that I
see eight possible queues?
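For what it's worth, my understanding is that the vhost queue count is
negotiated with the guest, so multiqueue has to be enabled on both sides. A
rough sketch of what I mean (the socket path, queue counts, and guest
interface name here are illustrative, not my actual setup):

```shell
# Host side: ask QEMU for a multi-queue vhost-user netdev
# (path, id and queue count are placeholders):
qemu-system-x86_64 ... \
    -chardev socket,id=char1,path=/var/run/openvswitch/vhu-example \
    -netdev type=vhost-user,id=net1,chardev=char1,queues=8 \
    -device virtio-net-pci,netdev=net1,mq=on

# Guest side: enable the extra queues on the virtio interface
# (eth0 is a placeholder for the guest interface name):
ethtool -L eth0 combined 8
```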
The setup is North/South (VM to physical network).
As for pinning the PMD, I always pin the PMD to core 1 (mask=0x2).
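For completeness, this is how I set the masks (the values are the ones
described above):

```shell
# Run the single PMD thread on core 1 (bit 1 -> mask 0x2)
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x2

# Keep the non-PMD lcore threads on core 0
ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x1

# Verify which cores the PMDs and rxqs ended up on
ovs-appctl dpif-netdev/pmd-rxq-show
```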

When I set n_rxq and n_txq to high values (even 64 or above) I see no
drops for around a minute or two, and then sudden bursts of drops, as if
a cache had filled up. Have you seen something similar?
I tried to play with "max-idle", but it didn't seem to help.
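For reference, these are the knobs I was adjusting (a sketch; the values
shown are illustrative):

```shell
# Increase the number of rx queues and the descriptor ring sizes on a
# dpdk port:
ovs-vsctl set Interface dpdk0 options:n_rxq=8
ovs-vsctl set Interface dpdk0 options:n_rxq_desc=2048 options:n_txq_desc=2048

# Raise the datapath flow idle time (in ms) so cached flows are kept longer:
ovs-vsctl set Open_vSwitch . other_config:max-idle=10000
```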

Originally, I had a setup with OVS 2.9 and DPDK 17.11 and I was not able to
get better performance, but it could be that I didn't tweak it as much.
However, I am trying to deploy a setup that I can install without needing
to build from source with make.

Thank you for any input,
Michael

On Tue, Apr 17, 2018 at 6:28 PM, Stokes, Ian <ian.stokes at intel.com> wrote:

> Hi Michael,
>
>
>
> Are you using dpdk vhostuser ports in this deployment?
>
>
>
> I would expect to see them listed in the output of ovs-appctl
> dpif-netdev/pmd-rxq-show you posted below.
>
>
>
> Can you describe the expected traffic flow (is it North/South using DPDK
> phy devices as well as vhost devices, or East/West between VM interfaces
> only)?
>
>
>
> OVS 2.6 has the ability to isolate and pin rxq queues for dpdk devices to
> specific PMDs also. This can help provide more stable throughput and
> defined behavior. Without doing this I believe the distribution of rxqs
> was dealt with in a round-robin manner, which could change between
> deployments. This could explain what you are seeing, i.e. why sometimes
> the traffic runs without drops.
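> A minimal sketch of the rxq pinning referred to above (the port name,
> queue IDs and core IDs are illustrative):
>
> ```shell
> # Pin dpdk0 rxq 0 to core 1 and rxq 1 to core 2; this requires a
> # pmd-cpu-mask covering both cores, e.g. 0x6:
> ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
> ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:1,1:2"
> ```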
>
>
>
> You could try to examine ovs-appctl dpif-netdev/pmd-rxq-show when traffic
> is dropping and then again when traffic is passing without issue. This
> output, along with the flows in each case, might provide a clue as to what
> is happening. If there is a difference between the two you could
> investigate pinning the rxqs for your specific setup, although you will
> only benefit from this when you have at least 2 PMDs instead of 1.
>
>
>
> Also, OVS 2.6 and DPDK 16.07 aren’t the latest releases of OVS & DPDK;
> have you tried the same tests using the latest OVS 2.9 and DPDK 17.11?
>
>
>
> Ian
>
>
>
> *From:* ovs-discuss-bounces at openvswitch.org
> [mailto:ovs-discuss-bounces at openvswitch.org] *On Behalf Of *michael me
> *Sent:* Tuesday, April 17, 2018 10:42 AM
> *To:* ovs-discuss at openvswitch.org
> *Subject:* [ovs-discuss] ovs-dpdk performance not stable
>
>
>
> Hi Everyone,
>
>
>
> I would greatly appreciate any input.
>
>
>
> The setting that i am working with is a host with ovs-dpdk connected to a
> VM.
>
>
>
> What I see when I do a performance test is that after about a minute or
> two I suddenly have many drops, as if the cache was full and was dumped
> improperly.
>
> I tried to play with the settings of the n_rxq and n_txq values, which
> helps, but probably only until the cache is filled, and then I have drops.
>
> The thing is that sometimes, rarely, as if by chance, the performance
> continues without drops.
>
>
>
> My settings are as follows:
>
> OVS version: 2.6.1
> DPDK version: 16.07.2
> NIC model: Ethernet controller: Intel Corporation Ethernet Connection I354
> (rev 03)
> pmd-cpu-mask: core 1 (mask=0x2)
> lcore mask: core zero ("dpdk-lcore-mask=1")
>
>
>
> Port "dpdk0"
>
>             Interface "dpdk0"
>
>                 type: dpdk
>
>                 options: {n_rxq="8", n_rxq_desc="2048", n_txq="9",
> n_txq_desc="2048"}
>
>
>
> ovs-appctl dpif-netdev/pmd-rxq-show
>
> pmd thread numa_id 0 core_id 1:
>
>         isolated : false
>
>         port: dpdk0     queue-id: 0 1 2 3 4 5 6 7
>
>         port: dpdk1     queue-id: 0 1 2 3 4 5 6 7
>
>
>
> Thanks,
>
> Michael
>
