[ovs-discuss] OVS DPDK performance for TCP traffic versus UDP

Onkar Pednekar onkar3006 at gmail.com
Tue Nov 27 18:42:39 UTC 2018


Hi,

I managed to resolve this performance issue. I got improved performance after
turning off mrg_rxbuf and increasing the rx and tx queue sizes to 1024.
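
For anyone hitting the same issue, the relevant part of the QEMU command
line now looks roughly like this (the socket path, ids and exact option set
are placeholders for my actual setup, not a verbatim copy):

    -chardev socket,id=char0,path=/tmp/vhostuser0.sock
    -netdev type=vhost-user,id=net0,chardev=char0
    -device virtio-net-pci,netdev=net0,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024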

Thanks,
Onkar

On Thu, Nov 8, 2018 at 2:57 PM Onkar Pednekar <onkar3006 at gmail.com> wrote:

> Hi,
>
> We figured out that the packet processing appliance within VM (which reads
> from raw socket on the dpdk vhost user interface) requires more packets per
> second to give higher throughput. Else its cpu utilization is idle most of
> the times.
>
> We increased the "tx-flush-interval" from the default of 0 to 500 and the
> throughput went up from 300 Mbps to 600 Mbps (we expect close to 1G). We
> also saw the PPS on the VM RX interface increase from 35 kpps to 68 kpps.
> Higher values of "tx-flush-interval" don't help.
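>
> For reference, that value is set along these lines (the units are
> microseconds):
>
>     $ ovs-vsctl set Open_vSwitch . other_config:tx-flush-interval=500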
>
> Disabling mrg_rxbuf also seems to give better performance, i.e. setting
> mrg_rxbuf=off on the virtio-net-pci device in QEMU. But the PPS is still
> around 65 k on the VM dpdk vhostuser RX interface and the throughput is
> below 700 Mbps.
>
> *Are there any other parameters that can be tuned to increase the number
> of packets per second forwarded from the physical dpdk interface to the
> dpdk vhostuser interface inside the VM?*
>
> Thanks,
> Onkar
>
> On Fri, Oct 5, 2018 at 1:45 PM Onkar Pednekar <onkar3006 at gmail.com> wrote:
>
>> Hi Tiago,
>>
>> Sure. I'll try that.
>>
>> Thanks,
>> Onkar
>>
>> On Fri, Oct 5, 2018 at 9:06 AM Lam, Tiago <tiago.lam at intel.com> wrote:
>>
>>> Hi Onkar,
>>>
>>> Thanks for shedding some light.
>>>
>>> I don't think your difference in performance has to do with your
>>> OvS-DPDK setup. If you're taking the measurements directly from the
>>> iperf server side you'd be going through the "Internet". Assuming you
>>> don't have a dedicated connection there, things like your connection's
>>> bandwidth and the end-to-end RTT start to matter considerably,
>>> especially for TCP.
>>>
>>> To get to the bottom of it I'd advise you to take the iperf server and
>>> connect it directly to the first machine (Machine 1). That way you
>>> exclude any "Internet" interference and can measure the performance of
>>> a pvp scenario first.
>>>
>>> Assuming you're using kernel forwarding inside the VMs, if you want to
>>> squeeze out the extra performance it is probably wise to use DPDK
>>> testpmd to forward the traffic inside the VMs as well, as explained here:
>>>
>>> http://docs.openvswitch.org/en/latest/howto/dpdk/#phy-vm-phy-vhost-loopback
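>>>
>>> As a rough sketch, that would look something like the following inside
>>> the guest (the core list and memory size are just placeholders, and the
>>> exact testpmd options depend on your DPDK version):
>>>
>>>     $ testpmd -l 0,1,2 --socket-mem 512 -- -i
>>>     testpmd> start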
>>>
>>> Regards,
>>> Tiago.
>>>
>>> On 04/10/2018 21:06, Onkar Pednekar wrote:
>>> > Hi Tiago,
>>> >
>>> > Thanks for your reply.
>>> >
>>> > Below are the answers to your questions in-line.
>>> >
>>> >
>>> > On Thu, Oct 4, 2018 at 4:07 AM Lam, Tiago <tiago.lam at intel.com> wrote:
>>> >
>>> >     Hi Onkar,
>>> >
>>> >     Thanks for your email. Your setup isn't very clear to me, so a few
>>> >     queries in-line.
>>> >
>>> >     On 04/10/2018 06:06, Onkar Pednekar wrote:
>>> >     > Hi,
>>> >     >
>>> >     > I have been experimenting with OVS DPDK on 1G interfaces. The
>>> >     > system has 8 cores (hyperthreading enabled) and a mix of dpdk and
>>> >     > non-dpdk capable ports, but the data traffic runs only on dpdk
>>> >     > ports.
>>> >     >
>>> >     > DPDK ports are backed by vhost user netdev and I have configured
>>> >     > the system so that hugepages are enabled, CPU cores are isolated
>>> >     > with PMD threads allocated to them, and the vCPUs are pinned.
>>> >     >
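>>> >     > For reference, the relevant OVS bits look roughly like this (the
>>> >     > core mask and memory size below are only placeholders, not my
>>> >     > exact values):
>>> >     >
>>> >     >     $ ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
>>> >     >     $ ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024"
>>> >     >     $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
>>> >     >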
>>> >     > When I run UDP traffic, I see ~ 1G throughput on dpdk interfaces
>>> >     > with < 1% packet loss. However, with TCP traffic, I see around
>>> >     > 300 Mbps throughput. I see that setting generic receive offload to
>>> >     > off helps, but the TCP throughput is still far below the NIC's
>>> >     > capabilities. I know that there will be some performance
>>> >     > degradation for TCP compared to UDP, but this is way below what I
>>> >     > expected.
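>>> >     >
>>> >     > For completeness, GRO was turned off inside the VM with something
>>> >     > along these lines ("eth0" is just a placeholder for the guest
>>> >     > interface name):
>>> >     >
>>> >     >     $ ethtool -K eth0 gro off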
>>> >     >
>>> >
>>> >     When transmitting traffic between the DPDK ports, what are the flows
>>> >     you have set up? Does it follow a p2p or pvp setup? In other words,
>>> >     does the traffic flow between the VM and the physical ports, or only
>>> >     between physical ports?
>>> >
>>> >
>>> >  The traffic is between the VM and the physical ports.
>>> >
>>> >
>>> >     > I don't see any packets dropped for TCP on the internal VM
>>> >     > (virtual) interfaces.
>>> >     >
>>> >     > I would like to know if there are any settings (offloads) for the
>>> >     > interfaces or any other config I might be missing.
>>> >
>>> >     What is the MTU set on the DPDK ports? Both physical and vhost-user?
>>> >
>>> >     $ ovs-vsctl get Interface [dpdk0|vhostuserclient0] mtu
>>> >
>>> >
>>> > MTU set on physical ports = 2000
>>> > MTU set on vhostuser ports = 1500
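>>> >
>>> > For reference, those were set with mtu_request, along the lines of the
>>> > following (the port name is a placeholder):
>>> >
>>> >     $ ovs-vsctl set Interface dpdk0 mtu_request=2000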
>>> >
>>> >
>>> >     This will help to clarify some doubts around your setup first.
>>> >
>>> >     Tiago.
>>> >
>>> >     >
>>> >     > Thanks,
>>> >     > Onkar
>>> >     >
>>> >     >
>>> >     > _______________________________________________
>>> >     > discuss mailing list
>>> >     > discuss at openvswitch.org
>>> >     > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>>> >     >
>>> >
>>>
>>