[ovs-dev] [PATCH v5] Detailed packet drop statistics per dpdk and vhostuser ports

Sriram Vatala sriram.v at altencalsoftlabs.com
Fri Aug 23 12:43:44 UTC 2019



-----Original Message-----
From: Ilya Maximets <i.maximets at samsung.com> 
Sent: 06 August 2019 22:02
To: Sriram Vatala <sriram.v at altencalsoftlabs.com>; 'Ben Pfaff'
<blp at ovn.org>; 'Stokes, Ian' <ian.stokes at intel.com>
Cc: ovs-dev at openvswitch.org
Subject: Re: [PATCH v5] Detailed packet drop statistics per dpdk and
vhostuser ports

On 24.07.2019 11:55, Sriram Vatala wrote:
> Hi,
> 
> @Ben : Thanks for the response.
> 
> @Ilya, @Ian : Can you please review the patch and provide comments, if any?

Hi.
Thanks for working on this!

One thing about the patch is that it modifies the hot path, so it needs a
performance evaluation before applying. I hope to have some time for that this
week. Have you tested the performance difference with and without this patch?

>>>> Sorry for the late reply. I have tested the performance with and without
this patch for various combinations of packet sizes and flow counts: 64 B,
128 B, 256 B and 512 B packets, each with 1, 10, 400, 1000, 10000 and 1000000
flows. I tested these combinations with both UDP and VxLAN traffic. No
degradation was observed in any of the test cases.

Also, I see that you're inserting a fairly big array into the PADDED_MEMBERS
block. There are a few issues with that (a rough sketch of a possible
restructuring follows the list):

1. There is no need to store the whole 'struct netdev_custom_counter' for
   each netdev instance, because the names take 64 bytes each and they are
   the same for every netdev instance anyway.

2. You're not paying attention to the amount of pad bytes in this section
   after the change. I suspect a big hole in the structure here.
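
A minimal sketch of the kind of split I mean (the enum, counter names and
struct members below are made up for illustration, they are not code from
the patch):

    #include <stdint.h>

    /* Shared, static name table -- stored once, not per netdev instance. */
    enum dpdk_drop_stat {
        DPDK_TX_QFULL_DROPS,        /* Hypothetical counter ids. */
        DPDK_TX_MTU_DROPS,
        DPDK_N_DROP_STATS
    };

    static const char *dpdk_drop_stat_names[DPDK_N_DROP_STATS] = {
        [DPDK_TX_QFULL_DROPS] = "tx_qfull_drops",
        [DPDK_TX_MTU_DROPS]   = "tx_mtu_exceeded_drops",
    };

    struct netdev_dpdk_example {
        /* ... other members ... */

        /* Only the 8-byte values live in each instance; no duplicated
         * 64-byte name strings, so the padded section stays small. */
        uint64_t drop_stats[DPDK_N_DROP_STATS];
    };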

>>>> I have checked your recent patch on the mailing list implementing the
stats fetching ("Refactor vhost custom stats for extensibility"). I will
adopt the same approach for fetching the dpdk/vhost custom stats (a rough
sketch is below) and will send the updated patch v6. Thanks for your
comments.
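
A rough sketch of what the fetch path could look like (the structure layout
below is only an assumption for illustration; the actual code will follow
your refactor patch):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define STATS_NAME_SIZE 64          /* Assumed name buffer size. */
    #define N_DROP_STATS 2              /* Matches the name table above. */

    /* Assumed shape of the custom-stats structures, for illustration only. */
    struct custom_counter {
        uint64_t value;
        char name[STATS_NAME_SIZE];
    };

    struct custom_stats {
        uint16_t size;
        struct custom_counter *counters;
    };

    static const char *drop_stat_names[N_DROP_STATS] = {
        "tx_qfull_drops", "tx_mtu_exceeded_drops",  /* Hypothetical names. */
    };

    /* Copy per-port values and the shared names into 'stats'. */
    static void
    example_get_custom_stats(const uint64_t values[N_DROP_STATS],
                             struct custom_stats *stats)
    {
        stats->size = N_DROP_STATS;
        stats->counters = calloc(stats->size, sizeof *stats->counters);
        for (int i = 0; i < stats->size; i++) {
            stats->counters[i].value = values[i];
            snprintf(stats->counters[i].name, STATS_NAME_SIZE, "%s",
                     drop_stat_names[i]);
        }
    }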

Thanks & Regards,
Sriram.

   Regarding this issue, I'd actually like to remove these cacheline
   alignments from the structure completely (they had no performance impact
   for me previously), but that is a different change, and there was no
   active support from the community when I wanted to do it a few years ago.
   However, there were no strong objections either.
   Ian, do you have any thoughts on this?

Best regards, Ilya Maximets.

> 
> Thanks,
> Sriram.
> 
> -----Original Message-----
> From: Ben Pfaff <blp at ovn.org>
> Sent: 22 July 2019 21:37
> To: Sriram Vatala <sriram.v at altencalsoftlabs.com>
> Cc: ovs-dev at openvswitch.org
> Subject: Re: [PATCH v5] Detailed packet drop statistics per dpdk and 
> vhostuser ports
> 
> On Mon, Jul 22, 2019 at 03:31:53PM +0530, Sriram Vatala wrote:
>> OVS may be unable to transmit packets for multiple reasons and today 
>> there is a single counter to track packets dropped due to any of 
>> those reasons. The most common reason is that a VM is unable to read 
>> packets fast enough causing the vhostuser port transmit queue on the 
>> OVS side to become full. This manifests as a problem with VNFs not 
>> receiving all packets. Having a separate drop counter to track 
>> packets dropped because the transmit queue is full will clearly 
>> indicate that the problem is on the VM side and not in OVS. Similarly,
>> maintaining separate counters for all possible drops helps in
>> indicating the actual cause of the packet drops.
>>
>> This patch adds counters for custom stats to track packets dropped at
>> the port level, and these stats are displayed along with the other
>> stats by the "ovs-vsctl get interface <iface> statistics" command.
>> The detailed stats will be available for both dpdk and vhostuser
>> ports.
>>
>> Signed-off-by: Sriram Vatala <sriram.v at altencalsoftlabs.com>
> 
> Thanks for the revision!  I'm happy with the bits that are important to
> me.  I'll leave the final review to Ian or Ilya.
> 
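
For reference, the new drop counters described in the commit message above
would show up alongside the existing stats in the output of
"ovs-vsctl get interface <iface> statistics", roughly like this (the
interface and counter names are only illustrative; the final names may
change in v6):

    $ ovs-vsctl get interface dpdkvhostuser0 statistics
    {rx_bytes=..., rx_packets=..., tx_bytes=..., tx_packets=...,
     tx_dropped=100, tx_qfull_drops=80, tx_mtu_exceeded_drops=20, ...}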

