[ovs-dev] [PATCH 08/11] metaflow: Extend size of mf_value to 128 bytes.

Jesse Gross jesse at nicira.com
Wed Jul 1 06:21:15 UTC 2015


On Tue, Jun 30, 2015 at 7:56 AM, Loftus, Ciara <ciara.loftus at intel.com> wrote:
>>
>> On Wed, Jun 24, 2015 at 1:17 PM, Ben Pfaff <blp at nicira.com> wrote:
>> > On Fri, Jun 19, 2015 at 04:13:22PM -0700, Jesse Gross wrote:
>> >> Tunnel metadata can be substantially larger than our existing fields
>> >> (up to 124 bytes in a single Geneve option), so this extends the size
>> >> of the data that we can handle in metaflow fields. This also breaks
>> >> a few tests that assume their max size is also the maximum that can
>> >> be handled in a field.
>> >>
>> >> Signed-off-by: Jesse Gross <jesse at nicira.com>
>> >
>> > Did you look around at all to see whether this will unreasonably blow up
>> > any data or algorithms?
>>
>> I don't believe that it should have any significant effect.
>> Generally, code operates on the fields based on mf->n_bytes (with the
>> exception of some memset()s here and there), and I don't think we
>> store these in large numbers for any real length of time.
>
> With this series of patches, in particular patch 10/11 "tunnel: Geneve TLV handling support for OpenFlow", I've measured a significant decrease in performance with the dpdk port type. For example, in a loopback test with 64-byte packets I've seen a 25% drop in throughput.
> I suspect this is related to the size of the new tun_metadata struct. A quick perf analysis shows we're spending significantly more time initialising packet metadata in the dp_netdev_process_rxq_port function.
> Are there any plans to address this performance degradation?

Thanks for pointing that out. I just sent out a patch that will
hopefully avoid the need to initialize the newly enlarged structure.
I don't have a great way of doing performance testing on it myself;
would you mind checking whether it solves the problem you're seeing?
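
For anyone following the thread, here's a minimal sketch of what the size
change in the quoted commit message amounts to. The member names and the
old size are my assumptions rather than what lib/meta-flow.h literally
contains; the point is just that a single Geneve option can carry up to
31 * 4 = 124 bytes of data (the 5-bit length field counts 4-byte words),
so the value union has to grow to at least that:

/* Illustrative only: member names and the previous size are guesses,
 * not copied from lib/meta-flow.h. */
#include <stdint.h>

#define MF_MAX_BYTES 128                /* new upper bound for one field */

union mf_value_sketch {
    uint8_t tun_metadata[MF_MAX_BYTES]; /* holds up to 124 bytes of Geneve
                                         * option data plus padding */
    uint8_t ipv6[16];                   /* probably the largest member
                                         * before this change */
    uint8_t mac[6];
    uint32_t be32;
};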
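
And to put the point about mf->n_bytes above into code (again with
made-up names): per-field operations are bounded by the field's declared
width, not by sizeof() of the value union, so growing the union does not
by itself make them more expensive:

/* Hypothetical descriptor/accessor pair, loosely modeled on the
 * meta-flow code; only n_bytes of a value are ever touched. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct mf_field_sketch {
    size_t n_bytes;                     /* width of this particular field */
};

static bool
mf_value_equal_sketch(const struct mf_field_sketch *mf,
                      const void *a, const void *b)
{
    return !memcmp(a, b, mf->n_bytes);  /* not sizeof(union mf_value) */
}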
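
To illustrate the kind of cost Ciara measured (sizes and names here are
invented for the example, not taken from the datapath code): at 64-byte
packet rates the per-packet budget is tiny, and unconditionally zeroing a
now much larger metadata block for every received packet shows up quickly
in a profile:

/* Toy example of a receive-path metadata init; not OVS code. */
#include <stdint.h>
#include <string.h>

struct toy_tun_metadata {
    uint8_t opts[256];               /* room for several large options */
};

struct toy_pkt_metadata {
    uint32_t in_port;
    struct toy_tun_metadata tun;     /* now dominates the struct's size */
};

static void
toy_rx_init(struct toy_pkt_metadata *md, int n_pkts, uint32_t port)
{
    for (int i = 0; i < n_pkts; i++) {
        /* Zeroing the whole struct is what gets expensive once the
         * tunnel metadata grows: ~260 bytes per packet, every burst. */
        memset(&md[i], 0, sizeof md[i]);
        md[i].in_port = port;
    }
}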
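
I won't restate the patch here, but the general technique for avoiding
that init cost looks something like the sketch below (simplified, names
made up, not the literal patch): clear only the small bookkeeping state
per packet and leave the large option buffer alone until something
actually writes an option.

/* Sketch of "initialize only what's used"; shows the shape of the
 * technique, not the actual change. */
#include <stdint.h>

struct toy_tun_metadata_lazy {
    uint64_t present;                /* bitmap of options actually set */
    uint8_t opts[256];               /* left uninitialized until an
                                      * option is written */
};

static inline void
toy_tun_metadata_init(struct toy_tun_metadata_lazy *tun)
{
    tun->present = 0;                /* clear only the bookkeeping */
}

Readers then consult the presence bitmap before looking at opts[], so
nothing on the fast path depends on the buffer being zeroed.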


