[ovs-dev] [PATCH] datapath: Allow the number of hash entries to exceed TBL_MAX_BUCKETS

Jesse Gross jesse at nicira.com
Wed Aug 3 04:30:01 UTC 2011


On Wed, Aug 3, 2011 at 9:09 AM, Simon Horman <horms at verge.net.au> wrote:
> On Tue, Aug 02, 2011 at 12:09:00PM +0700, Jesse Gross wrote:
>> On Mon, Aug 1, 2011 at 8:55 PM, Simon Horman <horms at verge.net.au> wrote:
>> > * If the number of entries in a table exceeds
>> >  the number of buckets that it has, then
>> >  an attempt will be made to resize the table.
>> >
>> > * There is a limit of TBL_MAX_BUCKETS placed on
>> >  the number of buckets of a table.
>> >
>> > * If this limit is exceeded, keep using the existing table.
>> >  This allows a table to hold more than TBL_MAX_BUCKETS
>> >  entries at the expense of increased hash collisions.
>> >
>> > Signed-off-by: Simon Horman <horms at verge.net.au>
>> >
>> > ---
>> >
>> > It appears that on 64-bit systems TBL_MAX_BUCKETS
>> > is 131072 (128k), not 262144 (256k) as noted in
>> > the comment next to its definition.
>> >
>> > Without this change the number of flows that
>> > can be present in the datapath is limited to 128k.
>> > With this change I am able to achieve significantly
>> > higher flow counts.
>>
>> I don't think it's true that TBL_MAX_BUCKETS is 128k.  I double
>> checked the math and printed out the value of TBL_MAX_BUCKETS and both
>> times came out to 256k.  I don't necessarily object to this patch per
>> se, but it's based on the premise that the size is the limiting
>> factor and that doesn't really seem to be the case so I'd like to
>> understand what is going on a little better before applying this.
>> Could it be that your test is composed of bidirectional flows (like
>> TCP)?  That could explain the discrepancy, as the datapath counts a
>> separate flow in each direction.
>
> Hi Jesse,
>
> thanks for pulling me up on that one. While I stand by my patch,
> I now think that my comment about TBL_MAX_BUCKETS being 128k is wrong.
>
> On further examination I see that TBL_MAX_BUCKETS is 256k;
> however, the maximum number of buckets in the hash table is
> limited to half that number due to the following logic.
>
> struct tbl *tbl_expand(struct tbl *table)
> {
>        int err;
>        int n_buckets = table->n_buckets * 2;
>        struct tbl *new_table;
>
>        if (n_buckets >= TBL_MAX_BUCKETS) {
>                err = -ENOSPC;
>                goto error;
>        }
>        ...
> }
>
> I think that the '>=' should be '>'
> and I will submit a patch to make that change.

Thanks for tracking it down.  This is exactly the type of bug I was
worried we might be masking by allowing additional flows before the
table even reaches what the limit should be.
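
For anyone following along, the one-character change Simon describes
would presumably look something like this (an illustrative sketch of
the check in tbl_expand(), not the actual patch he will send):

struct tbl *tbl_expand(struct tbl *table)
{
        int err;
        int n_buckets = table->n_buckets * 2;
        struct tbl *new_table;

        /* Allow growth up to and including TBL_MAX_BUCKETS; only
         * refuse when doubling would exceed the limit (was '>='). */
        if (n_buckets > TBL_MAX_BUCKETS) {
                err = -ENOSPC;
                goto error;
        }
        ...
}

With that in place the table can actually double all the way up to
TBL_MAX_BUCKETS, and the relaxation in this patch would only come into
play once a table genuinely holds more entries than that limit.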


