[ovs-discuss] max mega flow 64k per pmd or per dpcls?

Darrell Ball dball at vmware.com
Fri Jun 30 17:02:29 UTC 2017



From: Hui Xiang <xianghuir at gmail.com>
Date: Thursday, June 29, 2017 at 6:57 PM
To: Darrell Ball <dball at vmware.com>
Cc: "Bodireddy, Bhanuprakash" <bhanuprakash.bodireddy at intel.com>, "ovs-discuss at openvswitch.org" <ovs-discuss at openvswitch.org>
Subject: Re: [ovs-discuss] max mega flow 64k per pmd or per dpcls?

I am interested in how 'reasonable' is defined here: how was it arrived at, and what are the 'many cases'? Is there any document/link with this information? Please shed some light.

It is based on real usage scenarios for the number of megaflows needed.
The usage is lower in most cases.
Where a larger number is needed, it may imply that more PMD threads, dividing the work among queues, would be better.

Why do you think having more than 64k megaflows per PMD would be optimal?
What are your use cases?
Do you want this number to be larger by default?
Do you want this number to be configurable?


On Thu, Jun 29, 2017 at 10:47 PM, Darrell Ball <dball at vmware.com> wrote:
Q: “How was such an exact number calculated?”

A: It is a reasonable number that accommodates many cases.
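
For reference, the 64k appears to come from a compile-time constant in the userspace datapath; this is roughly how it looked around OVS 2.7 (lib/dpif-netdev.c; worth verifying against your tree):

    /* Maximum number of flows in the flow table (per PMD thread). */
    #define MAX_FLOWS 65536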

Q: “If more ports are added for polling, can I increase the 64k size to a bigger one to avoid contention?”

A: If a larger number is needed, it may imply that adding another PMD thread and dividing the forwarding
work would be best.  Maybe even a smaller number of flows would be better served by more PMDs.
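
If dividing the work is the route taken, these are the usual knobs; the CPU mask and port name below are illustrative only, not values from this thread:

    # Pin PMD threads to two cores (here cores 1 and 2, mask 0x6).
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
    # Give the DPDK port multiple rx queues so the extra PMD has work to poll.
    ovs-vsctl set Interface dpdk0 options:n_rxq=2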





On 6/29/17, 7:23 AM, "ovs-discuss-bounces at openvswitch.org on behalf of Bodireddy, Bhanuprakash <bhanuprakash.bodireddy at intel.com>" wrote:

    > I guess the answer is that the general LLC is 2.5MB per core, so there are
    > 64k flows per thread.

    AFAIK, the number of flows here may not have anything to do with the LLC.  Also, there is the EMC (8k entries) of ~4MB per PMD thread.
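
    (The ~4MB figure is consistent with the EMC constants in lib/dpif-netdev.c around this release, quoted from memory and worth verifying: EM_FLOW_HASH_SHIFT is 13, so the cache has 1 << 13 = 8192 entries, and each entry holds a dp_netdev_flow pointer plus a full netdev_flow_key of a few hundred bytes; 8192 entries x roughly 0.5KB per entry gives the ~4MB quoted above.)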

    Yes, the performance will be good with simple test cases (P2P with 1 PMD thread), as most of this fits into the LLC.  But in real scenarios OvS-DPDK can be memory bound.

    BTW, on my DUT the LLC is 35MB shared across 28 cores (35MB / 28 ≈ 1.25MB per core), so the assumption of 2.5MB per core isn't right.

    - Bhanuprakash.



    > On Fri, Jun 23, 2017 at 11:15 AM, Hui Xiang <xianghuir at gmail.com> wrote:
    > Thanks Darrell,
    >
    > More questions:
    > Why not allocate 64k for each dpcls? Does the 64k just fit into the L3 cache
    > or somewhere? How was such an exact number calculated? If more ports are
    > added for polling, can I increase the 64k size to a bigger one to avoid
    > contention? Thanks.
    >
    > Hui.

    _______________________________________________
    discuss mailing list
    discuss at openvswitch.org
    https://mail.openvswitch.org/mailman/listinfo/ovs-discuss

