[ovs-discuss] max mega flow 64k per pmd or per dpcls?

Hui Xiang xianghuir at gmail.com
Sat Jul 1 00:33:49 UTC 2017


Thanks Darrell, comments inline.

On Sat, Jul 1, 2017 at 1:02 AM, Darrell Ball <dball at vmware.com> wrote:

>
>
>
>
> *From: *Hui Xiang <xianghuir at gmail.com>
> *Date: *Thursday, June 29, 2017 at 6:57 PM
> *To: *Darrell Ball <dball at vmware.com>
> *Cc: *"Bodireddy, Bhanuprakash" <bhanuprakash.bodireddy at intel.com>, "
> ovs-discuss at openvswitch.org" <ovs-discuss at openvswitch.org>
> *Subject: *Re: [ovs-discuss] max mega flow 64k per pmd or per dpcls?
>
>
>
> I am interested in how 'reasonable' is defined here, how it was derived,
> and what the 'many cases' are. Is there any document/link with this
> information? Please shed some light.
>
>
>
> It is based on real usage scenarios for the number of megaflows needed.
>
> The usage may be less in most cases.
>
> In cases where more are needed, it may imply that more threads, with the
> work divided among queues, would be better.
>
Yes, more threads are better, but the overall number of cores is limited:
the more cores pinned to OVS-DPDK PMD threads, the fewer are available for
VMs.
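As an aside, the cores-vs-PMDs trade-off is expressed through the
other_config:pmd-cpu-mask bitmask. A small sketch of building that hex mask
from a list of core IDs (the core numbers here are made up for illustration):

```python
# Build the hex value for OVS-DPDK's other_config:pmd-cpu-mask
# from a list of CPU core IDs (core IDs below are just examples).
def pmd_cpu_mask(cores):
    mask = 0
    for core in cores:
        mask |= 1 << core  # one bit per core dedicated to a PMD thread
    return hex(mask)

# Dedicate cores 2 and 3 to PMD threads, leaving the rest for VMs:
#   ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xc
print(pmd_cpu_mask([2, 3]))  # -> 0xc
```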

>
>
> Why do you think having more than 64k per PMD would be optimal?
>
I originally thought the bottleneck was in the classifier: once it is
saturated, lookups have to fall back to the OpenFlow tables, so I wondered
why not just increase the dpcls flow limit per PMD. But it seems I was
wrong, based on your explanation.

> What is your use case(s) ?
>
My use case would be setting up a vBRAS VNF with OVS-DPDK, a fairly typical
NFV scenario, and it requires good performance. However, OVS-DPDK still does
not seem to meet those needs compared with hardware offloading; we are
evaluating VPP as well. Basically, I am trying to find out what the
bottleneck in OVS-DPDK is so far (it seems to be in flow lookup), and
whether there are solutions being discussed or in progress.
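One way to see where lookups land is `ovs-appctl dpif-netdev/pmd-stats-show`,
which reports per-PMD EMC hits, megaflow (dpcls) hits, and misses. A sketch
of turning such counters into a lookup distribution (the counter values
below are invented for illustration):

```python
# Turn pmd-stats-show style counters into a lookup-distribution summary.
# The counter values passed in below are invented for illustration.
def lookup_ratios(emc_hits, megaflow_hits, misses):
    total = emc_hits + megaflow_hits + misses
    return {
        "emc": emc_hits / total,        # cheapest path: exact-match cache
        "dpcls": megaflow_hits / total, # wildcard classifier lookup
        "upcall": misses / total,       # slowest path: miss up to ofproto
    }

ratios = lookup_ratios(emc_hits=900_000, megaflow_hits=90_000, misses=10_000)
print(ratios)  # a low "emc" share means lookups are falling through to dpcls
```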

> Do you want this number to be larger by default?
>
I am not sure; I need to understand whether setting it larger is good or
bad.

> Do you want this number to be configurable?
>
Probably good.

>
>
BTW, after reading part of the DPDK documentation, it stresses reducing
copies between cache and memory and getting cache hits as often as possible,
so that fewer CPU cycles are spent fetching data. But now I am totally lost
on how the OVS-DPDK EMC and classifier map onto the LLC.
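As a back-of-envelope check using the figures mentioned in this thread (the
per-entry size is inferred from the "8k entries, ~4MB" EMC figure, not taken
from the OVS source):

```python
# Back-of-envelope cache-footprint arithmetic using figures from the
# thread; the ~512B EMC entry size is inferred from "8k entries, ~4MB".
EMC_ENTRIES = 8192
EMC_BYTES_PER_ENTRY = 512          # inferred: ~4MB / 8192 entries
LLC_BYTES = 35 * 1024 * 1024       # the DUT mentioned below: 35MB LLC
CORES = 28

emc_footprint_mb = EMC_ENTRIES * EMC_BYTES_PER_ENTRY / (1024 * 1024)
llc_share_mb = LLC_BYTES / CORES / (1024 * 1024)

print(f"EMC per PMD: ~{emc_footprint_mb:.0f}MB")  # ~4MB per PMD thread
print(f"LLC per core: {llc_share_mb:.2f}MB")      # 1.25MB, not 2.5MB
```

So even one PMD's EMC alone exceeds a fair per-core share of that LLC,
before counting dpcls structures and packet data.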

>
>
> On Thu, Jun 29, 2017 at 10:47 PM, Darrell Ball <dball at vmware.com> wrote:
>
> Q: “How is such an exact number calculated?”
>
> A: It is a reasonable number to accommodate many cases.
>
> Q: “If more ports are added for polling, can I increase the 64k size to a
> bigger one to avoid contention?”
>
> A: If a larger number is needed, it may imply that adding another PMD and
> dividing the forwarding work would be best. Even a smaller number of flows
> may be better served with more PMDs.
>
> On 6/29/17, 7:23 AM, "ovs-discuss-bounces at openvswitch.org on behalf of
> Bodireddy, Bhanuprakash" <ovs-discuss-bounces at openvswitch.org on behalf
> of bhanuprakash.bodireddy at intel.com> wrote:
>
>     >
>
>     >I guess the answer is that the typical LLC is 2.5MB per core, so
>     >there are 64k flows per thread.
>
>
>
>     AFAIK, the number of flows here may not have anything to do with the
> LLC. Also, there is the EMC cache (8k entries), ~4MB per PMD thread.
>
>     Yes, performance will be good with simple test cases (P2P with 1 PMD
> thread), as most of this fits into the LLC. But in real scenarios OvS-DPDK
> can be memory-bound.
>
>
>
>     BTW, my DUT has a 35MB LLC and 28 cores, so the assumption of
> 2.5MB/core isn't right (here it is 1.25MB/core).
>
>
>
>     - Bhanuprakash.
>
>
>
>     >
>
>     >On Fri, Jun 23, 2017 at 11:15 AM, Hui Xiang <xianghuir at gmail.com>
> wrote:
>
>     >Thanks Darrell,
>
>     >
>
>     >More questions:
>     >
>     >Why not allocate 64k for each dpcls? Does the 64k just fit into the
>     >L3 cache, or somewhere else? How was such an exact number
>     >calculated? If more ports are added for polling, can I increase the
>     >64k size to a bigger one to avoid contention? Thanks.
>     >
>     >Hui.
>
>     _______________________________________________
>     discuss mailing list
>     discuss at openvswitch.org
>     https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
>
>

