[ovs-discuss] ovs-vswitchd port limit with OpenStack

William Konitzer wkonitzer at mirantis.com
Wed May 8 18:00:08 UTC 2019

Hi Ben and Flavio,

Thanks for responding. I should clarify that the performance problem is with the control plane, not the data plane.

What we’re encountering is more like the problem described here
https://mail.openvswitch.org/pipermail/ovs-discuss/2014-September/034907.html

We’re using OpenStack Neutron (via the ovsdbapp library) to update the OVSDB, and networks are provisioned as customers are added to the cloud. Since we don’t know in advance what a customer will create, we can’t bulk-provision. What we’re finding is that once we have about 1,500 ports, adding new networks becomes extremely slow because the ovsdb transactions take a long time to complete.
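(For reference, here is a rough way to observe the slowdown from the shell, independent of Neutron. This is only a sketch: it assumes a bridge named br-int, uses hypothetical port names, and needs to run as root on the gateway node.)

```shell
#!/bin/sh
# Time each ovs-vsctl port addition as the port count grows.
# Port names "test-port-N" are hypothetical, chosen for this sketch.
for i in $(seq 1 2000); do
    /usr/bin/time -f "port $i: %e s" \
        ovs-vsctl add-port br-int "test-port-$i" \
        -- set Interface "test-port-$i" type=internal
done
```

If the per-port time climbs noticeably past a certain count, that points at the control plane (ovsdb / ovs-vswitchd reconfiguration) rather than the datapath.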

Is this something you’ve come across before? We’re using Open vSwitch 2.8.0. I’m open to upgrading, but I can’t see anything in the changelogs that immediately suggests an upgrade would help.

Kind regards,

> On May 2, 2019, at 2:49 PM, Flavio Leitner <fbl at sysclose.org> wrote:
> On Thu, May 02, 2019 at 04:44:42PM -0300, Flavio Leitner via discuss wrote:
>> On Tue, Apr 30, 2019 at 04:50:48PM -0700, Ben Pfaff wrote:
>>> On Fri, Apr 26, 2019 at 11:52:22AM -0500, William Konitzer wrote:
>>>> I'm reading
>>>> (http://www.openvswitch.org/support/dist-docs/ovs-vswitchd.8.txt
>>>> section LIMITS) and it says "Performance will degrade beyond 1,024
>>>> ports per bridge due to fixed hash table sizing.” Do we have a little
>>>> more info on what that means and what to expect for less experienced
>>>> users like myself?
>>> I think that this comment is now obsolete.  There was a fairly recent
>>> change that should have reduced the cost of a port.  The kernel hash
>>> table is still fixed in size but I don't think it's accessed on any fast
>>> path so I think in practice it doesn't matter.
>>>> The background here is we’re working with OpenStack and seeing
>>>> performance issues when lots of networks are created. Once we have
>>>> more than about 1500 ports on the br-int on a gateway node it seems to
>>>> take a long time to add new ports.
>> You might want to bump the default netdev_max_backlog because that
>> is the maximum amount of packets queued. So, if you have too many
>> ports, there will be either packet loss, or slow path'ed traffic.
> To clarify, it depends on the actions. If you are using action
> NORMAL and there is a broadcast for example, all ports need a
> packet copy, which means more than 1k packets will be queued.
> IIRC OvS will slow path this case to prevent packet loss in
> the recent versions.
> fbl
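
(Flavio’s suggestion above refers to the kernel’s per-CPU receive backlog. A minimal sketch of inspecting and raising it, assuming the 8192 value is just an illustrative starting point to tune for your workload:)

```shell
# netdev_max_backlog caps the per-CPU queue of packets received
# faster than the kernel can process them (default is typically 1000).
sysctl net.core.netdev_max_backlog

# Raise it for the running system (example value, tune as needed):
sudo sysctl -w net.core.netdev_max_backlog=8192

# Persist across reboots (file name is arbitrary):
echo 'net.core.netdev_max_backlog = 8192' | \
    sudo tee /etc/sysctl.d/99-netdev-backlog.conf
```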

