[ovs-discuss] Connecting two servers over layer 2 openvswitch without encapsulation

Gilbert Standen gilstanden at hotmail.com
Fri Nov 13 06:02:35 UTC 2020


Ben, when I build a clustered database on a layer 3 network, I can push packets between the database nodes at MTU 1500, or at MTU 9000 if I set up jumbo frames.  But when I cluster databases over a layer 2 overlay, I have to drop to something like MTU 1420 or MTU 8920 to leave room for the encapsulation headers, and in my experience databases typically do not like that and do not play well with it.  That is really what I am asking about.  As far as I know, encapsulating the packets and giving up that MTU headroom is fundamental to running layer 2 over layer 3, about as inescapable as E=mc^2.
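For context on where those numbers come from: the overlay MTU is just the underlay MTU minus the tunnel header overhead.  A quick back-of-the-envelope sketch in Python (the header sizes are the usual textbook values, not anything measured on my setup; the 1420/8920 figures presumably just allow some extra headroom):

    # Rough sketch of the MTU arithmetic behind the 1420/8920 figures above.
    # Header sizes are common textbook values (assumed, not from this thread);
    # real overhead depends on the tunnel type and IPv4 vs. IPv6 underlay.
    INNER_ETHERNET = 14   # inner Ethernet header carried inside the tunnel
    IPV4, IPV6, UDP, VXLAN, GRE = 20, 40, 8, 8, 4

    def inner_mtu(underlay_mtu, overhead):
        """Largest IP packet the overlay can carry without fragmentation."""
        return underlay_mtu - overhead

    overheads = {
        "VXLAN/IPv4": INNER_ETHERNET + IPV4 + UDP + VXLAN,  # 50 bytes
        "VXLAN/IPv6": INNER_ETHERNET + IPV6 + UDP + VXLAN,  # 70 bytes
        "GRE/IPv4":   INNER_ETHERNET + IPV4 + GRE,          # 38 bytes
    }
    for name, oh in overheads.items():
        print(f"{name}: underlay 1500 -> overlay {inner_mtu(1500, oh)}, "
              f"underlay 9000 -> overlay {inner_mtu(9000, oh)}")

The flip side of the same arithmetic is that if the physical network's MTU can be raised by roughly the overhead (for example to about 1550 bytes for VXLAN over IPv4), the overlay can keep the full 1500-byte MTU that the databases want.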
________________________________
From: Ben Pfaff <blp at ovn.org>
Sent: Thursday, November 12, 2020 11:44 PM
To: Gilbert Standen <gilstanden at hotmail.com>
Cc: ovs-discuss at openvswitch.org <ovs-discuss at openvswitch.org>
Subject: Re: [ovs-discuss] Connecting two servers over layer 2 openvswitch without encapsulation

On Fri, Nov 13, 2020 at 05:29:10AM +0000, Gilbert Standen wrote:
> So this may be a very dumb question, but is there any way to connect
> two separate physical servers, each running a layer 2 Open vSwitch
> bridge, over a physical layer 3 network without using an encapsulation
> scheme (e.g. GRE, VXLAN, etc.)?  Many workloads, such as databases, are
> somewhat hobbled by having to use an MTU other than 1500 or 9000.  I'm
> asking this probably dumb question because I cannot see how it is
> possible to avoid encapsulation when pushing data between servers over
> a layer 2 network running on a physical layer 3 network, but I thought
> I would ask anyway.  Thanks, Gilbert

It appears that you have described what networks do anyway.  No special
configuration is needed.