[ovs-dev] [PATCH v2] ovn: Add a new logical switch port type - 'virtual'

Numan Siddique nusiddiq at redhat.com
Wed May 15 18:55:06 UTC 2019


On Thu, May 16, 2019 at 12:10 AM Han Zhou <zhouhan at gmail.com> wrote:

>
>
> On Wed, May 15, 2019 at 11:36 AM <nusiddiq at redhat.com> wrote:
> >
> > From: Numan Siddique <nusiddiq at redhat.com>
> >
> > This new type is added for the following reasons:
> >
> >   - When a load balancer is created in an OpenStack deployment with the
> >     Octavia service, it creates a logical port 'VIP' for the virtual IP.
> >
> >   - This logical port is not bound to any VIF.
> >
> >   - The Octavia service creates a service VM (with another logical port
> >     'P' which belongs to the same logical switch).
> >
> >   - The virtual IP 'VIP' is configured on this service VM.
> >
> >   - This service VM provides the load balancing for the VIP with the
> >     configured backend IPs.
> >
> >   - The Octavia service can be configured to create a few service VMs in
> >     active-standby mode, with the active VM configured with the VIP.  The
> >     VIP can move between these service nodes.
> >
> > Presently there are a few problems:
> >
> >   - When a floating IP (externally reachable IP) is associated to the VIP
> >     and the compute nodes have external connectivity, external traffic
> >     cannot reach the VIP using the floating IP, as the VIP logical port
> >     would be down.  The dnat_and_snat entry in the NAT table for this VIP
> >     will have 'external_mac' and 'logical_port' configured.
> >
> >   - The only way to make it work is to clear the 'external_mac' entry so
> >     that the gateway chassis does the DNAT for the VIP.
> >
> > To solve these problems, this patch proposes a new logical port type -
> > virtual.  The CMS, when creating the logical port for the VIP, should
> >
> >  - set the type as 'virtual'
> >
> >  - configure the VIP in the newly added column
> >    Logical_Switch_Port.virtual_ip
> >
> >  - and set the virtual parents in the newly added column
> >    Logical_Switch_Port.virtual_parents.  These virtual parents are the
> >    ones which can be configured with the VIP.
> >
> > Suppose the virtual_ip is configured to 10.0.0.10 on a virtual logical
> > port 'sw0-vip' and the virtual_parents are set to [sw0-p1, sw0-p2]; then
> > the below logical flows are added in the ls_in_arp_rsp logical switch
> > pipeline:
> >
> >  - table=11(ls_in_arp_rsp), priority=100,
> >    match=(inport == "sw0-p1" && ((arp.op == 1 && arp.spa == 10.0.0.10 && arp.tpa == 10.0.0.10) ||
> >                                  (arp.op == 2 && arp.spa == 10.0.0.10))),
> >    action=(bind_vport("sw0-vip", inport); next;)
> >  - table=11(ls_in_arp_rsp), priority=100,
> >    match=(inport == "sw0-p2" && ((arp.op == 1 && arp.spa == 10.0.0.10 && arp.tpa == 10.0.0.10) ||
> >                                  (arp.op == 2 && arp.spa == 10.0.0.10))),
> >    action=(bind_vport("sw0-vip", inport); next;)
> >
> > The action bind_vport will claim the logical port sw0-vip on the chassis
> > where this action is executed.  Since the port sw0-vip is claimed by a
> > chassis, the dnat_and_snat rule for the VIP will be handled by the
> > compute node.
> >
> > Signed-off-by: Numan Siddique <nusiddiq at redhat.com>
>
> Hi Numan, this looks interesting. I haven't reviewed the code yet, but just
> some questions to better understand the feature.
>
> Firstly, can Octavia be implemented by using the distributed LB feature of
> OVN, instead of using a dedicated node? What's the major gap for using the
> OVN LB?
>
>
Yes, it's possible to use the native OVN LB feature.  There's already a
provider driver for OVN in Octavia (
https://github.com/openstack/networking-ovn/blob/master/networking_ovn/octavia/ovn_driver.py
).  When creating the LB, passing the option --provider-driver=ovn will
create an OVN LB.

However, OVN LB is limited to L4 and there are no health checks.  The
Octavia amphora driver supports lots of features like L7, health checks,
etc.  I think we should definitely look into adding a health monitor
feature for OVN LB, but I think supporting L7 LBs is out of the question
for OVN LB.  For complex load balancer needs, I think it's better to rely
on external load balancers like the Octavia amphora driver.  The amphora
driver creates a service VM and runs a haproxy instance inside it to
provide the load balancing.
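
For example, a native OVN L4 load balancer can be set up with ovn-nbctl
along these lines (the VIP, backends and switch name here are just
illustrative):

  # Create an L4 load balancer mapping the VIP to two backends.
  ovn-nbctl lb-add lb0 10.0.0.10:80 10.0.0.11:80,10.0.0.12:80
  # Attach it to the logical switch so the LB is applied in that
  # switch's pipeline.
  ovn-nbctl ls-lb-add sw0 lb0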

> Secondly, how is associating the floating-ip with the VIP configured
> currently?
>

networking-ovn creates a dnat_and_snat entry when a floating IP is
associated with the VIP port.  Right now it doesn't set the external_mac
and logical_port columns for DVR deployments, and this has been a
drawback.
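
For context, such a NAT entry would look roughly like this (addresses and
names are illustrative; with this patch in place, the logical_port would
be the virtual port and external_mac its MAC address):

  # dnat_and_snat entry with logical_port and external_mac set, so that
  # the NAT for the floating IP is handled on the chassis where the
  # logical port is bound.
  ovn-nbctl lr-nat-add lr0 dnat_and_snat 172.24.4.100 10.0.0.10 sw0-vip 00:00:00:00:00:10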

>
> Thirdly, can a static route be used to route the VIP to the VM, instead of
> creating an lport for the VIP? I.e. create a route in the logical router:
> destination - VIP, next hop - service VM IP.
>
>
I am not sure about this one.  I think it may work in the scenario where
the Octavia amphora driver creates just one instance of the service VM.

However, the amphora driver also provides the option of HA for the VIP.
It creates multiple service VMs in active-standby mode, with HA managed by
keepalived.  The master VM configures the VIP and runs the haproxy
instance.  If the master VM goes down, then the keepalived cluster will
choose another master and configure the VIP there.  I am not sure whether
the static route option would work in this scenario, as the service VM IP
could change.
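
For comparison, the static route suggested above would be something like
the following (the router name and service VM IP are illustrative):

  # Route the VIP to the current service VM; this would need to be
  # updated whenever the active service VM changes.
  ovn-nbctl lr-route-add lr0 10.0.0.10/32 10.0.0.5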

When a standby service VM becomes the master, the keepalived running there
sends a GARP for the VIP.  That's the reason I took the approach of binding
the VIP port when an ARP packet is seen for the VIP.
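
As an illustration, the keepalived VRRP instance on the service VMs would
carry the VIP roughly like the sketch below (the values are made up); on
failover the new MASTER configures 10.0.0.10 locally and announces it with
a GARP, which is exactly the packet the bind_vport flows key on:

  vrrp_instance octavia_vip {
      state BACKUP            # priority decides which node becomes MASTER
      interface eth0
      virtual_router_id 51
      priority 100
      advert_int 1
      virtual_ipaddress {
          10.0.0.10           # the VIP; configured on whichever node is MASTER
      }
  }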

I hope this approach seems reasonable to you :) when you take a look at
this patch.

Thanks
Numan

> Thanks,
> Han
>

