[ovs-dev] OVN: RFC re: logical and physical endpoint separation proposal

Mickey Spiegel emspiege at us.ibm.com
Tue Feb 16 22:32:48 UTC 2016


Darrell,

Just catching up on this thread. A few things are still unclear.

The example that you gave bound the one "localnet" logical port to one physical endpoint. Perhaps this is what you are intending for the L3 gateway case (still waiting for that proposal).

In existing OVN, VMs can connect directly to provider networks, requiring the "localnet" logical port to be instantiated on each ovn-controller, i.e. there are multiple chassis/chassis-port bindings, each one done locally on each hypervisor based on its local ovn-bridge-mappings configuration.
Does your proposal support this case?
If so, which of the following do you do?
1. The chassis and chassis_port columns are empty, and ovn-bridge-mappings still needs to be configured on each hypervisor.
2. There is a list of phys_endpts for each localnet port, one per chassis, replacing the bridge mapping configured on each hypervisor?

What is the relationship between the "chassis" column in port bindings and the "chassis" column in physical endpoints?
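To make option 2 above concrete, here is a rough Python sketch of how a localnet port bound to a per-chassis list of physical endpoints might be resolved. This is purely illustrative: the phys_endpt fields (chassis, chassis_port, encap, vlan) are assumptions pieced together from this thread, not an existing OVN schema or API.

```python
# Illustrative model only: "phys_endpt" rows keyed by (chassis, chassis_port),
# as sketched in this thread; not real OVN schema or tooling.

# One phys_endpt row per chassis for the same localnet logical port.
phys_endpts = [
    {"name": "endpt_hv1", "chassis": "hv1", "chassis_port": "eth1",
     "encap": "single_vlan", "vlan": 42},
    {"name": "endpt_hv2", "chassis": "hv2", "chassis_port": "eth1",
     "encap": "single_vlan", "vlan": 42},
]

# The localnet logical port is bound to the whole list, not one endpoint.
localnet_binding = {"lport": "provnet1-physnet1", "endpts": phys_endpts}

def resolve_endpoint(binding, local_chassis):
    """Pick the phys_endpt that applies on a given hypervisor, analogous to
    how each ovn-controller today consults its own ovn-bridge-mappings."""
    for ep in binding["endpts"]:
        if ep["chassis"] == local_chassis:
            return ep
    return None  # localnet is not instantiated on this chassis

ep = resolve_endpoint(localnet_binding, "hv2")
print(ep["name"], ep["vlan"])
```

Under this reading, the per-binding "chassis" column would become redundant with the per-endpoint chassis field, which is exactly the relationship question above.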
For L2 gateway, I think I am beginning to understand how this would work. The L2 gateway still has to populate MACs into the southbound DB.

For L3 gateway, without a detailed proposal, I don't know how this fits yet.
Are you adding a new port type for L3 external gateway ports?
Are those ports bound to a chassis rather than run locally on each ovn-controller?
Are the provider networks run on only one chassis rather than each ovn-controller?

To the extent that this proposal is meant to replace the “tag” column with something more generic that can support different encapsulations, this is a very good thing. As Kyle mentioned, we are interested in supporting VXLAN from OpenStack/OVN to upstream physical routers.
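For instance, if the endpoint type were generalized beyond single_vlan, one could imagine something along the following lines. This is hypothetical syntax extrapolated from the phys-endpt-add commands sketched earlier in this thread; the "vxlan" type and the VNI argument are my assumptions, not proposed syntax.

```shell
# Hypothetical: a VXLAN physical endpoint in place of a VLAN tag.
# The "vxlan" encapsulation type and VNI 5001 are illustrative only.
ovn-sbctl phys-endpt-add endpt_1 chassis_0 chassis_port_0 vxlan 5001 5001
ovn-sbctl lport-bind-phys-endpt ls0-port2 endpt_1
```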

Mickey


-----"dev" <dev-bounces at openvswitch.org> wrote: -----
To: Russell Bryant <russell at ovn.org>, Darrell Lu <dlu998 at gmail.com>, "dev at openvswitch.org" <dev at openvswitch.org>
From: Darrell Ball 
Sent by: "dev" 
Date: 02/11/2016 01:51PM
Subject: Re: [ovs-dev] [OVS-dev]: OVN: RFC re: logical and physical endpoint separation proposal

On 2/11/16, 12:20 PM, "Russell Bryant" <russell at ovn.org> wrote:


>On 02/10/2016 09:56 PM, Darrell Ball wrote:
>> Hi Russell
>> 
>> Please see inline
>> 
>> Thanks Darrell
>> 
>> 
>> 
>> On 2/8/16, 12:38 PM, "Russell Bryant" <russell at ovn.org> wrote:
>> 
>>> On 02/08/2016 12:05 PM, Darrell Ball wrote:
>>>> On 2/5/16, 12:23 PM, "Russell Bryant" <russell at ovn.org> wrote:
>>>>> I agree with this sort of separation in principle.  Some specific
>>>>> examples would help me understand the proposal, though.  You mention
>>>>> that this applies to both localnet and gateway cases.  Can we lay out
>>>>> some clear workflows before and after the proposed changes?
>>>>>
>>>>> The simplest localnet example would be connecting a single VM to a
>>>>> physical network locally attached to a hypervisor.
>>>>>
>>>>> On the hypervisor running, ovn-controller, we set:
>>>>>
>>>>>    $ ovs-vsctl set open . \
>>>>>    > external-ids:ovn-bridge-mappings=physnet1:br-eth1
>>>>>
>>>>> Then, we set up the logical connectivity with:
>>>>>
>>>>>    $ ovn-nbctl lswitch-add provnet1
>>>>>
>>>>>    $ ovn-nbctl lport-add provnet1 provnet1-lp1
>>>>>    $ ovn-nbctl lport-set-addresses provnet1-lp1 $MAC
>>>>>    $ ovn-nbctl lport-set-port-security provnet1-lp1 $MAC
>>>>>
>>>>>    $ ovn-nbctl lport-add provnet1 provnet1-physnet1
>>>>>    $ ovn-nbctl lport-set-addresses provnet1-physnet1 unknown
>>>>>    $ ovn-nbctl lport-set-type provnet1-physnet1 localnet
>>>>>    $ ovn-nbctl lport-set-options provnet1-physnet1 \
>>>>>    > network_name=physnet1
>>>>>
>>>>> Then we can create the VIF on the hypervisor like usual.
>>>>>
>>>>> How does your proposal modify the workflow for this use case?
>>>>
>>>> Localnet case: The NB programming is unchanged, as intended.
>>>>  
>>>> The SB programming using sb-ctl in lieu of CMS might be of
>>>> the form below.
>>>
>>> In this case, the CMS is only interfacing with the NB database.
>>>
>>>> This example assumes that we use the legacy endpoint type of
>>>> single_vlan and vlan 42 is used on chassis_port_0 on chassis_only
>>>> (which is our HV in this example).
>>>>
>>>> ovn-sbctl phys-endpt-add endpt_0 chassis_only chassis_port_0 single_vlan 42 42
>>>>
>>>>  
>>>> ovn-sbctl lport-bind-phys-endpt provnet1-physnet1 endpt_0
>>>
>>> I'm sorry if I'm being dense, but I'm afraid that I don't understand
>>> what this is replacing.
>
>Note the above question.
>
>
>>>
>>>>> It would be nice to see the same sort of thing for gateways.  The
>>>>> OpenStack driver already has code for the current vtep gateway
>>>>> integration.  We set vtep_logical_switch and vtep_physical_switch on a
>>>>> logical port.  What new workflow would we need to implement?
>>>>
>>>>
>>>> Gateway case: Consider ls0-port2 is a logical endpt on a gateway
>>>>
>>>>  
>>>> ovn-nbctl lswitch-add ls0
>>>> .
>>>> .
>>>> ovn-nbctl lport-add ls0 ls0-port2
>>>> .
>>>> .
>>>> ovn-nbctl lport-set-addresses ls0-port2 52:54:00:f3:1c:c6
>>>> .
>>>> .
>>>> ovn-nbctl lport-set-type ls0-port2 vtep
>>>>
>>>>  
>>>> The NB programming lport-set-options, of the form:
>>>> “ovn-nbctl lport-set-options ls0-port2 vtep-physical-switch=br-int vtep-logical-switch=ls0”
>>>> could be omitted and the same information could be derived from
>>>> other logical/physical binding. SB programming semantics, assuming that we use
>>>> the legacy endpoint type and vlan 42 is used on chassis_port_0 on chassis_0 (a gateway):
>>>>
>>>>
>>>> ovn-sbctl phys-endpt-add endpt_0 chassis_0 chassis_port_0 single_vlan 42 42
>>>>
>>>>
>>>> ovn-sbctl lport-bind-phys-endpt ls0-port2 endpt_0
>>>
>>> Is this right?
>>>
>>> 1) We're dropping the use of vtep-physical-switch and
>>> vtep-logical-switch options and instead getting the same information
>>> from logical-to-physical mappings in the southbound database.
>> 
>> That's the proposal.
>> The logical port association to
>> 1) the vtep physical switch can be derived from the port_binding/chassis tables in the SB DB
>> 2) the vtep logical switch can come down to the SB DB via information in the
>> NB DB Logical Switch/Logical Port tables
>> 
>> 
>>>
>>> 2) (I'm less sure on this part) We're replacing direct management of the
>>> hardware_vtep schema with defining endpoints in the physical endpoint
>>> table in OVN's southbound db?
>> 
>> 
>> For the SW gateway, we don't plan to support the hardware_vtep schema and will use a
>> common code path b/w gateway and HV transport nodes as much as possible. Hence the SB DB is one option
>> to house the physical endpt table which is closely associated with the port binding table.
>> The gateway or gateway pair/cluster supports the overall network.
>
>Are you planning on dropping hardware_vtep support? Are there two
>separate workflows (software gateways vs hardware_vtep)?


There are two separate workflows for SW and HW gateways.

Hardware_vtep support remains for hardware gateways; the vtep schema
will certainly evolve as well to support the hardware gateways.
Since there is only minimal usage of the VTEP schema today from a
SW gateway POV, in the form of the "vtep-emulator", not much is lost by abandoning
the VTEP schema w.r.t. the new software gateway development.

There will be some loss of OVN gateway reference behavior for the hardware vendors,
since SW and HW gateways will work from different DB schemas. But since hardware
vendor designs/implementations differ among themselves and from SW approaches,
little value is lost by splitting SW and HW support.


>
>If you think it'd be easier to just proceed with your implementation and
>then it will be easier to understand, that's fine with me.

Ok, thanks


>
>-- 
>Russell Bryant
_______________________________________________
dev mailing list
dev at openvswitch.org
http://openvswitch.org/mailman/listinfo/dev




