[ovs-discuss] OpenStack and OVN integration is failing on multi-node physical machines (probably a bug)

Numan Siddique nusiddiq at redhat.com
Wed May 31 07:17:53 UTC 2017


Hi Pranab,

Please keep the mailing list CC'ed in your replies.

Please see below for comments.



On Sat, May 27, 2017 at 11:30 AM, pranab boruah <pranabjyotiboruah at gmail.com
> wrote:

> Thanks Numan for the reply. I modified the systemd service file of the
> neutron server and made sure that it starts only after the ovn-northd
> service is up and running.
>
> I am able to launch VMs now.
> But the VMs don't get a DHCP IP. Are there any logs relevant to the OVN
> native DHCP server that I can look at?
>

Since you are using Newton, you probably need to set ovn_native_dhcp=True
in /etc/neutron/plugin.ini or /etc/neutron/plugins/ml2/ml2_conf.ini.
Otherwise you are expected to use the DHCP agent.
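
For reference, a minimal sketch of what that section could look like (the
option names follow the networking-ovn Newton documentation, but please
double-check them against the version you have installed; the IP is a
placeholder for wherever your OVN NB/SB ovsdb-servers listen):

    [ovn]
    ovn_nb_connection = tcp:192.168.10.10:6641
    ovn_sb_connection = tcp:192.168.10.10:6642
    ovn_native_dhcp = True

With ovn_native_dhcp left at its default you are expected to keep running
the regular neutron DHCP agent instead.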



>
> I have another question:
> There are two ways to include the OVN-specific configuration. One way is
> to add a new [ovn] section in the /etc/neutron/plugin.ini file. The second
> way is to modify the /etc/neutron/plugins/networking-ovn/networking-ovn.ini
> file. Which is the right file to modify, and if I have included the OVN
> configuration in both files, which one takes precedence?
>

It is better to use /etc/neutron/plugin.ini or
/etc/neutron/plugins/ml2/ml2_conf.ini. I think the last config file passed
to neutron-server overrides the values from the earlier ones.
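
To illustrate why the last file wins (this is generic oslo.config behaviour;
the exact unit file and paths depend on your packaging), neutron-server is
typically started with several --config-file arguments, and an option set in
a later file overrides the same option from an earlier one:

    # roughly what the neutron-server service runs (paths are illustrative)
    /usr/bin/neutron-server \
        --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugin.ini

On RDO/CentOS, /etc/neutron/plugin.ini is usually a symlink to
/etc/neutron/plugins/ml2/ml2_conf.ini, so editing either one should end up
in the same place.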


> After all these issues with the setup, we are planning to build a TripleO
> setup. I remember Russell Bryant mentioning that there is a Heat template
> for OVN. We are planning to use that. Are there any caveats/guides you would
> recommend for TripleO OVN integration? It would be really useful.
>
>
You need to include
/usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-ovn.yaml
[1] in the environment files when calling openstack overcloud deploy,
and it should work.
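
Roughly, the deploy command would look something like this (your other
environment files and options will differ; this is only meant to show where
the -e argument goes):

    openstack overcloud deploy --templates \
        -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-ovn.yaml \
        -e <your other environment files>
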
I would suggest using Ocata or master and OVS 2.7 in order to have a
successful deployment.
You can virt-customize your overcloud image and update OVS if you like.
There is a small script here [2] which does that. You can have a look at it
if you want.
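
If you go the virt-customize route, the core of it is something along these
lines (the script in [2] is more complete; the rpm paths below are just
placeholders for wherever you keep the updated OVS/OVN packages):

    # copy updated OVS/OVN rpms into the overcloud image and install them
    virt-customize -a overcloud-full.qcow2 \
        --copy-in /path/to/ovs-rpms:/tmp \
        --run-command 'rpm -Uvh /tmp/ovs-rpms/*.rpm'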


Thanks
Numan



[1] -
https://github.com/openstack/tripleo-heat-templates/blob/master/environments/neutron-ml2-ovn.yaml

[2] -
https://github.com/numansiddique/overcloud_image_for_ovn/blob/master/build_ovn_oc_image.sh



> Thanks
> Pranab
>
>
> On May 24, 2017 23:38, "Numan Siddique" <nusiddiq at redhat.com> wrote:
>
>
>
> On Tue, May 23, 2017 at 6:48 PM, pranab boruah <
> pranabjyotiboruah at gmail.com> wrote:
>
>> Hi,
>> We are building a multi-node physical set-up of OpenStack Newton. The
>> goal is to finally integrate the set-up with OVN.
>> Lab details:
>> 1 Controller, 2 computes
>>
>> CentOS 7.3, OpenStack Newton, separate networks for mgmt and tunnel traffic
>> OVS version: 2.6.1
>>
>> I followed the following guide to deploy OpenStack Newton using the
>> PackStack utility:
>>
>> http://networkop.co.uk/blog/2016/11/27/ovn-part1/
>>
>> Before I started integrating with OVN, I made sure that the set-up (ML2
>> and OVS) was working by launching VMs. VMs across compute nodes were
>> able to ping each other.
>>
>> Now, I followed the official guide for OVN integration:
>>
>> http://docs.openstack.org/developer/networking-ovn/install.html
>>
>> Error details:
>> The neutron server log shows:
>>
>>  ERROR networking_ovn.ovsdb.impl_idl_ovn [-] OVS database connection
>> to OVN_Northbound failed with error: '{u'error': u'unknown database',
>> u'details': u'get_schema request specifies unknown database
>> OVN_Northbound', u'syntax': u'["OVN_Northbound"]'}'. Verify that the
>> OVS and OVN services are available and that the 'ovn_nb_connection'
>> and 'ovn_sb_connection' configuration options are correct.
>>
>> The issue is that ovsdb-server on the controller binds to port 6641
>> instead of 6640.
>>
>>
>
> Hi Pranab,
> Normally I have seen this happening when neutron-server (i.e. the
> networking-ovn ML2 driver) tries to connect to the OVN northbound
> ovsdb-server (on port 6641) and fails (mainly because the OVN NB db
> ovsdb-server is not running). In that case the code here [1] runs
> "ovs-vsctl add-connection ptcp:6641:..", which causes the main ovsdb-server
> (for conf.db) to listen on port 6641.
>
> Can you make sure that the ovsdb-servers for OVN are running before
> neutron-server is started?
>
> Maybe, to see if it works, you can run "ovs-vsctl del-manager" and then run
> "netstat -putna | grep 6641" and verify that the OVN NB db ovsdb-server
> listens on 6641.
>
> [1] - https://github.com/openstack/neutron/blob/stable/newton/neutron/agent/ovsdb/native/connection.py#L82
>       https://github.com/openstack/neutron/blob/stable/newton/neutron/agent/ovsdb/native/helpers.py#L41
>
> Thanks
> Numan
>
>> #  netstat -putna | grep 6641
>>
>> tcp        0      0 192.168.10.10:6641      0.0.0.0:*
>> LISTEN      809/ovsdb-server
>>
>> # netstat -putna | grep 6640 (shows no output)
>>
>> Now, the OVN NB DB tries to listen on port 6641, but since that port is
>> already used by the main ovsdb-server, it is unable to. The PID of
>> ovsdb-server is 809, while the PID of the OVN NB DB is 4217.
>>
>> The OVN NB DB log shows this:
>>
>> 2017-05-23T12:58:09.444Z|01421|ovsdb_jsonrpc_server|ERR|ptcp:6641:0.0.0.0
>> :
>> listen failed: Address already in use
>> 2017-05-23T12:58:11.946Z|01422|socket_util|ERR|6641:0.0.0.0: bind:
>> Address already in use
>> 2017-05-23T12:58:14.448Z|01423|socket_util|ERR|6641:0.0.0.0: bind:
>> Address already in use
>>
>> Solutions I tried:
>> 1) Completely reinstalling everything from scratch.
>> 2) Tried with OVS 2.6.0 and 2.7; the same issue occurs with all of them.
>> 3) Checked and verified that the SB and NB configuration options in
>> plugin.ini are correct.
>>
>> Please help. Let me know if additional details are required.
>>
>> Thanks,
>> Pranab
>> _______________________________________________
>> discuss mailing list
>> discuss at openvswitch.org
>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>>
>
>
>