[ovs-discuss] How instance get metadata with OVN

Daniel Alvarez Sanchez dalvarez at redhat.com
Mon Sep 25 08:06:16 UTC 2017


Hi Vikrant,

On Sat, Sep 23, 2017 at 8:22 AM, Vikrant Aggarwal <ervikrant06 at gmail.com>
wrote:

> Hi Folks,
>
> I am trying to understand how instances get metadata when OVN is used as
> the mechanism driver. I read the theory in [1] but am not able to
> understand the practical implementation of the same.
>
> I created two private networks (internal1 and internal2); one of them
> (internal1) is connected to a router and the other one (internal2) is
> isolated.
>
> I spun up cirros instances using both networks. Both instances are able
> to get metadata on their networks.
>
> List of metadata related processes running on devstack node.
>
> ~~~
> stack at testuser-KVM:~/devstack$ ps -ef | grep -i metadata
> stack     1067     1  0 Sep22 ?        00:00:39 /usr/bin/python
> /usr/local/bin/networking-ovn-metadata-agent --config-file
> /etc/neutron/networking_ovn_metadata_agent.ini
> stack     1414  1067  0 Sep22 ?        00:00:17 /usr/bin/python
> /usr/local/bin/networking-ovn-metadata-agent --config-file
> /etc/neutron/networking_ovn_metadata_agent.ini
> stack     1415  1067  0 Sep22 ?        00:00:17 /usr/bin/python
> /usr/local/bin/networking-ovn-metadata-agent --config-file
> /etc/neutron/networking_ovn_metadata_agent.ini
> stack    25192     1  0 10:43 ?        00:00:00 haproxy -f /opt/stack/data/neutron/ovn-metadata-proxy/54f264d5-c2f5-409c-9bd2-dbcec52edffd.conf
> stack    27424     1  0 11:24 ?        00:00:00 haproxy -f /opt/stack/data/neutron/ovn-metadata-proxy/86eefb22-1417-407a-b56f-a1f3f147ee4e.conf
> ~~~
>
> Default content of neutron ovn metadata file.
>
> ~~~
> stack at testuser-KVM:~/devstack$ egrep -v "^(#|$)" /etc/neutron/networking_ovn_metadata_agent.ini
> [DEFAULT]
> state_path = /opt/stack/data/neutron
> metadata_workers = 2
> nova_metadata_ip = 192.168.122.98
> debug = True
> [ovs]
> ovsdb_connection = unix:/usr/local/var/run/openvswitch/db.sock
> [agent]
> root_helper_daemon = sudo /usr/local/bin/neutron-rootwrap-daemon
> /etc/neutron/rootwrap.conf
> [ovn]
> ovn_sb_connection = tcp:192.168.122.98:6642
> ~~~
>
> I don't see any NAT rule inside the network namespaces which could
> redirect requests destined for "169.254.169.254" to the nova metadata IP
> that is mentioned in the OVN metadata configuration file.
>
> ~~~
> stack at testuser-KVM:~/devstack$ sudo ip netns list
> ovnmeta-86eefb22-1417-407a-b56f-a1f3f147ee4e (id: 1)
> ovnmeta-54f264d5-c2f5-409c-9bd2-dbcec52edffd (id: 0)
> stack at testuser-KVM:~/devstack$ sudo ip netns exec ovnmeta-86eefb22-1417-407a-b56f-a1f3f147ee4e iptables -t nat -L
> Chain PREROUTING (policy ACCEPT)
> target     prot opt source               destination
>
> Chain INPUT (policy ACCEPT)
> target     prot opt source               destination
>
> Chain OUTPUT (policy ACCEPT)
> target     prot opt source               destination
>
> Chain POSTROUTING (policy ACCEPT)
> target     prot opt source               destination
> ~~~
>
> Content of the haproxy configuration file.
>
> ~~~
> root at testuser-KVM:~/devstack# cat /opt/stack/data/neutron/ovn-metadata-proxy/86eefb22-1417-407a-b56f-a1f3f147ee4e.conf
>
> global
>     log         /dev/log local0 debug
>     user        stack
>     group       stack
>     maxconn     1024
>     pidfile     /opt/stack/data/neutron/external/pids/86eefb22-1417-407a-b56f-a1f3f147ee4e.pid
>     daemon
>
> defaults
>     log global
>     mode http
>     option httplog
>     option dontlognull
>     option http-server-close
>     option forwardfor
>     retries                 3
>     timeout http-request    30s
>     timeout connect         30s
>     timeout client          32s
>     timeout server          32s
>     timeout http-keep-alive 30s
>
> listen listener
>     bind 0.0.0.0:80
>     server metadata /opt/stack/data/neutron/metadata_proxy
>     http-request add-header X-OVN-Network-ID 86eefb22-1417-407a-b56f-a1f3f147ee4e
> ~~~
>
> It seems like the isolated-metadata option is enabled by default in my
> setup, but I don't see any such setting in the neutron OVN configuration
> files. I suspect it is enabled because even when the network is not
> connected to a router, an instance spawned on the isolated network is
> able to get the metadata.
>

Metadata in OVN is implemented the way ML2/OVS implements it for the
isolated-networks case. Whether or not the network is connected to a
router, metadata is served locally on each node: a metadata agent runs on
every node, and one haproxy instance runs for each network for which that
chassis hosts a port.
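As a sketch of how you can see this on a node like yours (the namespace
and haproxy UUIDs are the network UUIDs from your own output; column names
may vary slightly between OVN versions, so treat the exact commands as an
assumption):

~~~
# One ovnmeta-<network_uuid> namespace and one haproxy per network
sudo ip netns list
ps -ef | grep haproxy

# Ports bound to this chassis in the southbound DB; a bound port on a
# network is what triggers spawning that network's metadata proxy
ovn-sbctl --columns=logical_port,chassis,datapath list Port_Binding
~~~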



> How is the instance able to get metadata in both cases, an isolated
> network and a network connected to a router?
>

Instances reach the metadata namespace through a static route which is
pushed via DHCP (see the contents of the DHCP_Options table and the output
of the route command on the instance itself). When traffic is directed to
169.254.169.254, instead of hitting the default route it is sent to the IP
address of the metadata port for that network. As I said, this is true
regardless of whether the network is connected to a router.
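For example, you can inspect the DHCP options for the subnet on the
controller, and on the instance check the route installed from DHCP option
121 (classless static routes). A sketch: the 10.0.0.2 metadata port IP and
10.0.0.1 gateway below are hypothetical values, not taken from your
output:

~~~
# On the controller: the static route pushed to instances on the subnet
ovn-nbctl list DHCP_Options
# options: {classless_static_route="{169.254.169.254/32,10.0.0.2, 0.0.0.0/0,10.0.0.1}", ...}

# On the cirros instance: 169.254.169.254 goes to the metadata port IP
ip route
# 169.254.169.254 via 10.0.0.2 dev eth0
~~~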

Hope this helps :)
Daniel



>
> [1] https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html
>
>
> Thanks & Regards,
> Vikrant Aggarwal
>
>
> _______________________________________________
> discuss mailing list
> discuss at openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
>