[ovs-discuss] OVS-IPsec - Network going down.

Rajak, Vishal Vishal.Rajak at commscope.com
Tue Nov 26 12:17:17 UTC 2019


Hi,

We are trying to bring up an IPsec-protected VXLAN tunnel between the two nodes of an OpenStack cluster in our lab environment.

Note: there are only two nodes in the cluster (a compute node and a controller node).

The following steps were used to bring up OVS with IPsec.

Link:  http://docs.openvswitch.org/en/latest/tutorials/ipsec/

Commands used on the controller node (IP: 10.2.2.1):

a. dnf install python2-openvswitch libreswan \
              "kernel-devel-uname-r == $(uname -r)"

b. yum install python-openvswitch   -- installs python-openvswitch-2.11.0-4.el7.x86_64, which has IPsec support.

c. Downloaded the Open vSwitch 2.11 RPMs and copied them onto the server.
d. Installed the Open vSwitch RPMs on the server:

     e.g. rpm -ivh openvswitch-ipsec-2.11.0-4.el7.x86_64.rpm   -- installs the openvswitch-ipsec RPM

e. iptables -A INPUT -p esp -j ACCEPT
f.  iptables -A INPUT -p udp --dport 500 -j ACCEPT

g. cp -r /usr/share/openvswitch/ /usr/local/share/
h. systemctl start openvswitch-ipsec.service

i. ovs-vsctl add-port br-ex ipsec_vxlan -- set interface ipsec_vxlan type=vxlan options:remote_ip=10.2.2.2 options:psk=swordfish
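For reference, the tunnel state can be checked after step i with something like the following (a sketch assuming the libreswan backend installed in step a; output is not reproduced here):

             # ask the OVS IPsec monitor which tunnels it is managing
             ovs-appctl -t ovs-monitor-ipsec tunnels/show

             # ask libreswan whether an IPsec SA is actually established
             ipsec status | grep -i established

If there is NAT between the two nodes, UDP port 4500 (IPsec NAT-T) may also need to be allowed in addition to ESP and UDP 500:

             iptables -A INPUT -p udp --dport 4500 -j ACCEPT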



Commands used on the compute node (IP: 10.2.2.2):

Link:  https://devinpractice.com/2016/10/18/open-vswitch-introduction-part-1/

a. ovs-vsctl add-br br-ex

b. ip link set br-ex up

c. ovs-vsctl add-port br-ex enp1s0f1

d. ip addr del 10.2.2.2/24 dev enp1s0f1

e. ip addr add 10.2.2.2/24 dev br-ex

f. ip route add default via 10.2.2.254 dev br-ex

g. The same IPsec configuration steps as done on the controller node above.
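The ip address and route changes above do not persist across a network service restart; a minimal sketch of equivalent network-scripts files (assuming the openvswitch initscripts integration, ifup-ovs/ifdown-ovs, is installed; contents are illustrative only):

     /etc/sysconfig/network-scripts/ifcfg-br-ex:
             DEVICE=br-ex
             DEVICETYPE=ovs
             TYPE=OVSBridge
             ONBOOT=yes
             BOOTPROTO=static
             IPADDR=10.2.2.2
             NETMASK=255.255.255.0
             GATEWAY=10.2.2.254

     /etc/sysconfig/network-scripts/ifcfg-enp1s0f1:
             DEVICE=enp1s0f1
             DEVICETYPE=ovs
             TYPE=OVSPort
             OVS_BRIDGE=br-ex
             ONBOOT=yes
             BOOTPROTO=none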

 After bringing up IPsec on the compute node, connectivity for the entire 10.2.2.0/24 network went down.
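A few checks that can help narrow down where such an outage comes from (illustrative; they assume the setup from the steps above):

             ovs-vsctl show               # is enp1s0f1 actually attached to br-ex?
             ip addr show br-ex           # did 10.2.2.2/24 end up on the bridge?
             ip route                     # is the default route still via br-ex?
             ovs-ofctl dump-flows br-ex   # is there a NORMAL flow forwarding traffic?
             ip xfrm policy               # are IPsec policies catching more traffic than intended?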

The following steps were taken to resolve the 10.2.2.0/24 network outage caused by creating the bridge on the compute node:

     a. Replicated the controller node's ovs-vsctl show output on the compute node by running the following commands:

             1. ovs-vsctl set-controller br-ex tcp:127.0.0.1:6633

             2. ovs-vsctl set Bridge br-ex fail-mode=secure

        After running the above two commands the compute node lost connectivity to the outside network (with fail-mode=secure the bridge forwards nothing until a controller installs flows), while the other servers on the 10.2.2.0/24 network came back up.

     b. Added patch ports between br-ex and br-int:

             1. ovs-vsctl add-port br-ex phy-br-ex -- set interface phy-br-ex type=patch options:peer=int-br-ex

             2. ovs-vsctl add-port br-int int-br-ex -- set interface int-br-ex type=patch options:peer=phy-br-ex

        After running these two commands as well, the compute node still could not reach the outside network.

     c. Compared the files under network-scripts on the compute node and the controller node and found some differences.

        The compute node did not have an ifcfg-br-ex file, so one was added. Some changes were also made to ifcfg-enp1s0f1 after comparing it with the same file on the controller node.

     d. Restarted the network service.

     e. After restarting the network service, the changes made with ovs-vsctl were removed and only the bridge br-ex created on the physical interface remained.

     f. The compute node could also ping the outside network again.

     g. Ran the command to establish the IPsec VXLAN tunnel:

          ovs-vsctl add-port br-ex ipsec_vxlan -- set interface ipsec_vxlan type=vxlan options:remote_ip=10.2.2.1 options:psk=swordfish

     h. After the ipsec_vxlan port was added, the network went down again.

     i. Removed the ipsec_vxlan port.

     j. Now the compute node has the bridge over the physical interface and it can ping the outside network as well.

     k. Tried pinging from a VM on the compute node to a VM on the controller node. The ping did not work.

        1. Removed the VM from the compute node and tried creating another instance. Creation of the new instance failed.

        2. Debugged the issue and found that neutron-openvswitch-agent was not running.

        3. Started neutron-openvswitch-agent again. After starting it, creation of the VM was successful.

        4. Still, the VMs are not pinging each other.

     l. Compared /etc/neutron/plugins/ml2/openvswitch_agent.ini on the controller and compute nodes and found some differences. After resolving those differences and restarting neutron-openvswitch-agent, the phy-br-ex port, the controller tcp:127.0.0.1:6633 and fail-mode=secure were automatically added to Bridge br-ex (a sketch of the relevant config section is shown below).
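For reference, a minimal sketch of the part of openvswitch_agent.ini that drives this wiring; the physical network name "provider" is only an illustration and must match whatever the controller node's file uses:

             [ovs]
             integration_bridge = br-int
             bridge_mappings = provider:br-ex

The agent reads this mapping and then creates the phy-br-ex/int-br-ex patch ports and sets the controller and fail-mode on br-ex, which matches what was observed above.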



IPsec is not established and the VMs are still not pinging each other.

Regards,
Vishal.