[ovs-discuss] Single-sided tunnel

Avi Cohen (A) avi.cohen at huawei.com
Thu Jan 26 15:56:53 UTC 2017


First, try to bring your bridge up (your port dump below shows the LOCAL(s1) port is PORT_DOWN / LINK_DOWN):
ifconfig s1 up
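
(If ifconfig isn't available, the iproute2 equivalent is "ip link set s1 up". For GRE the kernel also needs a routable source address for the outer packets, which in mininet usually means giving the bridge's internal port one. This is only an untested sketch: 192.168.255.5/24 is a made-up example address, so adjust it to your addressing plan, and from the mininet CLI prefix these with "sh".)

ip link set s1 up
ip addr add 192.168.255.5/24 dev s1
ovs-vsctl set interface s1-gre0 options:local_ip=192.168.255.5    # optional: pin the tunnel source address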

From: ovs-discuss-bounces at openvswitch.org [mailto:ovs-discuss-bounces at openvswitch.org] On Behalf Of Rodrigo Ruas Oliveira
Sent: Thursday, 26 January, 2017 5:21 PM
To: ovs-discuss at openvswitch.org
Subject: [ovs-discuss] Single-sided tunnel

Hello all,

I'd like to verify whether what I'm trying to do is possible.

I'm trying to use tunnels to perform load balancing, much as it's done with VLB (Valiant load balancing). The idea is to send packets to an intermediary node, which acts as a proxy and deflects them to their intended destination.

As I understand it, OVS does not support IP-in-IP, only GRE and VXLAN; is that correct?
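
(As a side note, a quick way to check which tunnel types a particular build supports, assuming your OVS version exposes the iface_types column in the Open_vSwitch table:)

sh ovs-vsctl get Open_vSwitch . iface_types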

In any case, I've attempted to create a GRE interface on an OVS switch that encapsulates packets with an outer IPv4 header and sends them to a router (which owns that outer IP). The router should then decapsulate each packet and forward it to the next hop using the inner IPv4 header.
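
(For completeness, a rough, untested sketch of what the R1 side might need if R1 is a plain Linux host in mininet: "gre1" is an arbitrary name, <ovs1-tunnel-ip> stands for whatever source address OVS1 puts in the outer header, and the "r1" prefix just runs the command on that host from the mininet CLI.)

r1 ip tunnel add gre1 mode gre local 192.168.255.1 remote <ovs1-tunnel-ip> ttl 64
r1 ip link set gre1 up
r1 sysctl -w net.ipv4.ip_forward=1    # so R1 forwards the decapsulated packets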

I'm using the following topology on mininet:

                (192.168.255.1)    (192.168.255.4)
                              < R1 >
                             /      \
           < X1 > -- < OVS 1 > ----- < OVS 2 > -- < X2 >
(192.168.255.2)                                    (192.168.255.3)

I first validated the topology and configuration by running L2Learning on both switches. This checked out, and every host could reach every other host.

Next, I created a GRE port pointing to R1's IP on the R1-to-OVS1 interface (r1-eth0 -- s1-eth1) using:

sh ovs-vsctl add-port s1 s1-gre0 -- set interface s1-gre0 type=gre option:remote_ip=192.168.255.1

and then reconfigured OVS1 to forward packets from X1 to the s1-gre0 port. The result is a black hole: packets never leave S1. Is this expected? Should I be connecting s1-gre0 to s1-eth1 or to another switch port internally somehow?
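
(In case it helps, these are the checks I'd run next to narrow down where the packets die, with a ping from X1 to X2 going in the background; the exact trace output depends on the OVS version:)

sh ovs-ofctl dump-ports s1                          # does port 4's tx counter move at all?
sh ovs-vsctl list interface s1-gre0                 # anything reported in the error column?
sh ovs-appctl ofproto/trace s1 in_port=3,dl_dst=7a:73:b3:8f:b0:bf
sh tcpdump -ni s1-eth1 ip proto 47                  # do GRE-encapsulated packets ever reach the R1-facing link?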

Below is a dump of OVS1's flow table and ports. Ports 1, 2, and 3 connect to R1, S2, and X1, respectively. Port 4 is the GRE port.

The MAC addresses are:
R1: c2:e7:ba:8e:c8:d8
X1: 4a:cb:6a:d6:78:84
X2: 7a:73:b3:8f:b0:bf

mininet> sh ovs-ofctl dump-flows s1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=1058.311s, table=0, n_packets=8, n_bytes=616, idle_age=1015, priority=2,in_port=3 actions=output:4
 cookie=0x0, duration=1058.311s, table=0, n_packets=0, n_bytes=0, idle_age=1058, priority=1,dl_dst=c2:e7:ba:8e:c8:d8 actions=output:1
 cookie=0x0, duration=1058.311s, table=0, n_packets=3, n_bytes=238, idle_age=1015, priority=1,dl_dst=4a:cb:6a:d6:78:84 actions=output:3
 cookie=0x0, duration=1058.311s, table=0, n_packets=2, n_bytes=84, idle_age=1006, priority=1,dl_dst=7a:73:b3:8f:b0:bf actions=output:2
 cookie=0x0, duration=1058.312s, table=0, n_packets=3, n_bytes=126, idle_age=1006, priority=3,dl_dst=ff:ff:ff:ff:ff:ff actions=ALL

mininet> sh ovs-ofctl show s1
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000000000000001
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(s1-eth1): addr:96:6e:1e:4e:d4:c3
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 2(s1-eth2): addr:e6:06:80:f5:31:63
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 3(s1-eth3): addr:ca:b1:3e:7d:f4:92
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 4(s1-gre0): addr:de:10:c3:ae:d3:40
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(s1): addr:da:9e:08:55:b7:45
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

-- Rodrigo

