[ovs-discuss] Mirror through GRE on DPDK ports not forwarding.
David Evans
davidjoshuaevans at gmail.com
Tue Dec 16 20:24:48 UTC 2014
Hi All,
I'm setting up a GRE tunnel on an OVS bridge (OVS trunk with DPDK 1.7.1).
I have two bridges with one DPDK interface each.
Traffic generator -> DPDK0 -> B0(bridge) -> GRE0 (172.168.1.4)
B1 (172.168.1.1) -> DPDK1 -> host 172.168.1.3 (GRE termination)
When I set up GRE as below and push packets into DPDK0, ARP requests for
172.168.1.3 come out DPDK1, and I see them arrive on the remote host. OK so far.
I can also see the ARP replies coming back to DPDK1 from the remote host.
But the original packets to be mirrored don't make it through the tunnel.
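For reference, here is how I am checking whether the mirrored frames ever get GRE-encapsulated and routed out via b1 (a diagnostic sketch; it assumes tcpdump can capture on the b1 internal port, and filters on IP protocol 47, i.e. GRE):

```shell
# Watch the b1 internal port for GRE-encapsulated (IP protocol 47)
# packets destined for the remote tunnel endpoint. If the mirror were
# encapsulating correctly, the mirrored frames should appear here
# wrapped in GRE headers.
tcpdump -ni b1 ip proto 47 and host 172.168.1.3
```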
Some setup features follow: (routes, gre port setup, etc)
./ovs-vsctl add-port b0 gre0 \
    -- set Interface gre0 type=gre options:remote_ip=172.168.1.3 options:local_ip=172.168.1.4 \
    -- --id=@p get port gre0 \
    -- --id=@m create mirror name=m0 select-all=true output-port=@p \
    -- set bridge b0 mirrors=@m
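To confirm the mirror is actually selecting and emitting packets, I can inspect the mirror record and the datapath flows (a hedged sketch; the Mirror table's statistics column names may differ between OVS versions):

```shell
# Confirm the mirror record exists and references the GRE output port.
ovs-vsctl list mirror m0

# The mirror's statistics column should count mirrored packets
# (tx_packets/tx_bytes; availability depends on OVS version).
ovs-vsctl get mirror m0 statistics

# Dump datapath-level flows to see whether traffic entering dpdk0 is
# being output to the tunnel port at all.
ovs-appctl dpif/dump-flows b0
```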
ifconfig b1 172.168.1.1 netmask 255.255.255.0
[root at localhost utilities]# route -n
Kernel IP routing table
Destination     Gateway        Genmask          Flags  Metric  Ref  Use  Iface
0.0.0.0         10.218.160.1   0.0.0.0          UG     1024    0    0    ens2f1
10.218.160.0    0.0.0.0        255.255.240.0    U      0       0    0    ens2f1
172.168.1.0     0.0.0.0        255.255.255.0    U      0       0    0    b1
[root at localhost utilities]# ./ovs-appctl ovs/route/show
Route Table:
Cached: 10.218.169.14/32 dev ens2f1
Cached: 127.0.0.1/32 dev lo
Cached: 172.168.1.1/32 dev b1
Cached: 192.168.122.1/32 dev virbr0
Cached: 172.168.1.0/24 dev b1
Cached: 10.218.160.0/20 dev ens2f1
Cached: 127.0.0.0/8 dev lo
Cached: 0.0.0.0/0 dev ens2f1 GW 10.218.160.1
afbc043c-ec1a-4050-a1c0-8f0b669f93e6
Bridge "b0"
Port "dpdk0"
Interface "dpdk0"
type: dpdk
Port "gre0"
Interface "gre0"
type: gre
options: {local_ip="172.168.1.4", remote_ip="172.168.1.3"}
Port "b0"
Interface "b0"
type: internal
Bridge "b1"
Port "dpdk1"
Interface "dpdk1"
type: dpdk
Port "b1"
Interface "b1"
type: internal
[root at localhost utilities]# ./ovs-ofctl show b0
OFPT_FEATURES_REPLY (xid=0x2): dpid:000000e0ed1fe4e8
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src
mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(dpdk0): addr:00:e0:ed:1f:e4:e8
config: 0
state: 0
current: 10GB-FD
supported: 100MB-FD 1GB-FD 10GB-FD FIBER AUTO_PAUSE
speed: 10000 Mbps now, 10000 Mbps max
2(gre0): addr:8e:f5:f5:95:ac:25
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
LOCAL(b0): addr:00:e0:ed:1f:e4:e8
config: PORT_DOWN
state: LINK_DOWN
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[root at localhost utilities]# ./ovs-ofctl show b1
OFPT_FEATURES_REPLY (xid=0x2): dpid:000000e0ed1fe4e9
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src
mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(dpdk1): addr:00:e0:ed:1f:e4:e9
config: 0
state: 0
current: 10GB-FD
supported: 100MB-FD 1GB-FD 10GB-FD FIBER AUTO_PAUSE
speed: 10000 Mbps now, 10000 Mbps max
LOCAL(b1): addr:00:e0:ed:1f:e4:e9
config: 0
state: 0
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0