[ovs-discuss] SFC using OVN

Murali R muralirdev at gmail.com
Tue Nov 10 05:28:05 UTC 2015


In the old PoC (March-May 2015), the service chains worked by programming a
custom flow per chain, keyed on an NSH value. The NSH header effectively
served as the service-chain key. The flows were programmed from the OVS
agent. For instance, you could have video compression and a URL filter on a
chain; they could be on the same hypervisor or on different ones. The flows
matched on the NSH header (and a few more parameters) and their actions
routed packets to the next destination, using the following (abstract)
process:
- Input was a 5-tuple plus a list of dicts with the IP addresses of each
VNF, its prev-vnf, and its next-vnf. For the first VNF, prev was a HEAD
enum; for the last, next was a TAIL enum.
- In the Neutron servicechain plugin, I was able to build a linked list of
VNFs from the IP address chain list. For each node I fetched the ports,
device ID, and MAC addresses from the Neutron DB.
- The enriched linked list was then broadcast to all agents (this could be
optimized), and each agent identified itself in the chain and programmed
the flows for its node. If an agent did not find itself in the chain, it
left the standard flows untouched. If prev held an IP (not the HEAD enum),
there was an ingress path, so I routed the packets from br-int (I believe
it was table 20 or 40 from the br-tun patch); the next hop was programmed
to the MAC address within the switch.
- A dedicated NSH tunnel was created (on a different port) if there was an
egress to the next node; the packet was moved to the egress table (table
40, I recall) so it went out to the next hypervisor. Note that unclassified
traffic still went through the standard VXLAN tunnel.
- A custom flow definition was required from the entry-point classifier to
the first VNF in the chain.
- Another custom flow routed all traffic from the user endpoints to the
classifier mentioned above.
- Once the flows were programmed, packets carrying that NSH header would
traverse the chain as soon as the classifier put them onto this network.
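The steps above can be sketched roughly like this; all names, table
numbers, and the nsh_spi field are illustrative assumptions, not the actual
PoC code:

```python
# Rough sketch of the per-agent chain programming described above.
# Each agent receives the whole chain, keeps only the hops it hosts,
# and emits OpenFlow-style rules keyed on the chain's NSH value.

HEAD = "HEAD"
TAIL = "TAIL"

def flows_for_agent(nsh_spi, chain, local_vnf_ips):
    """Given a chain (list of {'ip','prev','next'} dicts), emit
    flow rules only for hops hosted on this hypervisor."""
    flows = []
    for hop in chain:
        if hop["ip"] not in local_vnf_ips:
            continue  # agent leaves standard flows untouched
        if hop["prev"] != HEAD:
            # ingress path: steer classified traffic to the local VNF
            flows.append(
                "table=20,nsh_spi=%d,nw_dst=%s,actions=local_deliver"
                % (nsh_spi, hop["ip"]))
        if hop["next"] != TAIL:
            # egress: send towards the next hop over the NSH tunnel port
            flows.append(
                "table=40,nsh_spi=%d,actions=set_next:%s,output:nsh_tun"
                % (nsh_spi, hop["next"]))
    return flows

chain = [
    {"ip": "10.0.0.11", "prev": HEAD, "next": "10.0.0.12"},
    {"ip": "10.0.0.12", "prev": "10.0.0.11", "next": TAIL},
]
# An agent hosting only the first VNF programs just that hop's flows.
print(flows_for_agent(7, chain, {"10.0.0.11"}))
```

The point of the sketch is the self-identification step: the same chain
description is sent everywhere, and locality decides which flows get
installed.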

For the PoC, all the VNFs were on the same network. Additional work was
needed to route across networks, but it was not done due to time
constraints. The actual solution was somewhat more detailed in
implementation than the abstract above.

The issue with the solution was that the NSH had to be re-classified every
time a packet came out of a VNF, because the VNFs were not service-aware.
Also, one of my colleagues added a hack in OVS, using another port and
tunnel type, to handle the NSH header; with it we were able to add an
action that stored the NSH value at an address upon classification. I had
to enhance the OVS agent and the KVM lib to pass this data through to OVS.

The solution was not clean, but it worked: we could dynamically create
flows from a service-chain input. However, because the NSH did not travel
through the switch when multiple VNFs sat on one hypervisor, multiple
re-classification steps were needed.

In the current run with OVN, a better way to program the flows would be to
use OVN metadata within the OVN domain. Once traffic heads out (towards a
VTEP), one could use MPLS tags or NSH headers. Because we already use
metadata to manage networks, it is not essential to use NSH headers inside
OVN, and traffic can probably (I'm not sure) be routed without
re-classification. However, in the old PoC the flows were programmed on
each hypervisor by the OVS agent, so behavior was more predictable. Now it
has to be done by ovn-controller by translating NB DB data, and the
associations must be maintained across the lflow-to-physical-flow
translation; I have not yet figured out a way to do that. Besides SFC, I
have many other use cases for custom flow definitions, so any design ideas
are welcome.
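To make the metadata idea concrete, here is a hypothetical illustration of
keying the chain on OVN metadata inside the domain and falling back to an
on-wire key only at the VTEP boundary. The field names follow OVS
conventions (metadata, reg0), but the values and mapping are assumptions,
not working OVN lflows:

```python
# Hypothetical match-string builder: inside OVN, metadata already
# identifies the logical datapath, so a register can carry the
# chain/hop key without touching the packet; when leaving towards a
# VTEP, the chain must be encoded in an on-wire header (MPLS here).

def chain_match(datapath, chain_id, inside_ovn):
    if inside_ovn:
        # no packet modification needed within the OVN domain
        return "metadata=0x%x,reg0=0x%x" % (datapath, chain_id)
    # crossing the VTEP boundary: encode the chain on the wire
    return "mpls_label=%d" % chain_id

print(chain_match(1, 7, True))   # in-domain match
print(chain_match(1, 7, False))  # egress match
```

This is only meant to show why no re-classification would be needed inside
the domain: the key lives in pipeline state rather than in the packet.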