[ovs-discuss] OpenVswitch

Chris Boley ilgtech75 at gmail.com
Sun Feb 25 20:31:51 UTC 2018


Grant, truly awesome insights. Thank you. I'll be putting them to good use.
Thanks a bunch for your time and consideration on this.

As for the servers I'm working with: "SUPERMICRO SYS-5018A-MHN4 1U
Rackmount Server"
I'm seeing it on Newegg right now for $729. You would, of course, need to add
your own drives and RAM.
Supermicro is now making a 5019 model, but I couldn't find a newer model
that ships in a case with multiple drive bays.
I suppose at that point, if you're dealing with 12 cores, you may want a
separate storage device to hook up anyhow. To each his own. ;-)
I digress, this isn't a forum to blabber about hardware. If you want to
know more you can reach out to me individually and I'd be happy to
entertain any questions you may have.
Thanks again for the insights!

Chris Boley

On Sun, Feb 25, 2018 at 3:01 PM, Grant Taylor via discuss <
ovs-discuss at openvswitch.org> wrote:

> On 02/25/2018 07:54 AM, Chris Boley wrote:
>
>> Sorry Grant, I think I replied directly to you the first time around. I
>> forgot the Reply All.
>>
>
> Things happen.
>
> Correct, *INLINE_IPS* filtering and dropping traffic. To the IPS VM it
>> would look like two Ethernet interfaces that were members of a transparent
>> bridge.
>>
>
> Well, /almost/ transparent bridge.  ;-)
>
> Here goes.. apologies for the lengthy explanation:
>>
>> It all comes down to port count. I don’t want to run any more hardware
>> than necessary.
>>
>
> Okay.
>
> I think I was assuming that you would be bringing a physical port in,
> adding it to a bridge, which would then have another port connected to the
> VM.  Thus the inquiry about bypassing the bridging and just connecting the
> physical port to the VM directly.
>
> …continuing to read your reply…
>
> I’m trying to do a concept build for an all-in-one kind of thing. It hinges
>> around virtualization of a couple OS’s.  This is meant for a SOHO
>> environment with 25-50 users where load wouldn’t likely be much of an issue
>> at all.
>>
>
> ACK
>
> HOST: I’m running a Supermicro server with Ubuntu 16.04 / libvirt-based
>> hypervisor, an 8-core Atom-based proc, 32 GB of RAM, and 4 built-in Intel
>> NICs onboard.
>>
>
> I'd like to know more details about the Supermicro server as it may fit
> desires that I have.  ;-)
>
> GUESTS: 5 VMs were planned to run as guests on this hardware. One Ubuntu
>> guest running LogAnalyzer/syslog server, one Ubuntu guest running NFSEN,
>> one VyOS router, and a Suricata-based IPS porting its logs out via syslog
>> to the LogAnalyzer instance. Lastly, a tiny instance of Ubuntu running
>> Arpwatch to detect / log new hosts plugging into a customer network.
>>
>
> Okay.  I think I track most of that.  I'm having trouble placing NFSEN at
> the moment.  I want to say I ran across it while messing with NetFlow years
> ago.
>
> …reading…
>
> There are 4 Intel NICs that come onboard with the server. (My desire is
>> to avoid adding NICs.)
>>
>
> Fair.
>
> There would be 2 OVS logical bridges vbridge0 and vbridge1
>>
>> One NIC was to go to the mgmt of the host. 2 NICs were to be bridged /
>> linked to separate trunk ports on a Cisco 48-port 3750G to handle all
>> traffic going to the guests I mentioned and internal user traffic connected
>> to vbridge1. The 3750G would handle the access layer for physical computers
>> running on the LAN.
>>
>
> Unless I'm missing something, that's three ports and vbridge1.
>
> I'm not seeing the 4th port or vbridge0 yet.
>
>>
> The internal VM guests, including the IPS, would have NICs that would be tap
>> interfaces. Like this: ovs-vsctl add-port vswitch0 vnic0 tag=10 --
>> ovs-vsctl set interface vnic0 type=internal (vnic0 being the tap IF).
>>
>
> Okay.
>
> Aside:  Do you need the second ovs-vsctl?  I thought you could run all of
> that as one command with the double dash separating commands to ovs-vsctl.
> (Maybe that's what you're doing and it didn't translate well to email.)
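> For reference, a sketch of the single-invocation form, using the bridge and
> port names from your example (assumes a running ovsdb-server):

```shell
# One ovs-vsctl invocation; "--" starts a second sub-command, so the
# second "ovs-vsctl" in the original command line is redundant.
ovs-vsctl add-port vswitch0 vnic0 tag=10 -- set interface vnic0 type=internal
```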
>
> The IPS will be a transparent bridge between the Logical OVS bridges. (a
>> bridge within a bridge within a bridge ;D ) <== I might run into hiccups
>> here as I don't know how OVS will react to a VM bridging two logical
>> bridges?
>>
>
> I'm still not seeing vbridge0 connected to anything other than the output
> of the IPS VM.
>
> I don't feel like we've gone quite as far as Inception.  Maybe we need to
> give the switch stack a kick.  ;-)
>
> Tap interfaces come up via /etc/network/interfaces on the host.
>> (the ones coming into the IPS would be tuned the same for the driver
>> manipulation and the rest would be basic) (I’d also have to do the same for
>> the virtio nics in the IPS VM to get continuity of nic behavior in all
>> areas of the connectivity)
>>
>
> Okay.
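> For what it's worth, a minimal /etc/network/interfaces stanza for an OVS
> internal port might look like this (names taken from your example; assumes
> the ifupdown integration shipped with openvswitch-switch on Ubuntu 16.04):

```
allow-vswitch0 vnic0
iface vnic0 inet manual
    ovs_type OVSIntPort
    ovs_bridge vswitch0
    ovs_options tag=10
```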
>
> The last host port was planned to be bridged directly to the WAN interface
>> meant for the WAN side of the vyOS router so I could plug it into the WAN
>> directly for sake of ease.
>>
>
> Is this where vbridge0 comes into play?
>
> If vbridge0 is just to connect eth0 to the VyOS guest VM, i.e. the
> following ASCII diagram, this is where I'm suggesting foregoing the bridge.
>
> [eth0]---[vbridge0]---[VyOS]
>
> I'd be inclined to remove the bridge and just do this directly:
>
> [eth0]---[VyOS]
>
> Why does [vbridge0] need to be there?  Is it doing anything other than
> enabling what is effectively a point-to-point connection between eth0 and
> VyOS?
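> If you do drop the bridge, one way to hand eth0 straight to the guest under
> libvirt is a macvtap attachment in passthrough mode; a sketch (the domain
> name "vyos" is a placeholder):

```shell
# Attach eth0 directly to the VyOS guest via macvtap;
# --config makes the change persistent across guest restarts.
virsh attach-interface --domain vyos --type direct --source eth0 \
    --model virtio --config
```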
>
> Are you wanting to leverage other features of OVS, i.e. instrumentation,
> of the data flowing between eth0 and VyOS?
>
> My concept is my own harebrained idea, but I think it can be done without
>> too much headache. Once perfected it would just be an install process. I
>> just want to make sure that as traffic comes into the host, the host
>> NIC driver isn’t modifying the packets before they hit the IPS. This may
>> not even work, but if it does, I could potentially save a bunch of money in
>> discrete server costs and cut down on power use immensely. A single
>> Supermicro server like this with 6 terabytes of RAID 5 space is less than
>> 1500 USD. A Cisco 3750G is cheap these days. Or upgrade to a 2960-X, which
>> is a de facto standard Cisco access-layer switch these days.
>>
>> I could hand off an entire fully capable enterprise class router/traffic
>> shaper/vpn capable/ips capable/netflow capable network server for a couple
>> of thousand and sell maintenance / monitoring of said systems monthly. So
>> that’s the WHY in this whole thing. Small office environments are my target
>> market.  Sorry… too much information?
>>
>
> I hear you.  I've done some similar things in the past.
>
> No, it's not too much information.
>
> One thing that I don't have a good mental picture of yet is how the
> various VMs are going to be interconnected.  I believe this is the VM list
> extracted from above.
>
> 1)  One Ubuntu guest running LogAnalyzer/syslog server
> 2)  one Ubuntu guest running NFSEN
> 3)  one vyOS router
> 4)  Suricata based IPS
> 5)  a tiny instance of Ubuntu running Arpwatch
>
> IMHO #1, #2, and #5 are management / housekeeping type VMs that
> don't actually impact traffic flow, i.e. they sit out of the data path.
> Conversely, #3 and #4 are actually in your data path and controlling traffic.
>
> Based on other context of this thread, it sounds like you will have an
> architecture that resembles something like this:
>
> [eth0]---[vbridge0]---[VyOS]---[IPS]---[vbridge1]---[VM #1]
> [eth1]---------------------------------[        ]---[VM #2]
> [eth2]---------------------------------[        ]---[VM #5]
> [eth3]---[Ubuntu VM host]
>
> Assuming that this is even remotely correct, I still feel like vbridge0 is
> unnecessary.
>
> [eth0]----------------[VyOS]---[IPS]---[vbridge1]---[VM #1]
> [eth1]---------------------------------[        ]---[VM #2]
> [eth2]---------------------------------[        ]---[VM #5]
> [eth3]---[Ubuntu VM host]
>
> Here's how crazy I'd be inclined to do things if I were building this:
>
> Bond all four of the ports together with LACP (802.3ad) and then carry
> VLAN tags (802.1q) across said bond and create sub-interfaces bond0.$VID.
>
> That would:
>  - Ensure that you don't fail on the single WAN link (eth0) failing.
>  - Enable you to support multiple WAN links via VLAN tagged
> sub-interfaces.  Thus enabling connections to multiple ISPs.
>  - Means that all of the ports on the server are the same.  No need to
> differentiate.
>  - Management could be untagged / native or a specific VLAN.
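> A rough ifupdown sketch of that layout (interface and VLAN names are my own
> assumptions; needs the ifenslave and vlan packages installed):

```
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1 eth2 eth3
    bond-mode 802.3ad
    bond-miimon 100

# One tagged sub-interface per VLAN, e.g. VLAN 2 for a second WAN link.
auto bond0.2
iface bond0.2 inet manual
    vlan-raw-device bond0
```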
>
> I'd also do some playing with bonding to see if there's a way to get
> connectivity even when LACP (802.3ad) is not active.  -  I don't know if
> this is possible per se or if it would take some other special magic.
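> If the bond is built in OVS rather than the kernel, there is an option for
> exactly this; a sketch (bridge and port names assumed):

```shell
# OVS bond that negotiates LACP but falls back to active-backup
# if the peer never completes negotiation.
ovs-vsctl add-bond vbridge1 bond0 eth0 eth1 eth2 eth3 lacp=active \
    -- set port bond0 other_config:lacp-fallback-ab=true
```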
>
> [eth0]---[bond0]---[vlan1]---------------[VyOS]---[vEth0]
> [eth1]---[     ]---[vlan2]---------------[    ]   [     ]
> [eth2]---[     ]---[vlan3]---------------[    ]   [     ]
> [eth3]---[     ]---[vlan4]---[bridge0]---[IPS]----[     ]
>          [     ]             [       ]---[VM #1]
>          [     ]---[admin]   [       ]---[VM #2]
>                              [       ]---[VM #5]
>
> Note:  I would use a vEth between VyOS and the IPS as they are purportedly
> slightly faster than kernel or OVS bridges.  -  You may prefer to use an
> additional VLAN in OVS.
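> Creating the veth pair itself is a one-liner with iproute2 (names are
> placeholders; each end would then be assigned to one guest's definition):

```shell
# Create a connected veth pair and bring both ends up.
ip link add vyos-ips0 type veth peer name vyos-ips1
ip link set vyos-ips0 up
ip link set vyos-ips1 up
```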
>
> https://www.opencloudblog.com/?p=240
>> https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1448254
>>
>> After reviewing those two links I came to mostly the same conclusion
>> albeit I haven’t had a chance to put that theory to the test.
>>
>
> Fair.  I could have easily been wrong and you correct in that you needed
> to account for more issues.
>
> I’ll have to set this up and do the testing to be sure even for myself.
>> Thanks again for your input!
>>
>
> You're welcome.  Good luck!
>
>
>
>
> --
> Grant. . . .
> unix || die
>
>
> _______________________________________________
> discuss mailing list
> discuss at openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
>