[ovs-discuss] Openvswitch and LXC integration on Ubuntu 18.04

densha at exemail.com.au
Mon May 14 10:55:22 UTC 2018


Paul

Thanks for that command.  I tried it and found that my br-int was not up.

After  "sudo ip link set br-int up" and "sudo ip addr add 192.168.1.1/24
dev br-int" it worked and I could ping as expected.

For Ubuntu 18.04 I have added the following to /etc/network/interfaces

allow-ovs br-int
iface br-int inet static
    address 192.168.1.1
    netmask 255.255.255.0
    ovs_type OVSBridge

But br-int is not coming up correctly after reboot.

5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
    link/ether c6:8e:e2:7b:0f:4f brd ff:ff:ff:ff:ff:ff

Is this the correct way to define an OpenvSwitch bridge with an IP on Ubuntu?
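One likely catch (an assumption based on 18.04's defaults, not something verified in this thread): Ubuntu 18.04 manages networking with netplan, and /etc/network/interfaces is only processed if the ifupdown package is installed alongside the ifupdown hooks shipped by openvswitch-switch.  As a fallback, a oneshot systemd unit can bring the bridge up at boot; the bridge name and address below come from the stanza above, the rest is a sketch:

```ini
# /etc/systemd/system/br-int-up.service (sketch; enable with
# "sudo systemctl enable br-int-up.service")
[Unit]
Description=Bring up OVS bridge br-int with a static address
Requires=openvswitch-switch.service
After=openvswitch-switch.service

[Service]
Type=oneshot
ExecStart=/sbin/ip link set br-int up
ExecStart=/sbin/ip addr add 192.168.1.1/24 dev br-int
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```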


Thanks

Densha

> Before you rebuild, I suggest you ping at an interval of 0.01 seconds,
> then take "ovs-ofctl dump-flows br-int" and save it to a file. The
> relevant columns are table and n_packets. Wait a couple of seconds,
> then take the dump again. Compare and find the entries that increment
> at the rate of your ping.
>
> If you don't see the hits in the tables - check iptables, kmod, etc.
>
> If you see them, use trace to figure out why your traffic is being
> dropped.
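The dump-and-compare step can be sketched like this; the two capture files below hold made-up flows purely to show the mechanics (the real captures would each come from "ovs-ofctl dump-flows br-int", taken a couple of seconds apart):

```shell
# made-up sample captures standing in for two real dump-flows snapshots
cat > /tmp/flows_t0 <<'EOF'
 cookie=0x0, duration=10.1s, table=0, n_packets=5, priority=0 actions=NORMAL
 cookie=0x0, duration=10.1s, table=1, n_packets=0, priority=10 actions=drop
EOF
cat > /tmp/flows_t2 <<'EOF'
 cookie=0x0, duration=12.1s, table=0, n_packets=205, priority=0 actions=NORMAL
 cookie=0x0, duration=12.1s, table=1, n_packets=0, priority=10 actions=drop
EOF
# strip the always-changing duration field, then diff: the only lines
# reported are flows whose n_packets counter moved between snapshots
sed 's/duration=[^,]*, //' /tmp/flows_t0 > /tmp/flows_t0.clean
sed 's/duration=[^,]*, //' /tmp/flows_t2 > /tmp/flows_t2.clean
diff /tmp/flows_t0.clean /tmp/flows_t2.clean || true
```

With a 0.01s ping interval the incrementing entry stands out quickly; entries whose counters stay flat are not on the ping's path.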
>
> Regards,
> Paul
>
>
> ________________________________
> From: ovs-discuss-bounces at openvswitch.org
> <ovs-discuss-bounces at openvswitch.org> on behalf of densha at exemail.com.au
> <densha at exemail.com.au>
> Sent: Saturday, May 12, 2018 11:45:57 PM
> To: Orabuntu-LXC
> Cc: ovs-discuss at openvswitch.org
> Subject: Re: [ovs-discuss] Openvswitch and LXC integration on Ubuntu 18.04
>
> Thanks.  I tried that and still unable to ping from the LXC container to
> the IP address set on the bridge.
>
> I will rebuild everything from scratch and retry.
>
>> Check sysctl settings.  Check/set these on the LXC host machine in
>> /etc/sysctl.conf (or in a new file in the /etc/sysctl.d directory,
>> e.g. you could call it /etc/sysctl.d/60-lxc.conf):
>>
>> net.ipv4.conf.default.rp_filter=0
>> net.ipv4.conf.all.rp_filter=0
>> net.ipv4.ip_forward=1
>>
>> Reference:
>> https://thenewstack.io/solving-a-common-beginners-problem-when-pinging-from-an-openstack-instance/
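Once these are applied (e.g. with "sudo sysctl --system" or a reboot), the live values can be read back from /proc to confirm they stuck; a quick check sketch:

```shell
# read the running values back from /proc; after applying the settings
# above, ip_forward should read 1 and rp_filter 0
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/conf/all/rp_filter
```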
>>
>>
>>
>> On Sat, May 12, 2018 at 7:09 AM, <densha at exemail.com.au> wrote:
>>
>>> Thanks for the response and links.  I will watch the OvS Con videos.
>>>
>>> I have now successfully started the container, but I am unable to
>>> ping out of or into the container.
>>>
>>> I have modified my /var/lib/vm1/conf to be
>>>
>>> # Network configuration
>>> lxc.net.0.type = veth
>>> lxc.net.0.link = br-int     <- Name of my internal container bridge
>>> lxc.net.0.flags = up
>>> lxc.net.0.name = eth0
>>> lxc.net.0.hwaddr = 00:16:3e:d2:23:a8    <- This was in the conf when
>>> created.
>>>
>>>
>>> When I start the container I can see the port being added to the
>>> bridge on the host system
>>>
>>> # sudo lxc-start -n vm1
>>> # sudo ovs-vsctl show
>>> c3d9247e-68f1-4ae1-be0e-4bb86fd2c541
>>>     Bridge br-dmz
>>>         Port br-dmz
>>>             Interface br-dmz
>>>                 type: internal
>>>     Bridge br-int
>>>         Port "veth4U4B0B"     <- New port added when container starts
>>>             Interface "veth4U4B0B"
>>>         Port br-int
>>>             Interface br-int
>>>                 type: internal
>>>         Port "enp2s0"
>>>             Interface "enp2s0"
>>>     ovs_version: "2.9.0"
>>>
>>> The bridge br-int has self IP 192.168.10.1/24 - that I added using
>>> (after reboot)
>>>
>>> # sudo ip addr add 192.168.10.1/24 dev br-int
>>>
>>> 5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
>>> default qlen 1000
>>>     link/ether 00:01:80:82:f8:59 brd ff:ff:ff:ff:ff:ff
>>>     inet 192.168.10.1/24 scope global br-int
>>>        valid_lft forever preferred_lft forever
>>>
>>> and the new port
>>>
>>> 8: veth4U4B0B at if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>>> noqueue master ovs-system state UP group default qlen 1000
>>>     link/ether fe:b8:87:1b:1e:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
>>>     inet6 fe80::fcb8:87ff:fe1b:1e5e/64 scope link
>>>        valid_lft forever preferred_lft forever
>>>
>>> Inside the container I set the IP of eth0 device using
>>>
>>> ubuntu at vm1:~$ sudo ip addr add 192.168.10.2/24 dev eth0
>>>
>>> ubuntu at vm1:~$ ip a
>>> 7: eth0 at if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>>> state UP group default qlen 1000
>>>     link/ether 00:16:3e:d2:23:a8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>>>     inet 192.168.10.2/24 scope global eth0
>>>        valid_lft forever preferred_lft forever
>>>     inet6 fe80::216:3eff:fed2:23a8/64 scope link
>>>        valid_lft forever preferred_lft forever
>>>
>>> However, I still cannot ping the self IP of the bridge.
>>>
>>> Is there anything obvious that I have configured wrong?
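One thing that stands out in the "ip a" output above: br-int reports state DOWN and its flag list lacks UP, and an OVS internal port will not answer pings on its self IP until the link is brought up ("sudo ip link set br-int up").  A tiny sketch of the flag check, run here against the pasted line:

```shell
# line copied from the "ip a" output above; the <...> flag list is where
# UP would appear on a link that is administratively up
line='5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group'
if echo "$line" | grep -q '[<,]UP[,>]'; then
  echo "br-int is administratively up"
else
  echo "br-int is down - run: sudo ip link set br-int up"
fi
```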
>>>
>>> Thanks
>>>
>>> Densha
>>>
>>>
>>> > These materials might help:
>>> >
>>> > 1.  Presentation on running LXC on OpenvSwitch at OvS Con:
>>> >
>>> > https://www.youtube.com/watch?v=MXewSiDvQl4&t=221s (presentation I
>>> > gave at OvS Con).
>>> >
>>> > I discuss in the preso that for LXC 2.1+, you now have the option
>>> > to configure OpenvSwitch for LXC in two different ways.  You can
>>> > configure it using, as you mentioned, the scripts (and this was the
>>> > way we had to do it in LXC 1.0.x and 2.0.x).  This method has the
>>> > advantage that VLANs can also be configured pretty easily in these
>>> > scripts too.
>>> >
>>> > lxc.net.0.script.up
>>> > lxc.net.0.script.down
>>> >
>>> > Or, starting from 2.1.x you can also configure it directly in the LXC
>>> > config using for example these parameters:
>>> >
>>> >   lxc.net.0.type = veth
>>> >   lxc.net.0.link = ovsbr0
>>> >   lxc.net.0.flags = up
>>> >   lxc.net.0.name = eth0
>>> >
>>> > which is also discussed here:
>>> > https://discuss.linuxcontainers.org/t/lxc-2-1-has-been-released/487
>>> >
>>> > 2.  Also, my Orabuntu-LXC software project is specifically
>>> > designed for deploying an entire LXC VLAN-tagged infrastructure on
>>> > OpenvSwitch with just a single command:
>>> > https://github.com/gstanden/orabuntu-lxc
>>> >
>>> > See if these references above help you set it up, and if not, let
>>> > me know.
>>> >
>>> > HTH, Gilbert
>>> >
>>> >
>>> >
>>> > On Sat, May 12, 2018 at 2:32 AM, <densha at exemail.com.au> wrote:
>>> >
>>> >>
>>> >> I am attempting to use LXC containers with OpenVswitch on Ubuntu
>>> >> 18.04 LTS server.  However, I am unable to work out the syntax
>>> >> for the container settings.  The container is failing to start
>>> >> because it is unable to create the network.
>>> >>
>>> >> I did a vanilla install onto a media player with two NIC cards -
>>> >> enp1s0 and enp2s0.
>>> >>
>>> >> I installed, created, tested and then destroyed a container
>>> >> using lxc to confirm that lxc was functioning correctly on the
>>> >> server.
>>> >>
>>> >> #sudo apt-get install lxc lxc-templates wget bridge-utils
>>> >> #sudo lxc-checkconfig
>>> >> #sudo lxc-create -n vm1 -t ubuntu
>>> >> #sudo lxc-start -n vm1
>>> >> #sudo lxc-console -n vm1
>>> >> #sudo lxc-stop -n vm1
>>> >> #sudo lxc-destroy -n vm1
>>> >>
>>> >> I then removed the lxc bridge lxcbr0 by setting USE_LXC_BRIDGE to
>>> >> false in /etc/default/lxc-net, removed the lxcbr0 device, and
>>> >> rebooted.
>>> >>
>>> >> # sudo ip link set lxcbr0 down
>>> >> # sudo brctl delbr lxcbr0
>>> >>
>>> >> I then installed openvswitch and created two bridges: br-dmz (dmz
>>> >> containers - 172.18.0.0/24) and br-int (internal containers -
>>> >> 192.168.0.0/24).  I have added physical NIC port enp2s0 to br-int
>>> >> as I have a local WAP installed on that interface.
>>> >>
>>> >> #sudo apt-get install openvswitch-switch
>>> >> #sudo ovs-vsctl add-br br-dmz
>>> >> #sudo ovs-vsctl add-br br-int
>>> >> #sudo ovs-vsctl add-port br-int enp2s0
>>> >>
>>> >> #sudo ip addr add 172.18.0.1/24 dev br-dmz
>>> >> #sudo ip addr add 192.168.10.1/24 dev br-int
>>> >>
>>> >> #sudo ovs-vsctl show
>>> >> c3d9247e-68f1-4ae1-be0e-4bb86fd2c541
>>> >>     Bridge br-dmz
>>> >>         Port br-dmz
>>> >>             Interface br-dmz
>>> >>                 type: internal
>>> >>     Bridge br-int
>>> >>         Port br-int
>>> >>             Interface br-int
>>> >>                 type: internal
>>> >>         Port "enp2s0"
>>> >>             Interface "enp2s0"
>>> >>     ovs_version: "2.9.0"
>>> >>
>>> >> #ip a
>>> >>
>>> >> 5: br-dmz: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>> group
>>> >> default qlen 1000
>>> >>     link/ether 7e:86:2a:79:24:4e brd ff:ff:ff:ff:ff:ff
>>> >>     inet 172.18.0.1/24 scope global br-dmz
>>> >>        valid_lft forever preferred_lft forever
>>> >> 6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>> group
>>> >> default qlen 1000
>>> >>     link/ether 00:01:80:82:f8:59 brd ff:ff:ff:ff:ff:ff
>>> >>     inet 192.168.10.1/24 scope global br-int
>>> >>        valid_lft forever preferred_lft forever
>>> >>
>>> >>
>>> >> I created an LXC container vm1, which I would like to attach to br-int
>>> >>
>>> >> sudo lxc-create -n vm1 -t ubuntu
>>> >>
>>> >> Edit the VM's config: vi /var/lib/lxc/vm1/config
>>> >>
>>> >> lxc.net.0.link = br-int    <- from lxcbr0
>>> >> lxc.net.0.script.up=/etc/lxc/ifup       <- added
>>> >> lxc.net.0.script.down=/etc/lxc/ifdown   <- added
>>> >>
>>> >> Created scripts to ifup/ifdown the interface
>>> >>
>>> >> vi /etc/lxc/ifup
>>> >> #!/bin/bash
>>> >> BRIDGE=br-int
>>> >> ovs-vsctl --may-exist add-br $BRIDGE
>>> >> ovs-vsctl --if-exists del-port $BRIDGE $5
>>> >> ovs-vsctl --may-exist add-port $BRIDGE $5
>>> >>
>>> >> vi /etc/lxc/ifdown
>>> >> #!/bin/bash
>>> >> ovsBr=br-int
>>> >> ovs-vsctl --if-exists del-port ${ovsBr} $5
>>> >>
>>> >> chmod +x /etc/lxc/if*
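Since a failing up-script surfaces only as "Script exited with status 1", an instrumented variant of the ifup script helps pin down which call fails.  This sketch writes to /tmp/ifup rather than /etc/lxc/ifup so it can be inspected first; the argument positions LXC passes ($5 as the interface name here, matching the script above) differ between LXC releases, so treat that as an assumption:

```shell
# write an instrumented copy of the ifup hook to /tmp for inspection;
# it logs the arguments LXC passes and which ovs-vsctl call failed
cat > /tmp/ifup <<'EOF'
#!/bin/bash
BRIDGE=br-int
LOG=/tmp/lxc-ifup.log
echo "$(date) args: $*" >> "$LOG"
ovs-vsctl --may-exist add-br "$BRIDGE" || { echo "add-br failed" >> "$LOG"; exit 1; }
ovs-vsctl --if-exists del-port "$BRIDGE" "$5"
ovs-vsctl --may-exist add-port "$BRIDGE" "$5" || { echo "add-port $5 failed" >> "$LOG"; exit 1; }
EOF
chmod +x /tmp/ifup
```

Running the container with this hook in place and then reading /tmp/lxc-ifup.log shows whether the script is invoked at all, what arguments it received, and which ovs-vsctl call returned non-zero.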
>>> >>
>>> >> When I try to start the container using openvswitch I get the
>>> >> following error.
>>> >>
>>> >> sudo lxc-start -n vm1 --logfile log.txt
>>> >>
>>> >> lxc-start vm1 20180512072653.582 ERROR    lxc_conf -
>>> >> conf.c:run_buffer:347 - Script exited with status 1
>>> >> lxc-start vm1 20180512072653.610 ERROR    lxc_network -
>>> >> network.c:lxc_create_network_priv:2436 - Failed to create network
>>> >> device
>>> >> lxc-start vm1 20180512072653.610 ERROR    lxc_start -
>>> >> start.c:lxc_spawn:1545 - Failed to create the network
>>> >> lxc-start vm1 20180512072653.610 ERROR    lxc_start -
>>> >> start.c:__lxc_start:1866 - Failed to spawn container "vm1"
>>> >> lxc-start vm1 20180512072653.610 ERROR    lxc_container -
>>> >> lxccontainer.c:wait_on_daemonized_start:824 - Received container
>>> >> state "STOPPING" instead of "RUNNING"
>>> >>
>>> >>
>>> >> Any idea what I have missed that is causing the container network
>>> >> to not be created?
>>> >>
>>> >> Thanks
>>> >>
>>> >> Densha
>>> >>
>>> >>
>>> >> _______________________________________________
>>> >> discuss mailing list
>>> >> discuss at openvswitch.org
>>> >> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > Gilbert Standen
>>> > Creator Orabuntu-LXC
>>> > 914-261-4594
>>> > gilbert at orabuntu-lxc.com
>>> >
>>>
>>>
>>>
>>
>>
>>
>
>
>



