[ovs-discuss] OVS with tunneling and offloading

Édouard Thuleau thuleau at gmail.com
Wed Dec 11 10:37:07 UTC 2013


Hi,

I use OpenStack Neutron with OVS and VXLAN encapsulation.

# ovs-vsctl -V
ovs-vsctl (Open vSwitch) 2.0.0

# uname -r
3.2.0-41-generic

I've got a Cisco Nexus fabric and I'd like to be able to use it with the
maximum frame size (9216 octets) without impacting the MTU configuration
of the guest VMs.

The compute nodes use Intel NICs with the ixgbe driver [1] and the KVM
hypervisor with virtio drivers.

Here is a compute node configuration with a VM attached to a virtual network:

VM -- tap -- qbr -- qvo/qvb (veth) -- br-int -- patch port -- br-tun
-- VXLAN tunnel port    |x|    VLAN interface with tunnel IP -- ethX
-- wire

The Linux bridge is used to apply firewalling with netfilter.

I set the MTU of the ethX, VLAN and br-tun interfaces to 9216 and of the
veth pair (qvo and qvb) to 9166. I don't change the guest VM MTU.
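
For reference, I set them with standard ip link commands like the
following (vlanX, qvoXXX and qvbXXX stand in for the real interface
names):

# ip link set dev ethX mtu 9216
# ip link set dev vlanX mtu 9216
# ip link set dev br-tun mtu 9216
# ip link set dev qvoXXX mtu 9166
# ip link set dev qvbXXX mtu 9166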

If I send a large ping (ping -s 9138 ...) between 2 VMs on different
compute nodes, jumbo frames are used: the VXLAN packet size on the wire
is 9234 octets. I see fragmented ICMP packets (1514 octets) going through
the VM tap interfaces, and the packets are reassembled somewhere between
the qbr Linux bridge and the wire.
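
For what it's worth, 9234 octets matches the expected encapsulation
overhead exactly:

  9138 (ICMP data) + 8 (ICMP) + 20 (IP) = 9166 (inner IP packet)
  9166 + 14 (inner Ethernet) = 9180 (inner frame)
  9180 + 8 (VXLAN) + 8 (UDP) + 20 (outer IP) + 4 (802.1Q)
       + 14 (outer Ethernet) = 9234 octets on the wire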

If I run the same test with a TCP flow (iperf), jumbo frames aren't used.
I see large TCP packets (~65k octets) sent from the VM tap interface
thanks to TSO, but those packets are segmented down to the 1500-octet MTU
on the veth pair (large ~65k packets on qvb, 1514-octet frames on qvo).
If I change the MTU of the VM tap interface and of the guest's ethX
interface to 9166, I'm able to get jumbo-frame packet sizes on the wire.
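
The offload state of each interface along the path can be checked with
ethtool, e.g. (qvbXXX being a placeholder for the real veth name):

# ethtool -k qvbXXX | grep -i segmentation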

I saw that the veth Linux implementation had some bugs fixed in recent
kernels [2][3]. So I tried to replace the veth pair between the Linux
bridge (qbr) and the OVS bridge (br-int) with an OVS internal port.
The compute node configuration becomes:

VM -- tap -- qbr -- qvo (OVS internal port) -- br-int -- patch port --
br-tun -- VXLAN tunnel port    |x|    VLAN interface with tunnel IP --
ethX -- wire
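
Roughly, I swapped the veth pair for an internal port with something like
this (port and bridge names are placeholders):

# ovs-vsctl del-port br-int qvoXXX
# ip link del dev qvbXXX
# ovs-vsctl add-port br-int qvoXXX -- set Interface qvoXXX type=internal
# brctl addif qbrXXX qvoXXX
# ip link set dev qvoXXX up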

I ran the test again, but the results are identical.
In that case, when I capture packets on the OVS internal port (qvo), the
packets are large (~65k). I see MTU-sized packets when I capture on the
VLAN interface (1564 octets) and on the physical interface (1568 octets).
If I change the MTU of the VM tap interface and of the guest's ethX
interface to 9166, the iperf test doesn't work: there are fragmentation
failures ("fragments failed") on the sending compute node. I see large
packets (9180 octets) on the VM tap interface and the Linux bridge, and
they are dropped by the OVS internal port. Is it possible to change the
MTU of an internal port?
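
I would have expected something like the following to do it (qvoXXX being
the internal port), but I'm not sure OVS honours it:

# ip link set dev qvoXXX mtu 9166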


In these tests I didn't change the offloading configuration [4].
Do you think it's possible to use the offloading functions to exploit
jumbo frames on the physical fabric without impacting the MTU
configuration of the guest VM interfaces?
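
To be clear, by offloading functions I mean the usual ethtool features
(TSO/GSO/GRO), which can be toggled per interface with something like:

# ethtool -K ethX tso on gso on gro on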

[1] http://paste.openstack.org/show/54347/
[2] https://github.com/torvalds/linux/commit/82d8189826d54740607e6a240e602850ef62a07d
[3] https://github.com/torvalds/linux/commit/b69bbddfa136dc53ac319d58bc38b41f8aefffea
[4] http://paste.openstack.org/show/54749/

Regards,
Édouard.
