[ovs-discuss] OVS with tunneling and offloading

Jesse Gross jesse at nicira.com
Mon Dec 16 17:44:19 UTC 2013


The guest TCP stack is the connection endpoint and sets the MSS of the
packets. As a result, if you want it to use a different MSS it must
know about the underlying MTU. You might see larger packets at various
points in the stack but this is simply an optimization and the MSS
that the guest requested will ultimately be used before the packet
hits the wire.
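The relationship described above can be made concrete: for IPv4, a guest advertises an MSS of its interface MTU minus the 20-byte IP header and the 20-byte TCP header, so segment size on the wire tracks the guest MTU no matter what offloads sit in between. A quick sketch of the arithmetic (editor's illustration, not from the thread):

```python
def tcp_mss_ipv4(mtu: int) -> int:
    """MSS a guest advertises for IPv4: MTU minus IP (20) and TCP (20) headers."""
    return mtu - 20 - 20

# A guest left at the default 1500-byte MTU advertises MSS 1460, so its
# peer never sends segments large enough to need a jumbo frame, while a
# guest with MTU 9166 would advertise 9126.
print(tcp_mss_ipv4(1500))  # 1460
print(tcp_mss_ipv4(9166))  # 9126
```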

On Mon, Dec 16, 2013 at 12:32 AM, Édouard Thuleau <thuleau at gmail.com> wrote:
> Hi all,
>
> I ran these tests with a recent kernel (3.11.0) and I still cannot use
> jumbo frames without changing the guest MTU. I ran them because I
> thought the Linux patch [1] pushed by Nicira might solve my problem.
>
> One thing changed: when I use a veth between the Linux bridge (qbr) and
> the OVS bridge (br-int), the packets captured on both sides (qvb and
> qvo) are large (~40 KB). The veth no longer breaks offloading.
>
> [1] http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=731362674580cb0c696cd1b1a03d8461a10cf90a
>
> Any thoughts ?
> Édouard.
>
> On Wed, Dec 11, 2013 at 11:37 AM, Édouard Thuleau <thuleau at gmail.com> wrote:
>> Hi,
>>
>> I use OpenStack Neutron with OVS and VXLAN encapsulation.
>>
>> # ovs-vsctl -V
>> ovs-vsctl (Open vSwitch) 2.0.0
>>
>> # uname -r
>> 3.2.0-41-generic
>>
>> I have a Cisco Nexus fabric and I would like to use it with the
>> maximum frame size (9216 octets) without impacting the MTU
>> configuration of the guest VMs.
>>
>> The compute nodes use Intel NICs with the ixgbe driver [1] and the
>> KVM hypervisor with virtio drivers.
>>
>> Here is a compute node configuration with a VM attached to a virtual network:
>>
>> VM -- tap -- qbr -- qvo/qvb (veth) -- br-int -- patch port -- br-tun
>> -- VXLAN tunnel port    |x|    VLAN interface with tunnel IP -- ethX
>> -- wire
>>
>> The Linux bridge is used to apply firewalling with netfilter.
>>
>> I set the MTU of the ethX, VLAN and br-tun interfaces to 9216 and of
>> the veth pair (qvo and qvb) to 9166. I did not change the VM guest MTU.
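The MTU layout described above can be sketched with `ip link` (editor's sketch; the interface names below are illustrative, not taken from this setup). The veth pair is set 50 bytes below the fabric maximum because the VXLAN encapsulation adds exactly that much: inner Ethernet (14) + VXLAN (8) + outer UDP (8) + outer IP (20) = 50 bytes, so 9166 + 50 = 9216.

```shell
# Physical NIC, VLAN interface carrying the tunnel IP, and tunnel bridge
# take the full fabric frame size (names are illustrative):
ip link set dev eth2 mtu 9216
ip link set dev eth2.100 mtu 9216
ip link set dev br-tun mtu 9216

# The veth pair toward the VM is 50 bytes smaller, leaving room for the
# VXLAN encapsulation (inner Eth 14 + VXLAN 8 + UDP 8 + outer IP 20):
ip link set dev qvb-example mtu 9166
ip link set dev qvo-example mtu 9166
```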
>> If I send a large ping (ping -s 9138 ...) between two VMs on different
>> compute nodes, jumbo frames are used: the VXLAN packet size on the
>> wire is 9234 octets. I see fragmented ICMP packets (1514 octets) going
>> through the VM tap interfaces, and the packets are reassembled between
>> the qbr Linux bridge and the wire.
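The sizes reported here are self-consistent: a 9138-byte ICMP payload plus its headers exactly fills the 9166-byte veth MTU, and the VXLAN encapsulation plus the outer VLAN tag accounts for the 9234-octet frame on the wire. A quick check of the arithmetic (editor's illustration):

```python
# Header sizes in bytes
ICMP, IP, ETH, UDP, VXLAN, VLAN = 8, 20, 14, 8, 8, 4

payload = 9138                             # ping -s 9138
inner_ip = payload + ICMP + IP             # 9166: exactly the veth MTU
inner_frame = inner_ip + ETH               # 9180: inner Ethernet frame
outer_ip = inner_frame + VXLAN + UDP + IP  # 9216: fits the fabric maximum
on_wire = outer_ip + ETH + VLAN            # 9234: the captured frame size
print(inner_ip, outer_ip, on_wire)  # 9166 9216 9234
```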
>> If I run the same test with a TCP flow (iperf), jumbo frames are not
>> used. I see large TCP packets (~65 KB) sent from the VM tap interface
>> thanks to TSO, but those packets are segmented down to 1500 octets on
>> the veth interfaces (large ~65 KB packets on qvb, 1514 on qvo).
>> If I change the MTU of both the VM tap interface and the guest's ethX
>> interface to 9166, I am able to get jumbo-frame-sized packets on the
>> wire.
>>
>> I saw that the veth Linux implementation had some bugs fixed in recent
>> kernels [2][3], so I tried replacing the veth pair between the Linux
>> bridge (qbr) and the OVS bridge (br-int) with an OVS internal port.
>> The compute node configuration becomes:
>>
>> VM -- tap -- qbr -- qvo (OVS internal port) -- br-int -- patch port --
>> br-tun -- VXLAN tunnel port    |x|    VLAN interface with tunnel IP --
>> ethX -- wire
>>
>> I ran the tests again, but the results are identical.
>> In this case, when I capture packets on the OVS internal port (qvo),
>> the packets are large (~65 KB). I see fragmented packets when I
>> capture on the VLAN interface (1564 octets) and on the physical
>> interface (1568 octets).
>> If I change the MTU of both the VM tap interface and the guest's ethX
>> interface to 9166, the iperf test fails: there are fragmentation
>> failures on the sending compute node. I see large packets (9180
>> octets) on the VM tap interface and the Linux bridge, but they are
>> dropped by the OVS internal port. Is it possible to change the MTU of
>> an internal port?
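On kernels of this era an OVS internal port appears as an ordinary Linux network device, so its MTU can usually be raised the same way as any other interface (editor's sketch; the port name is illustrative, and much later OVS releases added a dedicated `mtu_request` column for this):

```shell
# An OVS internal port is a regular netdev; raise its MTU like any other.
# The port name below is illustrative.
ip link set dev qvo-example mtu 9166

# Verify the new MTU took effect:
ip link show dev qvo-example
```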
>>
>>
>> In these tests I did not change the offloading configuration [4].
>> Do you think it is possible to use the offloading functions to exploit
>> jumbo frames on the physical fabric without impacting the MTU
>> configuration of the guest VM interfaces?
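The offload state referenced in [4] can be inspected and toggled with ethtool, which is useful for ruling TSO/GSO/GRO in or out of a test like the ones above (editor's sketch; interface name illustrative):

```shell
# Show the segmentation/receive offload settings for the NIC:
ethtool -k eth2 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload|generic-receive-offload'

# Offloads can be toggled per feature, e.g. to isolate TSO's effect:
ethtool -K eth2 tso on gso on gro on
```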
>>
>> [1] http://paste.openstack.org/show/54347/
>> [2] https://github.com/torvalds/linux/commit/82d8189826d54740607e6a240e602850ef62a07d
>> [3] https://github.com/torvalds/linux/commit/b69bbddfa136dc53ac319d58bc38b41f8aefffea
>> [4] http://paste.openstack.org/show/54749/
>>
>> Regards,
>> Édouard.
> _______________________________________________
> discuss mailing list
> discuss at openvswitch.org
> http://openvswitch.org/mailman/listinfo/discuss


