[ovs-discuss] MTU/fragmentation issue in 2.0.1/openstack icehouse using GRE

Tom Christensen pavera at live.com
Sat May 17 02:11:45 UTC 2014



> From: jesse at nicira.com
> Date: Fri, 16 May 2014 12:11:01 -0700
> Subject: Re: [ovs-discuss] MTU/fragmentation issue in 2.0.1/openstack icehouse using GRE
> To: pavera at live.com
> CC: discuss at openvswitch.org
> 
> On Thu, May 15, 2014 at 8:42 PM, Tom Christensen <pavera at live.com> wrote:
> >
> >
> >> From: jesse at nicira.com
> >> Date: Thu, 15 May 2014 14:43:12 -0700
> >> Subject: Re: [ovs-discuss] MTU/fragmentation issue in 2.0.1/openstack
> >> icehouse using GRE
> >> To: pavera at live.com
> >> CC: discuss at openvswitch.org
> >>
> >> On Wed, May 14, 2014 at 8:18 PM, Tom Christensen <pavera at live.com> wrote:
> >> > I'm seeing an issue in openstack icehouse running on ubuntu 14.04, ovs
> >> > version 2.0.1, package version 2.0.1+git20140120-0ubuntu2 where gre
> >> > packets
> >> > cannot pass between 2 ovs bridges (br-int and br-tun) when the original
> >> > source packet is larger than 1438 bytes (mtu set to 1500 everywhere).
> >> > I've
> >> > confirmed that openstack havana running on ubuntu 12.04 (ovs 1.10.2)
> >> > does
> >> > not have this same issue, so it feels like a regression. I don't know
> >> > where
> >> > this should be reported, or if it's been fixed in subsequent versions
> >> > of openvswitch.
> >> >
> >> > I also am pretty new to openvswitch, so any help in really nailing down
> >> > exactly what is going on inside these bridges would be greatly
> >> > appreciated.
> >>
> >> Tunnel fragmentation isn't really supported well in OVS, and even in
> >> cases where it does work, it performs poorly. It is best if you either
> >> increase the MTU on the physical network or decrease it in the sending
> >> VMs.
> >
> > To be clear, with the MTU set at 1400 in the VM, the size of the packet
> > that causes the problem remains the same.  If you ping -s 1430 <vm on
> > different host>, it will work: the packet will be fragmented on the tap
> > interface and will reach the other host and vm.  If you ping -s 1431
> > <vm on different host>, it will be fragmented on the tap interface but
> > will not leave the source host, arrive at the destination host, or
> > arrive at the destination vm.
> 
> This doesn't make a lot of sense to me because the only difference
> between these two cases is that the second fragment will be slightly
> larger but still smaller than the first.
> 
> Have you run tcpdump on each interface in the path to confirm that
> the packet sizes are what you expect?

I have, and it appears the qbr/qvo/qvb interfaces (the Linux bridge interfaces that neutron creates for its security groups) are the culprit. On the tap interface the packet is fragmented, but those interfaces appear to be reassembling the fragments: on them, all of the packets are 1439 in size (with a 1431 ping size).
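For reference, the captures on each hop were of roughly this form; the interface name is just a placeholder for whatever device neutron actually generated:

    # Print link-level headers (which include packet lengths) for ICMP.
    # Running the same capture on the tap, the qbr/qvb/qvo devices, and
    # the physical NIC shows where fragments get reassembled.
    tcpdump -n -e -i qvoXXXXXXXX-XX icmp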
Today we were able to pass traffic by decreasing the MTU on those interfaces manually; however, I can't find a config option in OpenStack to set the MTU of those interfaces. After having success with that, we were able to increase the MTU on the physical network and resolve the issue. Previously I was testing in a VMware Workstation environment where I didn't have access to set the physical MTU. So, count this one as resolved.
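For anyone hitting the same problem, a rough sketch of the two workarounds described above; all interface names are placeholders for the qbr/qvb/qvo devices neutron generated and for your actual physical uplink:

    # Workaround 1: manually drop the MTU on the per-port Linux bridge
    # plumbing so the reassembled packets still fit through the GRE tunnel.
    ip link set dev qbrXXXXXXXX-XX mtu 1400
    ip link set dev qvbXXXXXXXX-XX mtu 1400
    ip link set dev qvoXXXXXXXX-XX mtu 1400

    # Workaround 2 (what finally resolved it for us): raise the physical
    # network MTU so a 1500-byte tenant packet plus GRE overhead fits;
    # 1600 here is an example value, not the exact number from our setup.
    ip link set dev eth1 mtu 1600

Note that ip link changes don't survive a reboot, so they'd need to go into the distro's network configuration to persist.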

 		 	   		  