[ovs-discuss] Xen/OVS - VLAN offloading

Eugene Istomin e.istomin at edss.ee
Fri May 31 08:17:00 UTC 2013


> This is dependent on the hypervisor and not being restricted by OVS.
> However, the direct benefits of vlan offloading are very minimal and
> are really only useful in relation to enabling other offloads.

In my testbed the results are not minimal (the "untagged by OVS" case has ~2 times more bandwidth than the "untagged by VM" case)

All interfaces have MTU=9000
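A quick way to confirm the MTU along the path (a sketch; the interface names below are the ones from this testbed, and the sysfs paths assume Linux):

```shell
# Print the MTU of each interface on the path VM -> netback -> bond members.
# Interface names are from this testbed; adjust to your own setup.
for ifc in vlannet1 vlannet2 vif1.0 vif2.0 eth0; do
    mtu_file="/sys/class/net/$ifc/mtu"
    [ -r "$mtu_file" ] && echo "$ifc mtu=$(cat "$mtu_file")"
done
```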


1) untagged by the VM (OVS port configured with "trunks: [1002]")

#atop from VM
NET | transport    | tcpi   22733 | tcpo   80191 | udpi       0 | udpo       4 |
NET | eth0    ---- | pcki   22736 | pcko   80243 | si   12 Mbps | so 5777 Mbps |
NET | vlan100 ---- | pcki   22738 | pcko   80245 | si 9495 Kbps | so 5775 Mbps |

#atop from Dom0
CPU | sys      57% | irq      39%   
cpu | sys      58% | irq      41%  
..
NET | vif1.0  ---- |  pcki  227727 | pcko  797502  | si   10 Mbps |  so 5743 Mbps 
NET | vif2.0  ---- |  pcki  797748 | pcko  227717  | si 5736 Mbps |  so   12 Mbps 



2) untagged by OVS (OVS port configured with "tag: 1002")

#atop from VM
NET | transport    | tcpi    8495 | tcpo  163131 | udpi       0 | udpo       0
NET | eth1    ---- | pcki    8495 | pcko   24718 | si 4485 Kbps | so   11 Gbps

#atop from Dom0
CPU | sys      96% | irq       4%  
cpu | sys      96% | irq       4% 
..
NET | vif1.1  ---- |  pcki   75974 | pcko  247608  | si 3160 Kbps |  so   11 Gbps
NET | vif2.1  ---- |  pcki  247616 | pcko   75971  | si   11 Gbps |  so 4011 Kbps 
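The two variants above differ only in how the OVS port is configured. A minimal sketch of switching between the two settings (the port name vif1.0 is taken from the Dom0 output above; adjust to your setup):

```shell
# Variant 1: the VM tags/untags; OVS only lets VLAN 1002 through the port
ovs-vsctl set port vif1.0 trunks=1002

# Variant 2: OVS tags/untags on the VM's behalf (access port on VLAN 1002)
ovs-vsctl clear port vif1.0 trunks
ovs-vsctl set port vif1.0 tag=1002
```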


As you can see, the second variant saturates netback (sys ~96%) in Dom0, while
the first variant shows much higher irq load and higher pcki/pcko counts in Dom0.
Is this behavior correct?
-- 
Best regards,
Eugene Istomin



On Friday, May 31, 2013 02:57:27 PM Jesse Gross wrote:
> On Fri, May 31, 2013 at 12:52 PM, Eugene Istomin <e.istomin at edss.ee> wrote:
> > Hello,
> > 
> > I'm trying to understand Xen/OVS VLAN offloading.
> > 
> > In my testbed:
> > 
> > #ovs-vsctl show
> >     Bridge vlannet
> >         Port "vif5.0"
> >             tag: 1002
> >             Interface "vif5.0"
> >         Port vlannet-bond
> >             Interface "vlannet2"
> >             Interface "vlannet1"
> >         Port vlannet
> >             Interface vlannet
> >                 type: internal
> >         Port "vif3.0"
> >             tag: 1002
> >             Interface "vif3.0"
> >     ovs_version: "1.10.0"
> > 
> > 1) Xen Dom0 HW interface ->
> > ethtool -k vlannet1
> > ..
> > rx-vlan-offload: on
> > tx-vlan-offload: on
> > rx-vlan-filter: on [fixed]
> > ..
> > 
> > 2) OVS system interface ->
> > ethtool -k ovs-system
> > ..
> > rx-vlan-offload: off [fixed]
> > tx-vlan-offload: on
> > rx-vlan-filter: off [fixed]
> > ..
> > 
> > 3) DomU netback interface ->
> > ethtool -k ovs-system
> > ..
> > rx-vlan-offload: off [fixed]
> > tx-vlan-offload: off [fixed]
> > rx-vlan-filter: off [fixed]
> > ..
> > 
> > As I see it, VLAN offloading is partially implemented in OVS and not
> > implemented in Xen, which means VLAN-tagged traffic inside a VM will add
> > latency.
> > 
> > Does anyone have info about OVS->VM VLAN offloading configuration?
> 
> This is dependent on the hypervisor and not being restricted by OVS.
> However, the direct benefits of vlan offloading are very minimal and
> are really only useful in relation to enabling other offloads.


