[ovs-discuss] network bandwidth in Openstack when using OVS+VLAN

Luca Giraudo lgiraudo at nicira.com
Tue Jul 30 12:34:17 UTC 2013


Can you monitor the CPU in the receiving node? RX is the difficult part and
in general there will be some process taking 100% of a CPU core.

Luca
On Jul 30, 2013 1:47 AM, "Li, Chen" <chen.li at intel.com> wrote:

> For the 6-pair run, per-CPU utilization over a 1-second interval:
>
> 04:11:08 PM  CPU   %usr  %nice   %sys  %iowait   %irq  %soft  %steal  %guest   %idle
> 04:11:36 PM  all   0.42   0.00   4.22     0.03   0.00   8.56    0.00    7.82   78.95
> 04:11:36 PM    0   0.00   0.00   7.69     0.00   0.00  13.19    0.00   13.19   65.93
> 04:11:36 PM    1   0.00   0.00   8.42     0.00   0.00  18.95    0.00    8.42   64.21
> 04:11:36 PM    2   1.02   0.00   6.12     0.00   0.00  13.27    0.00   17.35   62.24
> 04:11:36 PM    3   0.00   0.00  10.10     0.00   0.00  24.24    0.00   10.10   55.56
> 04:11:36 PM    4   0.00   0.00  10.42     0.00   0.00  30.21    0.00    4.17   55.21
> 04:11:36 PM    5   0.00   0.00  10.64     0.00   0.00  17.02    0.00   10.64   61.70
> 04:11:36 PM    6   0.00   0.00   9.68     0.00   0.00  18.28    0.00   16.13   56.99
> 04:11:36 PM    7   0.00   0.00   9.78     0.00   0.00  27.17    0.00    4.35   58.70
> 04:11:36 PM    8   1.08   0.00  10.75     0.00   0.00  20.43    0.00   10.75   56.99
> 04:11:36 PM    9   1.04   0.00   9.38     0.00   0.00  17.71    0.00   11.46   60.42
> 04:11:36 PM   10   0.99   0.00   7.92     0.00   0.00   9.90    0.00   13.86   67.33
> 04:11:36 PM   11   1.04   0.00   4.17     0.00   0.00   5.21    0.00   27.08   62.50
> 04:11:36 PM   12   0.00   0.00   3.09     0.00   0.00  10.31    0.00   14.43   72.16
> 04:11:36 PM   13   0.00   0.00  11.00     0.00   0.00  15.00    0.00   18.00   56.00
> 04:11:36 PM   14   0.00   0.00   6.45     0.00   0.00  17.20    0.00   15.05   61.29
> 04:11:36 PM   15   0.00   0.00   7.29     0.00   0.00  19.79    0.00   21.88   52.08
> 04:11:36 PM   16   0.00   0.00   0.00     0.00   0.00   0.00    0.00    0.00  100.00
> 04:11:36 PM   17   0.00   0.00   1.00     0.00   0.00   0.00    0.00    0.00   99.00
> 04:11:36 PM   18   0.00   0.00   0.00     0.00   0.00   0.00    0.00    0.00  100.00
> 04:11:36 PM   19   0.00   0.00   0.00     0.00   0.00   0.00    0.00    0.00  100.00
> 04:11:36 PM   20   6.00   0.00   0.00     0.00   0.00   0.00    0.00   16.00   78.00
> 04:11:36 PM   21   0.00   0.00   0.00     0.00   0.00   0.00    0.00    0.00  100.00
> 04:11:36 PM   22   0.00   0.00   1.00     0.00   0.00   0.00    0.00    0.00   99.00
> 04:11:36 PM   23   0.00   0.00   0.00     0.00   0.00   0.00    0.00    0.00  100.00
> 04:11:36 PM   24   1.00   0.00   0.00     0.00   0.00   0.00    0.00    0.00   99.00
> 04:11:36 PM   25   0.00   0.00   1.00     0.00   0.00   2.00    0.00    0.00   97.00
> 04:11:36 PM   26   4.95   0.00   0.99     0.00   0.00   0.00    0.00    0.00   94.06
> 04:11:36 PM   27   0.00   0.00   0.00     0.00   0.00   0.00    0.00    0.00  100.00
> 04:11:36 PM   28   0.00   0.00   0.00     0.00   0.00   0.00    0.00    0.00  100.00
> 04:11:36 PM   29   0.00   0.00   0.00     0.00   0.00   0.00    0.00    0.00  100.00
> 04:11:36 PM   30   0.00   0.00   0.99     0.00   0.00   0.00    0.00   10.89   88.12
> 04:11:36 PM   31   0.00   0.00   0.99     0.00   0.00   0.00    0.00    7.92   92.08
>
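[Editor's note] Tables like the one above come from sysstat's mpstat. As a minimal sketch (the field positions assume the column layout shown above, with the CPU number in field 3 and %soft in field 9; they vary across sysstat versions), the softirq-heavy cores can be picked out like this:

```shell
# Flag CPUs whose %soft (softirq time, field 9 in the layout above)
# exceeds a threshold; input is `mpstat -P ALL <interval>` output.
flag_soft_heavy() {
    awk -v thr="$1" '$3 ~ /^([0-9]+|all)$/ && $9 + 0 > thr { print $3, $9 }'
}

# Example with lines in the same shape as the table above:
flag_soft_heavy 20 <<'EOF'
04:11:36 PM    3   0.00   0.00  10.10     0.00   0.00  24.24    0.00   10.10   55.56
04:11:36 PM    4   0.00   0.00  10.42     0.00   0.00  30.21    0.00    4.17   55.21
04:11:36 PM    5   0.00   0.00  10.64     0.00   0.00  17.02    0.00   10.64   61.70
EOF
# prints:
# 3 24.24
# 4 30.21
```

For live sampling, something like `mpstat -P ALL 1 | flag_soft_heavy 20` would do the same against a running node.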
>
> From: discuss-bounces at openvswitch.org [mailto:discuss-bounces at openvswitch.org] On Behalf Of Li, Chen
> Sent: Tuesday, July 30, 2013 4:42 PM
> To: Luca Giraudo
> Cc: discuss at openvswitch.org
> Subject: Re: [ovs-discuss] network bandwidth in Openstack when using OVS+VLAN
>
>
> * are you using KVM? If yes, can you verify that your VMs are using
> vhost_net, please? In the KVM process related to the VM there should be a
> vhost=on parameter. If not, modprobe vhost_net.
>
> Yes.
>
> And after enabling vhost_net, I get higher bandwidth now:
>
> One pair: increased from 1.18 Gbits/sec => 2.38 Gbits/sec
> Six pairs: increased from 4.25 Gbits/sec => 6.98 Gbits/sec
>
> * I saw different throughput with different OS in the VM. Did you try
> different VMs?
>
> No, but I would guess a different OS in the VM causes a performance
> difference more like 5 Gbit/sec vs. 6 Gbit/sec, not 2 Gbit/sec vs. 6 Gbit/sec.
>
> * what bandwidth do you get between two VMs in the same compute node?
>
> 14.7 Gbits/sec
>
> * can you monitor the CPU usage in the compute node and look out for big
> CPU consumers, please?
>
> I'm running on a SNB-EP server, so I have 32 cores (HT enabled) per
> compute node.
>
> (Only observed on the compute node that runs the iperf client, i.e. sends
> packets out.)
>
>
> 1. The average CPU%
>
> Total average CPU% = 6.66 %
>
> Average:   CPU   %usr  %nice   %sys  %iowait   %irq  %soft  %steal  %guest   %idle
> Average:   all   1.66   0.00   1.38     0.05   0.00   1.83    0.00    1.74   93.34
>
> Per-core, the highest CPU% = 21.65%:
>
> Average:   CPU   %usr  %nice   %sys  %iowait   %irq  %soft  %steal  %guest   %idle
> Average:     0   0.30   0.00   5.29     0.02   0.00  11.29    0.00    4.75   78.35
>
> 2. CPU% timeline: only the CPU core with the highest CPU%
>
> 02:23:29 PM  CPU   %usr  %nice    %sys  %iowait   %irq  %soft  %steal  %guest    %idle
> 02:23:29 PM    0   0.00   0.00    2.06     0.00   0.00   4.12    0.00    0.00    93.81
> 02:23:30 PM    0   0.00   0.00   25.77     0.00   0.00  67.01    0.00    0.00     7.22
> 02:23:31 PM    0   0.00   0.00   26.04     0.00   0.00  59.38    0.00    0.00    14.58
> 02:23:32 PM    0   0.00   0.00   23.00     0.00   0.00  52.00    0.00    0.00    25.00
> 02:23:33 PM    0   0.00   0.00   28.28     0.00   0.00  52.53    0.00    0.00    19.19
> 02:23:34 PM    0   0.00   0.00   10.89     0.00   0.00  19.80    0.00    0.00    69.31
> 02:23:35 PM    0   1.00   0.00    0.00     0.00   0.00   0.00    0.00    0.00    99.00
> 02:23:36 PM    0   0.00   0.00    0.00     0.00   0.00   0.00    0.00    0.00   100.00
> 02:23:37 PM    0   0.00   0.00    0.00     0.00   0.00   0.00    0.00    1.00    99.00
> 02:23:38 PM    0   0.99   0.00    0.00     0.00   0.00   0.00    0.00    0.00    99.01
> 02:23:39 PM    0   0.00   0.00    1.00     0.00   0.00   0.00    0.00    0.00    99.00
> 02:23:40 PM    0   1.01   0.00    0.00     0.00   0.00   0.00    0.00    0.00    98.99
> 02:23:41 PM    0   2.06   0.00   15.46     0.00   0.00  36.08    0.00    0.00    46.39
> 02:23:42 PM    0   1.98   0.00   12.87     0.00   0.00  34.65    0.00    0.00    50.50
> 02:23:43 PM    0   0.00   0.00    0.00     0.00   0.00   0.00    0.00   15.15    84.85
> 02:23:44 PM    0   0.99   0.00    0.99     0.00   0.00   0.00    0.00    0.00    98.02
> 02:23:45 PM    0   0.00   0.00    2.00     0.00   0.00   0.00    0.00   19.00    80.00
> 02:23:46 PM    0   0.99   0.00    0.00     0.00   0.00   0.00    0.00   23.76    75.25
> 02:23:47 PM    0   0.00   0.00   24.49     0.00   0.00  40.82    0.00    0.00    34.69
> 02:23:48 PM    0   0.00   0.00   23.96     0.00   0.00  60.42    0.00    0.00    15.62
> 02:23:49 PM    0   0.00   0.00   23.96     0.00   0.00  60.42    0.00    0.00    15.62
> 02:23:50 PM    0   0.00   0.00    6.12     0.00   0.00  18.37    0.00    0.00    75.51
> 02:23:51 PM    0   1.00   0.00    0.00     0.00   0.00   0.00    0.00    0.00    99.00
>
>
> From: Luca Giraudo [mailto:lgiraudo at nicira.com]
> Sent: Tuesday, July 30, 2013 1:37 PM
> To: Li, Chen
> Cc: discuss at openvswitch.org; Gurucharan Shetty
> Subject: Re: [ovs-discuss] network bandwidth in Openstack when using OVS+VLAN
>
>
> A few more things to check:
>
> * are you using KVM? If yes, can you verify that your VMs are using
> vhost_net, please? In the KVM process related to the VM there should be a
> vhost=on parameter. If not, modprobe vhost_net.
>
> * I saw different throughput with different OS in the VM. Did you try
> different VMs?
>
> * what bandwidth do you get between two VMs in the same compute node?
>
> * can you monitor the CPU usage in the compute node and look out for big
> CPU consumers, please?
>
> Thanks,
> Luca
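[Editor's note] Luca's vhost check can be sketched as a small shell helper; the function name and the qemu command line below are illustrative examples, not taken from this thread:

```shell
# Report whether a given qemu/KVM command line enables vhost_net for
# its tap netdev, i.e. carries the vhost=on parameter Luca mentions.
has_vhost() {
    case "$1" in
        *vhost=on*) echo "vhost=on present" ;;
        *)          echo "vhost=on missing" ;;
    esac
}

# Hypothetical qemu-kvm command line fragment:
has_vhost "qemu-system-x86_64 -netdev tap,id=hostnet0,fd=24,vhost=on,vhostfd=25"
# prints: vhost=on present
```

On a real compute node, the live command lines come from something like `ps -eo args | grep '[q]emu'`, and a missing module is loaded with `modprobe vhost_net` (the guest interface then needs to be restarted to pick it up).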
>
> On Jul 29, 2013 6:26 PM, "Li, Chen" <chen.li at intel.com> wrote:
>
> * Is the VM ethernet driver a para-virtual driver? Para-virtual drivers
> give a good performance boost.
>
> I used the OpenStack default parameters; it is virtio, and virtio should
> perform well:
>
>     <interface type='bridge'>
>       <mac address='fa:16:3e:ca:4a:86'/>
>       <source bridge='br-int'/>
>       <virtualport type='openvswitch'>
>         <parameters interfaceid='3213dbec-f2ea-462f-818b-e07b76a1752c'/>
>       </virtualport>
>       <target dev='tap3213dbec-f2'/>
>       <model type='virtio'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>     </interface>
>
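[Editor's note] Libvirt XML like the above is typically obtained with `virsh dumpxml <domain>`. As a small sketch against saved output (the helper name is ours; the sample lines are from the config above), the NIC model can be extracted like this:

```shell
# Pull the NIC model out of saved `virsh dumpxml` output.
nic_model() {
    sed -n "s/.*<model type='\([a-z0-9_-]*\)'\/>.*/\1/p"
}

# Sample lines from the interface definition above:
nic_model <<'EOF'
      <target dev='tap3213dbec-f2'/>
      <model type='virtio'/>
EOF
# prints: virtio
```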
>
> * Is TSO ON in the VM and the Hypervisor?
>
> The VM:
>
> Features for eth0:
> rx-checksumming: off [fixed]
> tx-checksumming: on
>         tx-checksum-ipv4: off [fixed]
>         tx-checksum-ip-generic: on
>         tx-checksum-ipv6: off [fixed]
>         tx-checksum-fcoe-crc: off [fixed]
>         tx-checksum-sctp: off [fixed]
> scatter-gather: on
>         tx-scatter-gather: on
>         tx-scatter-gather-fraglist: on
> tcp-segmentation-offload: on
>         tx-tcp-segmentation: on
>         tx-tcp-ecn-segmentation: on
>         tx-tcp6-segmentation: on
> udp-fragmentation-offload: on
> generic-segmentation-offload: on
> generic-receive-offload: on
> large-receive-offload: off [fixed]
> rx-vlan-offload: off [fixed]
> tx-vlan-offload: off [fixed]
> ntuple-filters: off [fixed]
> receive-hashing: off [fixed]
> highdma: on [fixed]
> rx-vlan-filter: on [fixed]
> vlan-challenged: off [fixed]
> tx-lockless: off [fixed]
> netns-local: off [fixed]
> tx-gso-robust: off [fixed]
> tx-fcoe-segmentation: off [fixed]
> fcoe-mtu: off [fixed]
> tx-nocache-copy: on
> loopback: off [fixed]
> rx-fcs: off [fixed]
> rx-all: off [fixed]
>
> The hypervisor:
>
> Offload parameters for eth4:
> rx-checksumming: on
> tx-checksumming: on
> scatter-gather: on
> tcp-segmentation-offload: on
> udp-fragmentation-offload: off
> generic-segmentation-offload: on
> generic-receive-offload: on
> large-receive-offload: off
> rx-vlan-offload: on
> tx-vlan-offload: on
> ntuple-filters: off
> receive-hashing: on
>
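[Editor's note] Feature lists like the above come from `ethtool -k <iface>`, run in the guest and on the hypervisor. A minimal sketch for checking the TSO line from saved output (the helper name is ours):

```shell
# Extract the tcp-segmentation-offload state from `ethtool -k` output.
tso_state() {
    awk -F': ' '/^tcp-segmentation-offload/ { print $2 }'
}

# Sample lines in the same shape as the hypervisor output above:
tso_state <<'EOF'
generic-segmentation-offload: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
EOF
# prints: on
```

If TSO turned out to be off, it can be enabled (where the driver allows it) with `ethtool -K <iface> tso on`.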
>
> * What throughput do you get while using Linux bridge instead of OVS?
>
> Currently, I don't have a Linux bridge environment.
>
> But I remember that in virtio tests, when I created a bridge by hand and
> assigned it to an instance, I could always get bandwidth near the hardware
> limit if I had enough threads.
>
> * Are you using tunnels? If you are using a tunnel like GRE, you will
> see a throughput drop.
>
> No, I'm working with Quantum + OVS + VLAN.
>
> Thanks.
> -chen
>
>
> From: Gurucharan Shetty [mailto:shettyg at nicira.com]
> Sent: Tuesday, July 30, 2013 12:06 AM
> To: Li, Chen
> Cc: discuss at openvswitch.org
> Subject: Re: [ovs-discuss] network bandwidth in Openstack when using OVS+VLAN
>
> There could be multiple reasons for the low throughput. I would probably
> look at the following.
>
> * Is the VM ethernet driver a para-virtual driver? Para-virtual drivers
> give a good performance boost.
>
> * Is TSO ON in the VM and the Hypervisor?
>
> * What throughput do you get while using Linux bridge instead of OVS?
>
> * Are you using tunnels? If you are using a tunnel like GRE, you will see
> a throughput drop.
>
>
> On Mon, Jul 29, 2013 at 1:48 AM, Li, Chen <chen.li at intel.com> wrote:
>
> Hi list,
>
> I'm a new user of OVS.
>
> I installed OpenStack Grizzly, using Quantum + OVS + VLAN for networking.
>
> I have two compute nodes with 10 Gb NICs; the bandwidth between them is
> about 8.49 Gbits/sec (tested by iperf).
>
> I started one instance on each compute node:
>
> instance-a => compute1
> instance-b => compute2
>
> The bandwidth between these two virtual machines is only 1.18 Gbits/sec.
>
> Then I started 6 instances on each compute node:
>
> (instance-a => compute1) ----- iperf -----> (instance-b => compute2)
> (instance-c => compute1) ----- iperf -----> (instance-d => compute2)
> (instance-e => compute1) ----- iperf -----> (instance-f => compute2)
> (instance-g => compute1) ----- iperf -----> (instance-h => compute2)
> (instance-i => compute1) ----- iperf -----> (instance-j => compute2)
> (instance-k => compute1) ----- iperf -----> (instance-l => compute2)
>
> The total bandwidth is only 4.25 Gbits/sec.
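[Editor's note] For scale: the numbers above mean per-pair throughput actually drops when pairs are added, since six pairs at 4.25 Gbit/s aggregate is well under six times the single-pair 1.18 Gbit/s. A quick check:

```shell
# Per-pair share of the 6-pair aggregate, vs. the single-pair result.
awk 'BEGIN { printf "per pair: %.2f Gbit/s (single pair was 1.18)\n", 4.25 / 6 }'
# prints: per pair: 0.71 Gbit/s (single pair was 1.18)
```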
>
>
> Does anyone know why the performance is this low?
>
> Thanks.
> -chen
>
>
> _______________________________________________
> discuss mailing list
> discuss at openvswitch.org
> http://openvswitch.org/mailman/listinfo/discuss

>
>
>
>

