[ovs-dev] Why is ovs DPDK much worse than ovs in my test case?

Yi Yang (杨燚)-云服务集团 yangyi01 at inspur.com
Wed Jul 10 03:58:59 UTC 2019


Hi, all

 

I just use OVS as a static router in my test case. OVS runs in a Vagrant VM
and the Ethernet interfaces use the virtio driver. I create two OVS bridges,
add one Ethernet interface to each, and connect the two bridges with a patch
port; only the default OpenFlow rule is installed.

 

table=0, priority=0 actions=NORMAL
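
(For reference, this is the rule reported by a flow dump on either bridge,
e.g. with a command along these lines:)

    ovs-ofctl dump-flows br-int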

 

    Bridge br-int

        Port patch-br-ex

            Interface patch-br-ex

                type: patch

                options: {peer=patch-br-int}

        Port br-int

            Interface br-int

                type: internal

        Port "dpdk0"

            Interface "dpdk0"

                type: dpdk

                options: {dpdk-devargs="0000:00:08.0"}

    Bridge br-ex

        Port "dpdk1"

            Interface "dpdk1"

                type: dpdk

                options: {dpdk-devargs="0000:00:09.0"}

        Port patch-br-int

            Interface patch-br-int

                type: patch

                options: {peer=patch-br-ex}

        Port br-ex

            Interface br-ex

                type: internal
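
For reference, the DPDK-side setup above can be reproduced with commands
along the following lines (a sketch, assuming DPDK was already initialized
via other_config:dpdk-init=true; the non-DPDK setup is the same minus the
datapath_type and the dpdk port options):

    ovs-vsctl add-br br-int -- set bridge br-int datapath_type=netdev
    ovs-vsctl add-br br-ex -- set bridge br-ex datapath_type=netdev
    ovs-vsctl add-port br-int dpdk0 -- set Interface dpdk0 type=dpdk \
        options:dpdk-devargs=0000:00:08.0
    ovs-vsctl add-port br-ex dpdk1 -- set Interface dpdk1 type=dpdk \
        options:dpdk-devargs=0000:00:09.0
    ovs-vsctl add-port br-int patch-br-ex -- set Interface patch-br-ex \
        type=patch options:peer=patch-br-int
    ovs-vsctl add-port br-ex patch-br-int -- set Interface patch-br-int \
        type=patch options:peer=patch-br-ex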

 

But when I ran iperf to benchmark performance, the results shocked me.

 

For OVS without DPDK, the result is:

 

vagrant@client1:~$ iperf -t 60 -i 10 -c 192.168.230.101

------------------------------------------------------------

Client connecting to 192.168.230.101, TCP port 5001

TCP window size: 85.0 KByte (default)

------------------------------------------------------------

[  3] local 192.168.200.101 port 53900 connected with 192.168.230.101 port 5001

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0-10.0 sec  1.05 GBytes   905 Mbits/sec

[  3] 10.0-20.0 sec  1.02 GBytes   877 Mbits/sec

[  3] 20.0-30.0 sec  1.07 GBytes   922 Mbits/sec

[  3] 30.0-40.0 sec  1.08 GBytes   927 Mbits/sec

[  3] 40.0-50.0 sec  1.06 GBytes   914 Mbits/sec

[  3] 50.0-60.0 sec  1.07 GBytes   922 Mbits/sec

[  3]  0.0-60.0 sec  6.37 GBytes   911 Mbits/sec

vagrant@client1:~$

 

For OVS-DPDK, the bandwidth is only about 45 Mbits/sec. Why? I really don’t
understand what happened.

 

vagrant@client1:~$ iperf -t 60 -i 10 -c 192.168.230.101

------------------------------------------------------------

Client connecting to 192.168.230.101, TCP port 5001

TCP window size: 85.0 KByte (default)

------------------------------------------------------------

[  3] local 192.168.200.101 port 53908 connected with 192.168.230.101 port 5001

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0-10.0 sec  54.6 MBytes  45.8 Mbits/sec

[  3] 10.0-20.0 sec  55.5 MBytes  46.6 Mbits/sec

[  3] 20.0-30.0 sec  52.5 MBytes  44.0 Mbits/sec

[  3] 30.0-40.0 sec  53.6 MBytes  45.0 Mbits/sec

[  3] 40.0-50.0 sec  54.0 MBytes  45.3 Mbits/sec

[  3] 50.0-60.0 sec  53.9 MBytes  45.2 Mbits/sec

[  3]  0.0-60.0 sec   324 MBytes  45.3 Mbits/sec

vagrant@client1:~$

 

By the way, I tried pinning physical cores to the hypervisor threads that
correspond to the OVS PMD threads, but it had hardly any effect on
performance.

 

  PID USER     PR  NI    VIRT    RES    SHR S %CPU %MEM    TIME+ COMMAND P

16303 yangyi   20   0 9207120 209700 107500 R 99.9  0.1 63:02.37 EMT-1   1

16304 yangyi   20   0 9207120 209700 107500 R 99.9  0.1 69:16.16 EMT-2   2
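
For reference, the pinning was done roughly as follows (a sketch: the 0x6
PMD mask and host core IDs are examples; the PIDs are the EMT threads shown
above):

    # inside the VM: restrict the OVS PMD threads to specific vCPUs (example mask)
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
    # on the host: pin the VM's EMT threads to dedicated physical cores
    taskset -pc 1 16303
    taskset -pc 2 16304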


