[ovs-dev] vhost-user: port is dropping packets in transmission

Ilya Maximets i.maximets at samsung.com
Wed Mar 23 10:09:33 UTC 2016


On 23.03.2016 13:03, Mauricio Vásquez wrote:
> It works,
> 
> Then the INSTALL.DPDK file is wrong: it says that QEMU version v2.1.0+ is required, and it even says that v1.6.2 would work using a different command line.
> 
> Is QEMU 2.5 also required for vhost-cuse?
> 
> Would you mind updating that file as you know the details better than me?
> 
> Thanks.

Actually, now I think that this is a bug.
We should mark the default queue pair as enabled by default to
support older versions of QEMU and vhost-cuse.

I'll try to fix OVS to change this behaviour. After that,
everything should work with older versions of QEMU.
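
Roughly what I have in mind, as a sketch only (the names follow
the current tx_q[].map remapping in lib/netdev-dpdk.c as I
remember it; the real patch may end up looking different):

    /* In new_device() in lib/netdev-dpdk.c (sketch).  QEMU before
     * 2.5 and vhost-cuse never send VHOST_USER_SET_VRING_ENABLE,
     * so vring_state_changed() is never called for them, tx_q[0].map
     * stays at -1, and every packet sent to the port is dropped.
     * Enabling the default queue pair up front keeps the
     * single-queue case working with those clients. */
    netdev->tx_q[0].map = 0;            /* Queue pair 0: enabled. */
    netdev_dpdk_remap_txqs(netdev);     /* Recompute the qid mapping. */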

Thanks for reporting.

Best regards, Ilya Maximets.

> 
> 
> On Wed, Mar 23, 2016 at 10:21 AM, Ilya Maximets <i.maximets at samsung.com> wrote:
> 
>     On 23.03.2016 12:19, Mauricio Vásquez wrote:
>     > Hi IIya,
>     >
>     > I'm using DPDK 2.2.0 and QEMU 2.2.1.
> 
>     You should use QEMU 2.5.
> 
>     > On Wed, Mar 23, 2016 at 10:18 AM, Ilya Maximets <i.maximets at samsung.com> wrote:
>     >
>     >     Which versions of DPDK and QEMU are you using with OVS 2.5?
>     >
>     >     On 23.03.2016 12:11, Mauricio Vásquez wrote:
>     >     > Dear all,
>     >     >
>     >     > I am testing a setup where two VMs have to communicate using vhost-user ports; it works with OvS 2.4, but it does not work with the master version or with 2.5.
>     >     >
>     >     > The setup is quite simple: a pair of VMs connected to OvS through vhost-user ports, with two flows configured to forward packets between them. Ping is used to test connectivity between the VMs (IPs and routing tables are configured).
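>     >     >
>     >     > The flows are plain port-to-port forwarding, along these lines (illustrative; the OpenFlow port numbers match the stats below):
>     >     >
>     >     >     ovs-ofctl add-flow br0 in_port=1,actions=output:2
>     >     >     ovs-ofctl add-flow br0 in_port=2,actions=output:1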
>     >     >
>     >     > The problem that I can see is that a vhost-user port is dropping packets on transmission:
>     >     >
>     >     > OFPST_PORT reply (xid=0x2): 3 ports
>     >     > port 1:
>     >     >   rx pkts=330, bytes=14172, drop=?, errs=0, frame=?, over=?, crc=?
>     >     >   tx pkts=0, bytes=0, drop=8, errs=?, coll=?
>     >     >
>     >     > port LOCAL:
>     >     >   rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
>     >     >   tx pkts=0, bytes=0, drop=0, errs=0, coll=0
>     >     >
>     >     > port 2:
>     >     >   rx pkts=8, bytes=648, drop=?, errs=0, frame=?, over=?, crc=?
>     >     >   tx pkts=0, bytes=0, drop=330, errs=?, coll=?
>     >     >
>     >     > I found that somebody had a similar problem: http://openvswitch.org/pipermail/dev/2016-March/067152.html. I tried changing pmd-cpu-mask, but the problem is always there. I did some debugging, and the problem is that qid is always -1 in the function __netdev_dpdk_vhost_send.
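>     >     >
>     >     > For reference, this is (simplified) the path where the drop happens, as far as I can tell from reading OVS 2.5's netdev-dpdk.c:
>     >     >
>     >     >     /* __netdev_dpdk_vhost_send(): the requested txq is remapped
>     >     >      * through tx_q[].map, which only becomes >= 0 after QEMU
>     >     >      * sends VHOST_USER_SET_VRING_ENABLE for that queue.  With
>     >     >      * QEMU 2.2 that message never arrives, so the map stays -1
>     >     >      * and the packets are counted as tx_dropped. */
>     >     >     qid = vhost_dev->tx_q[qid % vhost_dev->real_n_txq].map;
>     >     >
>     >     >     if (OVS_UNLIKELY(!is_vhost_running(virtio_dev) || qid == -1)) {
>     >     >         rte_spinlock_lock(&vhost_dev->stats_lock);
>     >     >         vhost_dev->stats.tx_dropped += cnt;
>     >     >         rte_spinlock_unlock(&vhost_dev->stats_lock);
>     >     >         goto out;
>     >     >     }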
>     >     >
>     >     > Here it is some extra debug info:
>     >     >
>     >     > ovs-vswitchd.log:
>     >     > http://pastebin.com/2CUyjGED
>     >     >
>     >     > ovs-vsctl show
>     >     > Bridge "br0"
>     >     >     Port "br0"
>     >     >         Interface "br0"
>     >     >             type: internal
>     >     >     Port "vhost-user-2"
>     >     >         Interface "vhost-user-2"
>     >     >             type: dpdkvhostuser
>     >     >     Port "vhost-user-1"
>     >     >         Interface "vhost-user-1"
>     >     >             type: dpdkvhostuser
>     >     >
>     >     > ovs-appctl dpctl/show
>     >     > netdev@ovs-netdev:
>     >     > lookups: hit:411 missed:1 lost:0
>     >     >   flows: 1
>     >     >  port 0: ovs-netdev (internal)
>     >     >  port 1: vhost-user-1 (dpdkvhostuser: configured_rx_queues=1, configured_tx_queues=1, requested_rx_queues=1, requested_tx_queues=9)
>     >     >  port 2: br0 (tap)
>     >     >  port 3: vhost-user-2 (dpdkvhostuser: configured_rx_queues=1, configured_tx_queues=1, requested_rx_queues=1, requested_tx_queues=9)
>     >     >
>     >     > ovs-appctl dpif-netdev/pmd-rxq-show
>     >     > pmd thread numa_id 0 core_id 0:
>     >     >  port: vhost-user-1 queue-id: 0
>     >     >  port: vhost-user-2 queue-id: 0
>     >     >
>     >     > Thank you very much,
>     >     >
>     >     > Mauricio V.