[ovs-discuss] Issue with offloading OVS flows into Mellanox-4 cards

Roi Dayan roid at mellanox.com
Sun Jul 30 10:03:51 UTC 2017


Thanks for the info.



From: Sugu Deepthy [mailto:deepthysugesh at gmail.com]
Sent: Thursday, July 27, 2017 12:58 PM
To: Roi Dayan <roid at mellanox.com>
Cc: ovs-discuss at openvswitch.org
Subject: Re: [ovs-discuss] Issue with offloading OVS flows into Mellanox-4 cards


Hi Roi,
Thank you for the help,
Upgraded the firmware to 14.20 and used the latest kernel (4.10) in the VM.
Now it's working correctly. I can forward packets between the VM and the physical ports on the NIC. The offloaded flows are showing in OVS.
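For reference, the commands to list those offloaded flows look roughly like this (assumes OVS >= 2.8 with TC offload support; the commands are printed rather than executed here so the snippet is safe to run anywhere):

```shell
#!/bin/sh
# Sketch: enable hardware offload and dump the offloaded datapath flows.
# These are not executed here, only printed, since they need a host with
# OVS and the ConnectX-4 present. Restart OVS after setting hw-offload.
cmds='ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
ovs-appctl dpctl/dump-flows type=offloaded'
printf '%s\n' "$cmds"
```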

A few suggestions for the hardware-offload installation document:
1) The minimum kernel version needed for this feature must be specified.
2) The default MLNX firmware does not support hardware offload for some reason. The required firmware version and the supported NICs must be specified.
3) Even though I use an Ethernet NIC, I had to install the IB verbs sources in the VM to attach the VF to DPDK. Not sure why this is a prerequisite.
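The first two checklist items can be sketched as a small preflight script. This is only a sketch: the minimums used here (kernel 4.10, firmware 14.20) are the versions that worked in this thread, not official requirements, and ens786f0 is just an example interface name.

```shell
#!/bin/sh
# Preflight sketch for the checklist above. The minimum versions come
# from what worked in this thread, not from official docs.

ver_ge() {
    # true if $1 >= $2 when compared as dotted version strings
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kernel=$(uname -r | cut -d- -f1)
if ver_ge "$kernel" "4.10"; then
    echo "kernel $kernel: ok"
else
    echo "kernel $kernel: older than 4.10, offload may not work"
fi

# The firmware check needs the NIC present (ens786f0 is an example):
# fw=$(ethtool -i ens786f0 | awk -F': ' '/^firmware-version/ {print $2}')
# ver_ge "$fw" "14.20" && echo "firmware $fw: ok"
```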

Once again, thanks for the suggestions that got it working. :)

On Mon, Jul 24, 2017 at 9:05 AM, Sugu Deepthy <deepthysugesh at gmail.com<mailto:deepthysugesh at gmail.com>> wrote:


On Mon, Jul 24, 2017 at 5:46 AM, Roi Dayan <roid at mellanox.com<mailto:roid at mellanox.com>> wrote:
<snip..>


[Sugu] I upgraded the system and now I don't see this error anymore.
Instead I see this:

[ 1103.216355] mlx5_3:wait_for_async_commands:722:(pid 3097): done with
all pending requests
[ 1115.954770] mlx5_core 0000:07:00.0: mlx5_cmd_check:697:(pid 3477):
QUERY_VPORT_COUNTER(0x770) op_mod(0x0) failed, status bad system
state(0x4), syndrome (0x368b01)
[ 1115.954902] mlx5_core 0000:07:00.0: mlx5_cmd_check:697:(pid 3477):
QUERY_VPORT_COUNTER(0x770) op_mod(0x0) failed, status bad system
state(0x4), syndrome (0x368b01)

I am getting this error back to back for every command (two entries per
command, perhaps because I have 2 VFs?), starting from unbind, devlink,
ethtool, and starting the VM.
And inside the VM the VFs are not bound to any driver either. Is there
anything wrong with the NIC?


looks like the syndrome you get is caused by querying a counter while
the HCA is not yet configured properly.
can you verify you are using the latest firmware?
can you verify the steps you took? did you enable SR-IOV and move to
switchdev mode?
[Sugu] Ok. SR-IOV is enabled on the board, and the device is moved to
switchdev mode, though it throws the error shown above.

The firmware version of the card is
# ethtool -i ens786f0
driver: mlx5_core
version: 3.0-1 (January 2015)
firmware-version: 14.17.2032
expansion-rom-version:
bus-info: 0000:07:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes

Do you think this firmware version cannot support the offload?
I will try to install the latest firmware and keep you posted.







        I verified that the ports named eth1, eth2, eth3, and eth4 were
        created for my VFs when I ran the commands
        'devlink dev eswitch set pci/0000:07:00.0 mode switchdev' and
        'devlink dev eswitch set pci/0000:07:00.1 mode switchdev'.

        The detailed errors in dmesg are given below:
        [ 1245.941287] mlx5_core 0000:07:00.0: mlx5_cmd_check:697:(pid
        3107):
        QUERY_VPORT_COUNTER(0x770) op_mod(0x0) failed, status bad system
        state(0x4), syndrome (0x368b01)
        [ 1245.941478] mlx5_core 0000:07:00.1: mlx5_cmd_check:697:(pid
        3107):
        QUERY_VPORT_COUNTER(0x770) op_mod(0x0) failed, status bad system
        state(0x4), syndrome (0x368b01)

        Please note I couldn't run the "inline-mode transport" command,
        as it's not supported.
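For context, the overall sequence being attempted can be sketched as follows. The PCI address and VF count come from this thread but are otherwise assumptions, and with DRY_RUN left at its default of 1 each command is only printed, never executed:

```shell
#!/bin/sh
# Sketch of the SR-IOV + switchdev sequence discussed above. With
# DRY_RUN=1 (the default) each command is printed instead of run, so
# this is safe on a machine without the ConnectX-4.
PF=${PF:-0000:07:00.0}
NUM_VFS=${NUM_VFS:-2}
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# 1) create the VFs on the physical function
run sh -c "echo $NUM_VFS > /sys/bus/pci/devices/$PF/sriov_numvfs"
# 2) unbind the VFs before changing modes (VF PCI addresses vary)
run sh -c "echo 0000:07:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind"
# 3) move the e-switch to switchdev mode (needs a recent iproute2)
run devlink dev eswitch set pci/$PF mode switchdev
# 4) optionally set inline-mode (unsupported by older iproute2, as noted)
run devlink dev eswitch set pci/$PF inline-mode transport
# 5) verify the mode took effect
run devlink dev eswitch show pci/$PF
```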


    maybe you need a newer iproute package. try installing the latest upstream.

[Sugu]
I am using the latest Ubuntu release:


No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu Artful Aardvark (development branch)
Release:        17.10
Codename:       artful

and my kernel is
4.11.0-10-generic #15-Ubuntu SMP Thu Jun 29 15:03:41 UTC 2017 x86_64
x86_64 x86_64 GNU/Linux

Do I still need to install a newer iproute package in addition? Is
that a requirement for using the hardware offload in OVS?
And my iproute version is:
ip -V
ip utility, iproute2-ss161212
Can you share which version of iproute you use for the testing?

I'm using the latest upstream. I'm not sure if all the needed patches are
in the Ubuntu distro.
my version looks like this: ip utility, iproute2-ss170501
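Since the snapshot tag in `ip -V` encodes a YYMMDD date, the two versions can be compared directly; a quick sketch, using the two versions from this thread:

```shell
#!/bin/sh
# Compare iproute2 snapshot tags (the ssYYMMDD suffix is a date).
have=ss161212   # version reported by Sugu
need=ss170501   # version reported by Roi
if [ "${have#ss}" -lt "${need#ss}" ]; then
    echo "installed iproute2 ($have) predates $need; build from upstream"
fi
# To build the upstream tree:
#   git clone https://git.kernel.org/pub/scm/network/iproute2/iproute2.git
#   cd iproute2 && make && sudo make install
```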

if you have devlink and you can change the mode to switchdev without an
error then you're good to go.
[Sugu] Ok. Thank you for confirming.






                We still need to work on docs for this feature but for now I
                documented it a little here:
                https://github.com/roidayan/ovs/wiki

            As suggested in the wiki,



                Thanks,
                Roi




                    _______________________________________________
                    discuss mailing list
                    discuss at openvswitch.org
                    https://mail.openvswitch.org/mailman/listinfo/ovs-discuss




