[ovs-discuss] Issue with offloading OVS flows into Mellanox-4 cards

Roi Dayan roid at mellanox.com
Mon Jul 24 04:46:00 UTC 2017



On 24/07/2017 00:36, Sugu Deepthy wrote:
> Hi Roi,
> Thank you for your reply.
> Sorry for not getting back on this earlier; I was held up with some other work.
> Please find my answers below.
>
> On Wed, Jul 12, 2017 at 5:33 AM, Roi Dayan <roid at mellanox.com> wrote:
>
>
>
>     On 11/07/2017 14:28, Sugu Deepthy wrote:
>
>         Hi Roi
>
>         On Tue, Jul 11, 2017 at 12:20 AM, Sugu Deepthy
>         <deepthysugesh at gmail.com> wrote:
>
>             Thank you Roi for your help!
>
>             On Mon, Jul 10, 2017 at 4:57 AM, Roi Dayan
>             <roid at mellanox.com> wrote:
>
>
>
>                 On 07/07/2017 17:36, Sugu Chandran wrote:
>
>                     Hi,
>
>                     I am trying to test the hardware offloading feature in
>                     OVS using a 2*25G Mellanox NIC. My test setup has static
>                     OVS L2 rules to forward packets between these two ports.
>                     The traffic generators are connected to these ports to
>                     pump in traffic.
>                     Hardware offloading is enabled in the system using:
>                         ovs-vsctl --no-wait set Open_vSwitch . other_config:hw-offload=true
>                     I didn't set any hw-policy explicitly, as I kept the
>                     default of 'none'.
>
>                     I noticed that when I am sending traffic to these ports,
>                     no rules are getting programmed into the hardware. Also,
>                     there are no errors reported in ovs-vswitchd.log as such.
>                     Of course the packets are getting forwarded in software.
>                     Is there anything else that needs to be done to make TC
>                     program the Mellanox NICs?
>
>                     Regards
>                     _Sugu
>
>
>
>                 Hi Sugu,
>
>                 Since you do not have errors in the log, did you check if
>                 the rules were added to tc software?
>                 You can dump them like this:
>                 # tc -s filter show dev ens5f0 ingress
>
>             I don't see any rules configured with the above tc dump.
>
>
>     Then nothing went to the HCA, because even if the HW doesn't
>     support it the rule should still show up in tc software.
>
> [Sugesh] Yes, that's right.
>
>
>
>
>                 You need to enable the offload feature on the HCA with
>                 ethtool. Example:
>                 # ethtool -K ens5f0 hw-tc-offload on
>
>             This is enabled.
>
>             I am trying to forward traffic between two PFs on the same NIC.
>             Is this supported in the offload implementation?
>
>
>     Offload between PF ports is currently not supported,
>     only between a PF and its VFs.
>
> [Sugu]
> OK. I am now trying to forward traffic between the PF and its VFs, but no
> luck so far.
>
>
>
>
>             When setting switchdev mode on the PFs with 2 VFs, no VF
>             representor netdevs are populated on my system. They are still
>             showing as the VFs under the PF.
>             Of course, there are no errors either.
>
>             Also, the system reports that the mode 'inline-mode transport'
>             is unsupported.
>             I am using Ubuntu 17.04 with a 4.10 kernel.
>             Is there anything I am missing here?
>             Any help is really appreciated!
>
>         [Sugu] Some more details on this. I was actually getting an error
>         when trying to enable hw-offload on the mlnx-4 NICs.
>         I didn't notice it in the logs before.
>
>         This is the error info that I got from the Mellanox git:
>
>         BAD_SYS_STATE | 0x368B01 | query_vport_counter: vport is not enabled
>         (INIT_HCA is required)
>
>
>     Which command were you executing when this error was raised?
>
> [Sugu] I upgraded the system and now I don't see this error anymore.
> Instead I see this:
>
> [ 1103.216355] mlx5_3:wait_for_async_commands:722:(pid 3097): done with
> all pending requests
> [ 1115.954770] mlx5_core 0000:07:00.0: mlx5_cmd_check:697:(pid 3477):
> QUERY_VPORT_COUNTER(0x770) op_mod(0x0) failed, status bad system
> state(0x4), syndrome (0x368b01)
> [ 1115.954902] mlx5_core 0000:07:00.0: mlx5_cmd_check:697:(pid 3477):
> QUERY_VPORT_COUNTER(0x770) op_mod(0x0) failed, status bad system
> state(0x4), syndrome (0x368b01)
>
> I am getting this error back to back for every command (2 entries per
> command, maybe because I have 2 VFs?),
> starting from unbind, devlink, ethtool, and starting the VM.
> And inside the VM the VFs are not bound to any driver either. Is there
> anything wrong with the NIC?


Looks like the syndrome you get is caused by querying a counter while
the HCA is not yet configured properly.
Can you verify you are using the latest firmware?
Can you also verify the steps you do? Did you enable SR-IOV and move to
switchdev mode?
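
Just to make sure we're on the same page, a rough sketch of the sequence I
usually follow is below (the interface name ens5f0, the VF PCI addresses and
the service name are only examples from my setup, adjust them to yours):

# echo 2 > /sys/class/net/ens5f0/device/sriov_numvfs
# echo 0000:07:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
# echo 0000:07:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind
# devlink dev eswitch set pci/0000:07:00.0 mode switchdev
# ethtool -K ens5f0 hw-tc-offload on
# ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
# systemctl restart openvswitch-switch

The first command creates the VFs, the unbind of the VFs is usually needed
before changing the e-switch mode, and moving to switchdev mode should create
the VF representor netdevs. The last commands enable TC offload on the PF and
tell OVS to use it (ovs-vswitchd needs a restart to pick it up). After that
you can bind the VFs back (or assign them to a VM) and add the PF and the
representors to the OVS bridge.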

>
>
>
>
>
>
>         I verified that ports named eth1, eth2, eth3, and eth4 are created
>         for my VFs when I ran the commands
>         'devlink dev eswitch set pci/0000:07:00.0 mode switchdev' and
>         'devlink dev eswitch set pci/0000:07:00.1 mode switchdev'.
>
>         The detailed errors in dmesg are given below:
>         [ 1245.941287] mlx5_core 0000:07:00.0: mlx5_cmd_check:697:(pid
>         3107):
>         QUERY_VPORT_COUNTER(0x770) op_mod(0x0) failed, status bad system
>         state(0x4), syndrome (0x368b01)
>         [ 1245.941478] mlx5_core 0000:07:00.1: mlx5_cmd_check:697:(pid
>         3107):
>         QUERY_VPORT_COUNTER(0x770) op_mod(0x0) failed, status bad system
>         state(0x4), syndrome (0x368b01)
>
>         Please note I couldn't run the "inline-mode transport" command as
>         it's not supported.
>
>
>     Maybe you need a newer iproute package. Try installing the latest upstream.
>
> [Sugu]
> I am using the latest Ubuntu release:
>
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:    Ubuntu Artful Aardvark (development branch)
> Release:        17.10
> Codename:       artful
>
> and my kernel is
> 4.11.0-10-generic #15-Ubuntu SMP Thu Jun 29 15:03:41 UTC 2017 x86_64
> x86_64 x86_64 GNU/Linux
>
> Do I still need to install a newer iproute package additionally? Is
> that a requirement for using hardware offload in OVS?
> My iproute version is:
> ip -V
> ip utility, iproute2-ss161212
> Can you share which version of iproute you use for testing?

I'm using the latest upstream. I'm not sure if all the needed patches are in
the Ubuntu distro.
My version looks like this: ip utility, iproute2-ss170501

If you have devlink and you can change the mode to switchdev without an
error, then you are good to go.
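
If you want to rule out the distro package, you can try building iproute2
from the upstream git tree, roughly like this (the clone URL and install
paths are the usual defaults, adjust if needed):

# git clone git://git.kernel.org/pub/scm/network/iproute2/iproute2.git
# cd iproute2
# make
# make install
# ip -V

and then make sure the ip/tc/devlink binaries you run come from the new build.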


>
>
>
>
>
>                 We still need to work on the docs for this feature, but for
>                 now I documented it a little here:
>                 https://github.com/roidayan/ovs/wiki
>
>             As suggested in the wiki,
>
>
>
>                 Thanks,
>                 Roi
>
>
>
>
>                     _______________________________________________
>                     discuss mailing list
>                     discuss at openvswitch.org
>
>         https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
>
>
>


More information about the discuss mailing list