[ovs-discuss] Issue with offloading OVS flows into Mellanox-4 cards

Roi Dayan roid at mellanox.com
Wed Jul 12 04:33:33 UTC 2017



On 11/07/2017 14:28, Sugu Deepthy wrote:
> Hi Roi
>
> On Tue, Jul 11, 2017 at 12:20 AM, Sugu Deepthy <deepthysugesh at gmail.com
> <mailto:deepthysugesh at gmail.com>> wrote:
>
>     Thank you Roi for your help!
>
>     On Mon, Jul 10, 2017 at 4:57 AM, Roi Dayan <roid at mellanox.com
>     <mailto:roid at mellanox.com>> wrote:
>
>
>
>         On 07/07/2017 17:36, Sugu Chandran wrote:
>
>             Hi,
>
>             I am trying to test hardware offloading feature in OVS using
>             a 2*25G
>             mellanox NIC.   My test setup has static OVS L2 rules to
>             forward packets
>             between these two ports. The traffic generators are
>             connected to these
>             ports to pump in traffic.
>             The hardware offloading is enabled in the system by using,
>                 ovs-vsctl --no-wait set Open_vSwitch .
>             other_config:hw-offload=true
>             I didn't set any hw-policy explicitly; I kept the
>             default of 'none'.
>
>             I noticed that when I am sending traffic to these ports,
>             no rules are getting programmed into the hardware. There
>             are also no errors reported in ovs-vswitchd.log. Of
>             course, the packets are getting forwarded in software.
>             Is there anything else that needs to be done to make TC
>             program the Mellanox NICs?
>
>             Regards
>             _Sugu
>
>
>
>         Hi Sugu,
>
>         Since you do not have errors in the log, did you check if the
>         rules were added to tc software? You can dump them like this:
>         # tc -s filter show dev ens5f0 ingress
>
>     I don't see any rules configured in the above tc dump.
>

Then nothing went to the HCA, because even if the HW doesn't
support offloading, the rule should still appear in tc software.
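A quick way to confirm whether rules reach tc at all is to compare the OVS datapath flows with the tc flower filters (a sketch; the interface name ens5f0 is taken from the example earlier in the thread, adjust for your system):

```shell
# Dump the datapath flows OVS has installed; with hw-offload enabled
# these should be mirrored into tc flower rules on the port.
ovs-appctl dpctl/dump-flows

# Dump tc flower filters on the ingress qdisc of the uplink port.
# Empty output here means OVS never handed the rule to tc at all,
# so the problem is before any hardware offload is attempted.
tc -s filter show dev ens5f0 ingress
```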

>
>
>         You need to enable the offload feature on the HCA with ethtool.
>         example:
>         # ethtool -K ens5f0 hw-tc-offload on
>
>     This is enabled .
>
>     I am trying to forward traffic between two PFs on the same NIC.
>     Is that supported in the offload implementation?

Offload between PF ports is currently not supported,
only between a PF and its VFs.
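For the supported PF-to-VF case, a typical mlx5 setup looks roughly like this (a sketch; the PCI addresses and interface name are placeholders based on this thread, and the exact steps can vary by kernel and driver version):

```shell
# Create two VFs on the PF
echo 2 > /sys/class/net/ens5f0/device/sriov_numvfs

# Unbind the VFs from the driver before changing the eswitch mode
# (the VF PCI addresses here are assumed; check lspci on your system)
echo 0000:07:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:07:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind

# Switch the eswitch to switchdev mode so VF representor netdevs appear
devlink dev eswitch set pci/0000:07:00.0 mode switchdev

# Enable tc offload on the PF (and likewise on the representors)
ethtool -K ens5f0 hw-tc-offload on
```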


>     When creating the switchdev on the PFs with 2 VFs, no VF
>     netdevs are populated in my system. They still show up as
>     VFs under the PF.
>     Of course there are no errors either.
>
>     Also, the system reports that the mode 'inline-mode transport'
>     is unsupported.
>     I am using Ubuntu 17.04 with a 4.10 kernel.
>     Is there anything I am missing here?
>     Any help is really appreciated!
>
> [Sugu] Some more details on this. I was actually getting an error
> when trying to enable hw-offload on the mlnx-4 NICs.
> I didn't notice it in the logs before.
>
> This is the error info that I got from the Mellanox git:
>
> BAD_SYS_STATE | 0x368B01 | query_vport_counter: vport is not enabled
> (INIT_HCA is required)

Executing which command raised this error?

>
> I verified that the ports named eth1, eth2, eth3 and eth4 are created
> for my VFs when I ran the commands 'devlink dev eswitch set
> pci/0000:07:00.0 mode switchdev' and
> 'devlink dev eswitch set pci/0000:07:00.1 mode switchdev'.
>
> The detailed errors in dmesg are given below:
> [ 1245.941287] mlx5_core 0000:07:00.0: mlx5_cmd_check:697:(pid 3107):
> QUERY_VPORT_COUNTER(0x770) op_mod(0x0) failed, status bad system
> state(0x4), syndrome (0x368b01)
> [ 1245.941478] mlx5_core 0000:07:00.1: mlx5_cmd_check:697:(pid 3107):
> QUERY_VPORT_COUNTER(0x770) op_mod(0x0) failed, status bad system
> state(0x4), syndrome (0x368b01)
>
> Please note I couldn't run the "inline-mode transport" command as
> it's not supported.
>

Maybe you need a newer iproute2 package. Try installing the latest
upstream version.
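For reference, with a recent enough iproute2 the inline mode is set through the same devlink eswitch command (a sketch; the PCI address is the one from this thread, and older iproute2 builds simply do not recognize the inline-mode keyword, which matches the "unsupported" report above):

```shell
# Set the eswitch inline mode; some ConnectX-4 devices require this
# before switchdev offload works. Needs iproute2 with devlink
# inline-mode support.
devlink dev eswitch set pci/0000:07:00.0 inline-mode transport

# Verify the current eswitch settings
devlink dev eswitch show pci/0000:07:00.0
```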

>
>
>         We still need to work on docs for this feature, but for now I
>         have documented it a little here:
>         https://github.com/roidayan/ovs/wiki
>
>     As suggested in the wiki,
>
>
>
>         Thanks,
>         Roi
>
>
>
>
>             _______________________________________________
>             discuss mailing list
>             discuss at openvswitch.org <mailto:discuss at openvswitch.org>
>             https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
>
>


More information about the discuss mailing list