[ovs-dev] [dpdk-howl PATCH v5 1/2] netdev-dpdk: Upgrade to dpdk v18.08

Eelco Chaudron echaudro at redhat.com
Thu Oct 25 09:15:22 UTC 2018



On 25 Oct 2018, at 11:02, Ophir Munk wrote:

> Hi Eelco,
> Please find comments inline
>
>> -----Original Message-----
>> From: Eelco Chaudron [mailto:echaudro at redhat.com]
>> Sent: Wednesday, October 24, 2018 1:41 PM
>> To: Ophir Munk <ophirmu at mellanox.com>
>> Cc: Thomas Monjalon <thomas at monjalon.net>; ovs-dev at openvswitch.org;
>> Asaf Penso <asafp at mellanox.com>; Shahaf Shuler
>> <shahafs at mellanox.com>
>> Subject: Re: [ovs-dev] [dpdk-howl PATCH v5 1/2] netdev-dpdk: Upgrade to
>> dpdk v18.08
>>
>> Hi Ophir,
>>
>> I did not see any response to my comments below; is this another mailing
>> list issue like the one you explained?
>>
>
> V6 is expected soon. There is no mailing list issue.

Thanks for the response; I will review v6 once it’s out.

>
>> //Eelco
>>
>> On 12 Oct 2018, at 10:56, Eelco Chaudron wrote:
>>
>>>> @@ -3043,13 +3054,18 @@ netdev_dpdk_get_status(const struct netdev *netdev, struct smap *args)
>>>>      smap_add_format(args, "if_descr", "%s %s", rte_version(),
>>>>                      dev_info.driver_name);
>>>>
>>>> -    if (dev_info.pci_dev) {
>>>> -        smap_add_format(args, "pci-vendor_id", "0x%x",
>>>> -                        dev_info.pci_dev->id.vendor_id);
>>>> -        smap_add_format(args, "pci-device_id", "0x%x",
>>>> -                        dev_info.pci_dev->id.device_id);
>>>> +    const struct rte_bus *bus;
>>>> +    const struct rte_pci_device *pci_dev;
>>>
>>> Don’t we need to take the ovs_mutex_lock(&dev->mutex) lock here, since we
>>> are calling DPDK code?
>
> There is no dev access in the added code, so it should use the global
> dpdk_mutex rather than dev->mutex.
> Will update in v6.
>
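
For reference, a rough sketch of how the added lookup could be guarded by the
global dpdk_mutex (illustrative only, not necessarily the exact v6 code; it
assumes the file-scope dpdk_mutex already used elsewhere in netdev-dpdk.c):

    /* Sketch: the bus lookup only touches global DPDK state, so guard it
     * with the global dpdk_mutex instead of the per-device dev->mutex. */
    const struct rte_bus *bus;
    const struct rte_pci_device *pci_dev;

    ovs_mutex_lock(&dpdk_mutex);
    bus = rte_bus_find_by_device(dev_info.device);
    if (bus && !strcmp(bus->name, "pci")) {
        pci_dev = RTE_DEV_TO_PCI(dev_info.device);
        if (pci_dev) {
            smap_add_format(args, "pci-vendor_id", "0x%x",
                            pci_dev->id.vendor_id);
            smap_add_format(args, "pci-device_id", "0x%x",
                            pci_dev->id.device_id);
        }
    }
    ovs_mutex_unlock(&dpdk_mutex);
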
>>>
>>>> +    bus = rte_bus_find_by_device(dev_info.device);
>>>> +    if (bus && !strcmp(bus->name, "pci")) {
>>>> +        pci_dev = RTE_DEV_TO_PCI(dev_info.device);
>>>> +        if (pci_dev) {
>>>> +            smap_add_format(args, "pci-vendor_id", "0x%x",
>>>> +                            pci_dev->id.vendor_id);
>>>> +            smap_add_format(args, "pci-device_id", "0x%x",
>>>> +                            pci_dev->id.device_id);
>>>> +        }
>>>>      }
>>>> -
>>>>      return 0;
>>>>  }
>>>>
>>>> dump_flow_pattern(struct rte_flow_item *item)
>>>>
>>>>          VLOG_DBG("rte flow vlan pattern:\n");
>>>>          if (vlan_spec) {
>>>> -            VLOG_DBG("  Spec: tpid=0x%"PRIx16", tci=0x%"PRIx16"\n",
>>>> -                     ntohs(vlan_spec->tpid), ntohs(vlan_spec->tci));
>>>> +            VLOG_DBG("  Spec: inner_type=0x%"PRIx16", tci=0x%"PRIx16"\n",
>>>> +                     ntohs(vlan_spec->inner_type), ntohs(vlan_spec->tci));
>>>>          } else {
>>>>              VLOG_DBG("  Spec = null\n");
>>>>          }
>>>>
>>>>          if (vlan_mask) {
>>>> -            VLOG_DBG("  Mask: tpid=0x%"PRIx16", tci=0x%"PRIx16"\n",
>>>> -                     vlan_mask->tpid, vlan_mask->tci);
>>>> +            VLOG_DBG("  Mask: inner_type=0x%"PRIx16", tci=0x%"PRIx16"\n",
>>>> +                     vlan_mask->inner_type, vlan_mask->tci);
>>>
>>> Should the vlan_mask also use htons()?
>>>
>
> It seems so, as both vlan_spec and vlan_mask are of the same type and
> have big-endian fields.
> This patch only renamed the field tpid ==> inner_type; the missing
> htons() conversion was already present in 17.11.
> Will update in v6.
>
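
For clarity, the mask branch would then read something like this (sketch
only, pending v6):

            VLOG_DBG("  Mask: inner_type=0x%"PRIx16", tci=0x%"PRIx16"\n",
                     ntohs(vlan_mask->inner_type), ntohs(vlan_mask->tci));
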
>>>>          } else {
>>>>              VLOG_DBG("  Mask = null\n");
>>>>          }
>>>> +
>>>> +    rss_data = xmalloc(sizeof(struct action_rss_data) +
>>>> +                       sizeof(uint16_t) * netdev->n_rxq);
>>>> +    *rss_data = (struct action_rss_data) {
>>>> +        .conf = (struct rte_flow_action_rss) {
>>>> +            .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
>>>> +            .level = 0,
>>>> +            .types = ETH_RSS_IP,
>>>> +            .key_len = 0,
>>>> +            .queue_num = netdev->n_rxq,
>>>> +            .queue = rss_data->queue,
>>>> +            .key  = NULL
>>>
>>> If you have them in a different order than the structure, you might as
>>> well group key_len and key together.
>>>> +        },
>>>> +    };
>>>>
>
> Agreed. Will group key_len and key in v6.
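
Something along these lines, I assume (just a sketch of the regrouped
initializer from the hunk above, with key_len and key kept next to each
other):

    *rss_data = (struct action_rss_data) {
        .conf = (struct rte_flow_action_rss) {
            .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
            .level = 0,
            .types = ETH_RSS_IP,
            .queue_num = netdev->n_rxq,
            .queue = rss_data->queue,
            .key_len = 0,
            .key = NULL,
        },
    };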

