[ovs-dev] [PATCH v3 1/5] netdev-dpdk: fix mempool management with vhu client.

Kevin Traynor ktraynor at redhat.com
Wed Oct 11 10:40:27 UTC 2017


On 10/11/2017 11:28 AM, Fischetti, Antonio wrote:
> Hi Robert,
> that's happening because the requested MTU is rounded up to the current boundary.
> So if the current upper boundary is 2500 and we request 2000, then
> 2000 is rounded up to 2500 and the same mempool is returned.
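> 
> Roughly, the rounding works something like this (a minimal standalone
> sketch with assumed constants, not the exact OVS code): the MTU plus
> Ethernet overhead and mbuf headroom is rounded up to the next 1024-byte
> boundary, so MTUs that land in the same 1024-byte range share a mempool.
> 
> #include <stdint.h>
> #include <stdio.h>
> 
> #define FRAME_OVERHEAD 18    /* Ethernet header + CRC (assumption) */
> #define MBUF_HEADROOM  128   /* typical RTE_PKTMBUF_HEADROOM */
> #define MBUF_ALIGN     1024  /* rounding step */
> #define ROUND_UP(x, y) ((((x) + (y) - 1) / (y)) * (y))
> 
> static uint32_t
> buf_size_for_mtu(uint32_t mtu)
> {
>     return ROUND_UP(mtu + FRAME_OVERHEAD + MBUF_HEADROOM, MBUF_ALIGN);
> }
> 
> int
> main(void)
> {
>     printf("mtu 1000 -> %u\n", buf_size_for_mtu(1000)); /* 2048 */
>     printf("mtu 1500 -> %u\n", buf_size_for_mtu(1500)); /* 2048 */
>     printf("mtu 2000 -> %u\n", buf_size_for_mtu(2000)); /* 3072 */
>     printf("mtu 2500 -> %u\n", buf_size_for_mtu(2500)); /* 3072 */
>     return 0;
> }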
> 
> I may be wrong, but this seems to be the intended behavior; maybe Kevin can shed some light?
> I may have missed some details, as I haven't followed this implementation from the
> very beginning.
> 

I think it's related to review comments I sent earlier today. The mtu
itself is stored as-is, but a value rounded up from it is used to
calculate the size of the mbuf. I suspect that in this case, when the
new mtu results in the same rounded value, the current mempool is
reused (which is fine), but the EEXIST error value returned from
reconfigure means the change is not seen as successful, so the old mtu
value is restored in ovsdb.
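
Something like the below in netdev_dpdk_mempool_configure() might be
what's needed (an untested sketch against the names in lib/netdev-dpdk.c,
so treat the details as assumptions): reuse the existing mempool when the
rounded size is unchanged, but still commit the requested mtu so the
change is reported as successful.

static int
netdev_dpdk_mempool_configure(struct netdev_dpdk *dev)
{
    uint32_t buf_size = dpdk_buf_size(dev->requested_mtu);
    struct dpdk_mp *mp;

    mp = dpdk_mp_get(dev, FRAME_LEN_TO_MTU(buf_size));
    if (!mp) {
        VLOG_ERR("Failed to create memory pool for netdev "
                 "%s, with MTU %d on socket %d: %s\n",
                 dev->up.name, dev->requested_mtu,
                 dev->requested_socket_id, rte_strerror(rte_errno));
        return rte_errno;
    }

    if (mp == dev->dpdk_mp) {
        /* Same rounded size: dpdk_mp_get() handed back the pool we
         * already hold.  Drop the duplicate reference (assuming it
         * took one) and just commit the new mtu instead of failing
         * with EEXIST. */
        dpdk_mp_put(mp);
        dev->mtu = dev->requested_mtu;
        return 0;
    }

    dpdk_mp_put(dev->dpdk_mp);
    dev->dpdk_mp = mp;
    dev->mtu = dev->requested_mtu;
    dev->socket_id = dev->requested_socket_id;
    return 0;
}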

Kevin.

> Antonio
> 
>> -----Original Message-----
>> From: ovs-dev-bounces at openvswitch.org [mailto:ovs-dev-bounces at openvswitch.org]
>> On Behalf Of Fischetti, Antonio
>> Sent: Wednesday, October 11, 2017 9:04 AM
>> To: Róbert Mulik <robert.mulik at ericsson.com>; dev at openvswitch.org
>> Subject: Re: [ovs-dev] [PATCH v3 1/5] netdev-dpdk: fix mempool management with
>> vhu client.
>>
>> Thanks Robert for reporting this and for all the clear details you provided.
>> I'll look into this and get back to you.
>>
>> Antonio
>>
>>> -----Original Message-----
>>> From: Róbert Mulik [mailto:robert.mulik at ericsson.com]
>>> Sent: Tuesday, October 10, 2017 4:19 PM
>>> To: Fischetti, Antonio <antonio.fischetti at intel.com>; dev at openvswitch.org
>>> Subject: RE: [ovs-dev] [PATCH v3 1/5] netdev-dpdk: fix mempool management with
>>> vhu client.
>>>
>>> Hi Antonio,
>>>
>>> Last week I ran into this mempool issue during the development of a new
>>> feature. I made a bugfix, but then we saw yours too, so I tested whether
>>> it solved my problem. It did, but I noticed another problem with it. The
>>> mempool name generation is partly based on the MTU size, which is handled
>>> in 1024-byte-long ranges. For example, MTU 1000 and 1500 are in the same
>>> range, while 2000 and 2500 are in a different one. So I tested this patch
>>> and got the following.
>>>
>>> # ovs-vsctl list interface dpdk0 |grep mtu
>>> mtu                 : 2500
>>> mtu_request         : 2500
>>> # ovs-vsctl set interface dpdk0 mtu_request=2000
>>> # ovs-vsctl list interface dpdk0 |grep mtu
>>> mtu                 : 2500
>>> mtu_request         : 2000
>>> # ovs-vsctl set interface dpdk0 mtu_request=1500
>>> # ovs-vsctl list interface dpdk0 |grep mtu
>>> mtu                 : 1500
>>> mtu_request         : 1500
>>> # ovs-vsctl set interface dpdk0 mtu_request=1000
>>> # ovs-vsctl list interface dpdk0 |grep mtu
>>> mtu                 : 1500
>>> mtu_request         : 1000
>>> # ovs-vsctl set interface dpdk0 mtu_request=2000
>>> # ovs-vsctl list interface dpdk0 |grep mtu
>>> mtu                 : 2000
>>> mtu_request         : 2000
>>> # ovs-vsctl set interface dpdk0 mtu_request=1000
>>> # ovs-vsctl list interface dpdk0 |grep mtu
>>> mtu                 : 1000
>>> mtu_request         : 1000
>>> # ovs-vsctl set interface dpdk0 mtu_request=1500
>>> # ovs-vsctl list interface dpdk0 |grep mtu
>>> mtu                 : 1000
>>> mtu_request         : 1500
>>> # service openvswitch-switch restart
>>> # ovs-vsctl list interface dpdk0 |grep mtu
>>> mtu                 : 1500
>>> mtu_request         : 1500
>>>
>>>
>>> This was my setup:
>>>     Bridge br-prv
>>>         Port bond-prv
>>>             Interface "dpdk0"
>>>                 type: dpdk
>>>                 options: {dpdk-devargs="0000:05:00.0", n_rxq_desc="1024",
>>> n_txq_desc="1024"}
>>>     ovs_version: "2.8.90"
>>>
>>> And I used DPDK v17.08.
>>>
>>>
>>> So, as can be seen from the example above, with the patch applied, when a
>>> new mtu_request is in the same range as the previously set MTU, it has no
>>> effect until a service restart. The mtu_request takes effect immediately
>>> when it is in a different range than the previously set MTU. Or did I
>>> miss something during testing?
>>>
>>> The patch I used last week does the following: during reconfiguration,
>>> the mempool is always deleted before a new one is created. It solved the
>>> problem without side effects, but it is not optimal (it always recreates
>>> the mempool when this function is called).
>>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>>> index c60f46f..de38f95 100644
>>> --- a/lib/netdev-dpdk.c
>>> +++ b/lib/netdev-dpdk.c
>>> @@ -621,6 +621,7 @@ netdev_dpdk_mempool_configure(struct netdev_dpdk *dev)
>>>      uint32_t buf_size = dpdk_buf_size(dev->requested_mtu);
>>>      struct dpdk_mp *mp;
>>>
>>> +    dpdk_mp_put(dev->dpdk_mp);
>>>      mp = dpdk_mp_get(dev, FRAME_LEN_TO_MTU(buf_size));
>>>      if (!mp) {
>>>          VLOG_ERR("Failed to create memory pool for netdev "
>>> @@ -629,7 +630,6 @@ netdev_dpdk_mempool_configure(struct netdev_dpdk *dev)
>>>                   rte_strerror(rte_errno));
>>>          return rte_errno;
>>>      } else {
>>> -        dpdk_mp_put(dev->dpdk_mp);
>>>          dev->dpdk_mp = mp;
>>>          dev->mtu = dev->requested_mtu;
>>>          dev->socket_id = dev->requested_socket_id;
>>>
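>>> A possible refinement (an untested sketch reusing the same names; the
>>> exact check is an assumption) would be to drop and recreate the mempool
>>> only when the rounded size actually changes, e.g. near the top of
>>> netdev_dpdk_mempool_configure(), after buf_size is computed:
>>>
>>>     if (dev->dpdk_mp && dpdk_buf_size(dev->mtu) == buf_size) {
>>>         /* Same rounded size: keep the existing mempool and just
>>>          * commit the requested MTU. */
>>>         dev->mtu = dev->requested_mtu;
>>>         return 0;
>>>     }
>>>
>>> That would avoid recreating the mempool on every reconfigure while still
>>> letting the new MTU take effect.
>>>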
>>>
>>> What do you think about this solution?
>>>
>>> Regards,
>>> Robert