[ovs-dev] [PATCH v8 08/13] dp-packet: Handle multi-seg mbufs in shift() func.

Eelco Chaudron echaudro at redhat.com
Tue Jun 26 12:28:28 UTC 2018



On 22 Jun 2018, at 21:04, Lam, Tiago wrote:

> On 18/06/2018 12:55, Eelco Chaudron wrote:
>>
>>
>> On 11 Jun 2018, at 18:21, Tiago Lam wrote:
>>
>>> In its current implementation, dp_packet_shift() is also unaware of
>>> multi-seg mbufs (which hold data in memory non-contiguously) and
>>> assumes that data exists contiguously in memory, memmove'ing data to
>>> perform the shift.
>>>
>>> To add support for multi-seg mbufs, a new set of functions is
>>> introduced, dp_packet_mbuf_shift() and dp_packet_mbuf_write(). These
>>> functions are used by dp_packet_shift(), when handling multi-seg mbufs,
>>> to shift and write data within a chain of mbufs.
>>>
>>> Signed-off-by: Tiago Lam <tiago.lam at intel.com>
>>> ---
>>>  lib/dp-packet.c | 102 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>  lib/dp-packet.h |   8 +++++
>>>  2 files changed, 110 insertions(+)
>>>
>>> diff --git a/lib/dp-packet.c b/lib/dp-packet.c
>>> index 9f8503e..399fadb 100644
>>> --- a/lib/dp-packet.c
>>> +++ b/lib/dp-packet.c
>>> @@ -294,6 +294,102 @@ dp_packet_prealloc_headroom(struct dp_packet *b, size_t size)
>>>      }
>>>  }
>>>
>>> +#ifdef DPDK_NETDEV
>>> +/* Write 'len' data bytes in an mbuf at the specified offset.
>>> + *
>>> + * 'mbuf', pointer to the destination mbuf where 'ofs' is, and the mbuf
>>> + * where the data will first be written.
>>> + * 'ofs', the offset within the provided 'mbuf' where 'data' is to be
>>> + * written.
>>> + * 'len', the size of the data to be written.
>>> + * 'data', pointer to the bytes to be written.
>>> + *
>>> + * XXX: This function is the counterpart of the `rte_pktmbuf_read()`
>>> + * function available with DPDK, in rte_mbuf.h. */
>>> +void
>>> +dp_packet_mbuf_write(struct rte_mbuf *mbuf, int16_t ofs, uint32_t len,
>>> +                     const void *data)
>>> +{
>>> +    char *dst_addr;
>>> +    uint16_t data_len;
>>> +    int len_copy;
>>> +    while (mbuf) {
>>> +        if (len == 0) {
>>> +            break;
>>> +        }
>>> +
>>> +        dst_addr = rte_pktmbuf_mtod_offset(mbuf, char *, ofs);
>>> +        data_len = MBUF_BUF_END(mbuf->buf_addr, mbuf->buf_len) - dst_addr;
>>> +
>>> +        len_copy = MIN(len, data_len);
>>> +        /* We don't know if 'data' is the result of a rte_pktmbuf_read()
>>> +         * call, in which case we may end up writing to the same region of
>>> +         * memory we are reading from and overlapping. Hence the use of
>>> +         * memmove() here. */
>>> +        memmove(dst_addr, data, len_copy);
>>> +
>>> +        data = ((char *) data) + len_copy;
>>> +        len -= len_copy;
>>> +        ofs = 0;
>>> +
>>> +        mbuf = mbuf->next;
>>> +    }
>>> +}
>>> +
>>> +static void
>>> +dp_packet_mbuf_shift_(struct rte_mbuf *dbuf, int16_t dst_ofs,
>>> +                      const struct rte_mbuf *sbuf, uint16_t src_ofs, int len)
>>> +{
>>> +    char rd[len];
>>> +    const char *wd = rte_pktmbuf_read(sbuf, src_ofs, len, rd);
>>> +
>>> +    ovs_assert(wd);
>>> +
>>
>> This implementation copies the entire packet to a temp buffer and then
>> copies it back. Would it be "faster" to do a direct memmove, starting
>> with the last/first segment depending on the direction?
>>
>
> If I understand the question correctly, that would assume that the data
> is contiguous in memory, which isn't guaranteed with multi-segment mbufs.
> That's why we read to a temp buffer and write all at once (when writing,
> in dp_packet_prealloc_tailroom(), we do use memmove() within the same
> mbuf, where memory is contiguous).

Assuming the sbuf is contiguous, no memory is copied to rd; however, if 
it’s not, it is copied, and we end up moving the memory twice.
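
To spell that out, this is roughly the split rte_pktmbuf_read() makes 
today (just a sketch of the fast path vs. the gather path, not the exact 
DPDK source; read_region() is only an illustrative wrapper):

    #include <rte_mbuf.h>

    /* Return a pointer to 'len' bytes starting at 'ofs' in the chain
     * rooted at 'm'.  If the region is contiguous in the first segment,
     * no copy happens; otherwise the bytes are gathered into 'scratch',
     * i.e. we have already paid for one copy before
     * dp_packet_mbuf_write() copies them a second time. */
    static const char *
    read_region(const struct rte_mbuf *m, uint32_t ofs, uint32_t len,
                char *scratch)
    {
        if (ofs + len <= rte_pktmbuf_data_len(m)) {
            return rte_pktmbuf_mtod_offset(m, char *, ofs);  /* no copy */
        }
        return rte_pktmbuf_read(m, ofs, len, scratch);       /* copy #1 */
    }
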
I’m suggesting we optimise for this case and skip the extra copy, but I’m 
not sure it’s worth the effort.
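
Something like the sketch below is what I had in mind: walk both chains 
in lockstep and memcpy the largest chunk that is contiguous on both 
sides, so the temporary buffer goes away entirely. It's only a rough 
illustration (the function name is made up, MIN() being the macro from 
OVS's util.h that the patch already uses), and it deliberately ignores 
the overlapping src/dst regions problem you mention:

    #include <string.h>
    #include <rte_mbuf.h>
    #include "util.h"       /* OVS MIN() */

    /* Copy 'len' bytes from ('sbuf', 'src_ofs') to ('dbuf', 'dst_ofs'),
     * walking both mbuf chains segment by segment, so no intermediate
     * buffer is needed.  Assumes the two regions do not overlap. */
    static void
    mbuf_copy_seg_by_seg(struct rte_mbuf *dbuf, uint32_t dst_ofs,
                         const struct rte_mbuf *sbuf, uint32_t src_ofs,
                         uint32_t len)
    {
        while (len > 0 && sbuf && dbuf) {
            /* Hop to the segment that actually contains the offset. */
            if (src_ofs >= rte_pktmbuf_data_len(sbuf)) {
                src_ofs -= rte_pktmbuf_data_len(sbuf);
                sbuf = sbuf->next;
                continue;
            }
            if (dst_ofs >= rte_pktmbuf_data_len(dbuf)) {
                dst_ofs -= rte_pktmbuf_data_len(dbuf);
                dbuf = dbuf->next;
                continue;
            }

            /* Largest chunk that is contiguous in both segments. */
            uint32_t chunk = MIN(len,
                                 MIN(rte_pktmbuf_data_len(sbuf) - src_ofs,
                                     rte_pktmbuf_data_len(dbuf) - dst_ofs));

            memcpy(rte_pktmbuf_mtod_offset(dbuf, char *, dst_ofs),
                   rte_pktmbuf_mtod_offset(sbuf, char *, src_ofs),
                   chunk);

            src_ofs += chunk;
            dst_ofs += chunk;
            len -= chunk;
        }
    }

Depending on the direction of the shift you'd have to walk from the tail 
of the chains instead of the head, to avoid clobbering data you still 
need to read, and that is where the overlap handling gets hairy.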

