[ovs-dev] [RFC v7 00/13] Support multi-segment mbufs

Lam, Tiago tiago.lam at intel.com
Thu Jun 7 15:34:46 UTC 2018


On 07/06/2018 13:48, Eelco Chaudron wrote:
> I'm planning on reviewing this patchset, but when I applied the patch to 
> master and tried to start OVS it crashed:
> 
> #0  eth_compose (b=b at entry=0x7ffebedf7b00, eth_dst=..., eth_src=..., 
> eth_type=<optimized out>, size=size at entry=0) at lib/packets.c:965
> #1  0x000000000074bfda in flow_compose (p=p at entry=0x7ffebedf7b00, 
> flow=flow at entry=0x7ffebedf7860, l7=l7 at entry=0x0, l7_len=l7_len at entry=64) 
> at lib/flow.c:2960
> #2  0x00000000006f2094 in check_ct_eventmask (backer=<optimized out>) at 
> ofproto/ofproto-dpif.c:1233
> #3  0x00000000006fb15b in check_support (backer=0x283bd20) at 
> ofproto/ofproto-dpif.c:1398
> #4  open_dpif_backer (backerp=0x283ac38, type=0x283a290 "netdev") at 
> ofproto/ofproto-dpif.c:788
> #5  construct (ofproto_=0x283a9f0) at ofproto/ofproto-dpif.c:1423
> #6  0x00000000006e5f05 in ofproto_create (datapath_name=0x283dee0 
> "ovs_pvp_br0", datapath_type=<optimized out>, 
> ofprotop=ofprotop at entry=0x283a3f8) at ofproto/ofproto.c:545
> #7  0x00000000006d7ec1 in bridge_reconfigure 
> (ovs_cfg=ovs_cfg at entry=0x2835530) at vswitchd/bridge.c:648
> #8  0x00000000006db406 in bridge_run () at vswitchd/bridge.c:3022
> 
> I traced it back to patch "[RFC v7 05/13] dp-packet: Handle multi-seg 
> mbufs in helper funcs".
> My config is straightforward and the feature was not yet enabled:
> 
> $ ovs-vsctl show
> 31cca486-0451-4a51-90b8-43c11e0548e5
>      Bridge "ovs_pvp_br0"
>          Port "ovs_pvp_br0"
>              Interface "ovs_pvp_br0"
>                  type: internal
>          Port "dpdk0"
>              Interface "dpdk0"
>                  type: dpdk
>                  options: {dpdk-devargs="0000:05:00.0", n_rxq="2"}
>          Port "vhost0"
>              Interface "vhost0"
>                  type: dpdkvhostuserclient
>                  options: {n_rxq="2", vhost-server-path="/tmp/vhost-sock0"}
>          Port "dpdk1"
>              Interface "dpdk1"
>                  type: dpdk
>                  options: {dpdk-devargs="0000:05:00.1", n_rxq="2"}
>      ovs_version: "2.9.90"
> 
> My goal in reviewing this part was to get a better understanding of the 
> dp_packet layer, so rather than spending time debugging before 
> reporting, I decided to send a reply right away.
> 
> 
> In addition are you planning on sending a v8 soon? If so I might delay 
> the reviewing a bit ;)
> 
> Cheers,
> 
> Eelco
> 

Thanks Eelco! Both for the report and tracing it back.

I've seen this as well, and it should be fixed in the next iteration.
There's a bug in `dp_packet_clone_with_headroom()`, in dp-packet.c, when
calculating the end address of an mbuf, which was leading to data being
copied to invalid regions of memory.

I'm currently finishing some testing and gathering performance numbers
to send out with the next iteration, together with addressing both
Ciara's and Ilya's comments. My plan is to send that out by end of day
tomorrow, or by next Monday at the latest. If you could review the next
iteration instead, that would be greatly appreciated.

Thanks again,
Tiago.
