[ovs-dev] [RFC PATCH v2 0/1] netdev-dpdk: multi-segment mbuf jumbo frame support

Chandran, Sugesh sugesh.chandran at intel.com
Fri May 26 19:06:22 UTC 2017


Hi Mark, 


Thank you for working on this!
For some reason the first set of patches from Michael failed to apply for me,
so I tried the latest patches from patchwork instead.

Here are a few high-level comments, inline below.


Regards
_Sugesh


> -----Original Message-----
> From: ovs-dev-bounces at openvswitch.org [mailto:ovs-dev-
> bounces at openvswitch.org] On Behalf Of Mark Kavanagh
> Sent: Monday, May 15, 2017 11:17 AM
> To: ovs-dev at openvswitch.org; qiudayu at chinac.com
> Subject: [ovs-dev] [RFC PATCH v2 0/1] netdev-dpdk: multi-segment mbuf
> jumbo frame support
> 
> This RFC introduces an approach for implementing jumbo frame support for
> OvS-DPDK with multi-segment mbufs.
> 
> == Overview ==
> Currently, jumbo frame support for OvS-DPDK is implemented by increasing
> the size of mbufs within a mempool, such that each mbuf within the pool is
> large enough to contain an entire jumbo frame of a user-defined size.
> Typically, for each user-defined MTU 'requested_mtu', a new mempool is
> created, containing mbufs of size ~requested_mtu.
> 
> With the multi-segment approach, all ports share the same mempool, in
> which each mbuf is of standard/default size (~2 KB). To accommodate
> jumbo frames, mbufs may be chained together, each mbuf storing a portion
> of the jumbo frame; each mbuf in the chain is termed a segment, hence the
> name.
> 
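For reference: with the multi-segment approach a jumbo frame is spread over several
standard-size mbufs linked through their 'next' pointers, the head mbuf carrying the
packet-wide totals (pkt_len, nb_segs). A minimal sketch of assembling such a chain from
a single shared mempool with the stock DPDK rte_pktmbuf_* helpers (illustrative only,
not code from this patch):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Sketch: build a 'frame_len'-byte packet out of standard-size mbufs
     * drawn from one shared mempool, chaining extra segments as needed. */
    static struct rte_mbuf *
    alloc_jumbo_chain(struct rte_mempool *mp, uint32_t frame_len)
    {
        struct rte_mbuf *head = rte_pktmbuf_alloc(mp);
        uint32_t remaining = frame_len;

        if (!head) {
            return NULL;
        }

        while (remaining) {
            struct rte_mbuf *last = rte_pktmbuf_lastseg(head);
            uint16_t room = rte_pktmbuf_tailroom(last);

            if (!room) {
                /* Current segment is full: pull another mbuf from the
                 * same pool and chain it onto the packet. */
                struct rte_mbuf *next = rte_pktmbuf_alloc(mp);

                if (!next || rte_pktmbuf_chain(head, next)) {
                    rte_pktmbuf_free(head);   /* frees the whole chain */
                    rte_pktmbuf_free(next);   /* no-op on NULL */
                    return NULL;
                }
                continue;
            }

            uint16_t take = remaining < room ? (uint16_t) remaining : room;

            if (!rte_pktmbuf_append(head, take)) {
                rte_pktmbuf_free(head);
                return NULL;
            }
            remaining -= take;
        }
        return head;   /* head->pkt_len == frame_len, data spread over segments */
    }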
> 
> == Enabling multi-segment mbufs ==
> Multi-segment and single-segment mbufs are mutually exclusive, and the
> user must decide on which approach to adopt on init. The introduction of a
> new optional OVSDB field, 'dpdk-multi-seg-mbufs', facilitates this; this is a
> boolean field, which defaults to false. Setting the field is identical to setting
> existing DPDK-specific OVSDB fields:
> 
>     sudo $OVS_DIR/utilities/ovs-vsctl --no-wait set Open_vSwitch .
> other_config:dpdk-init=true
>     sudo $OVS_DIR/utilities/ovs-vsctl --no-wait set Open_vSwitch .
> other_config:dpdk-lcore-mask=0x10
>     sudo $OVS_DIR/utilities/ovs-vsctl --no-wait set Open_vSwitch .
> other_config:dpdk-socket-mem=4096,0
> ==> sudo $OVS_DIR/utilities/ovs-vsctl --no-wait set Open_vSwitch .
> other_config:dpdk-multi-seg-mbufs=true
> 
[Sugesh] Maybe I am missing something here, but why do we need a configuration option to
enable multi-segment mbufs? If the MTU is larger than the mbuf size, chained mbufs can be
created automatically; otherwise normal single mbufs are used, and the feature could simply
be kept enabled by default. Or do you intend to support jumbo frames both with larger mbufs
(when dpdk-multi-seg-mbufs=false) and with chained mbufs (dpdk-multi-seg-mbufs=true)?
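
If the knob is kept, reading it would presumably look much like the other dpdk-* options;
a minimal sketch, assuming OVS's existing smap_get_bool() helper (the function and variable
names below are illustrative, not taken from the patch):

    #include <stdbool.h>
    #include "smap.h"

    /* Illustrative only: how lib/dpdk.c might pick up the proposed knob. */
    static bool dpdk_multi_seg_mbufs = false;

    static void
    dpdk_read_multi_seg_config(const struct smap *ovs_other_config)
    {
        /* Defaults to false, matching the behaviour described in the RFC. */
        dpdk_multi_seg_mbufs = smap_get_bool(ovs_other_config,
                                             "dpdk-multi-seg-mbufs", false);
    }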


> 
> == Code base ==
> This patch is dependent on the multi-segment mbuf patch submitted by
> Michael Qiu (currently V2):
> https://mail.openvswitch.org/pipermail/ovs-dev/2017-May/331792.html
> The upstream base commit against which this patch was generated is
> 1e96502; to test this patch, check out that branch, apply Michael's patchset,
> and then apply this patch:
> 
>     3.  netdev-dpdk: enable multi-segment jumbo frames
>     2.  DPDK multi-segment mbuf support (Michael Qiu)
>     1.  1e96502 tests: Only run python SSL test if SSL support is configur... (OvS upstream)
> 
> The DPDK version used during testing is v17.02, although v16.11 should work
> equally well.
> 
> 
> == Testing ==
> As this is an RFC, only a subset of the total traffic paths/vSwitch
> configurations/actions has been tested - a summary of the traffic paths tested
> thus far is included below. The action tested in all cases is OUTPUT. Tests in
> which issues were observed are summarized beneath the table.
> 
> +----------------------------------------------------------------------------------+
> |  Traffic Path                                                                     |
> +----------------------------------------------------------------------------------+
> | DPDK Phy 0   -> OvS -> DPDK Phy 1                                                 |
> | DPDK Phy 0   -> OvS -> Kernel Phy 0                                          [1]  |
> | Kernel Phy 0 -> OvS -> DPDK Phy 0                                                 |
> |                                                                                   |
> | DPDK Phy 0   -> OvS -> vHost User 0 -> vHost User 1 -> OvS -> DPDK Phy 1   *      |
> | DPDK Phy 0   -> OvS -> vHost User 0 -> vHost User 1 -> OvS -> Kernel Phy 0 * [1]  |
> | Kernel Phy 0 -> OvS -> vHost User 1 -> vHost User 0 -> OvS -> DPDK Phy 0   * [2]  |
> |                                                                                   |
> | vHost0       -> OvS -> vHost1                                                     |
> +----------------------------------------------------------------------------------+
> 
>   * = guest kernel IP forwarding
> [1] = incorrect L4 checksum
> [2] = traffic not forwarded in guest kernel. This behaviour is also observed on
> OvS master.
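
Regarding [1]: one thing worth checking is whether any software checksum path still assumes
the packet data is contiguous; with chained mbufs the bytes have to be accumulated segment
by segment, taking care at odd-length segment boundaries. A rough sketch of the idea (a plain
Internet-checksum-style sum over every byte stored in the chain, illustrative only and not the
L4 checksum code in OvS or in this patch):

    #include <rte_mbuf.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Internet-checksum-style 16-bit ones'-complement checksum over all
     * bytes stored in a (possibly chained) mbuf, carrying odd-byte
     * boundaries correctly across segments. */
    static uint16_t
    mbuf_chain_csum(const struct rte_mbuf *head)
    {
        uint32_t sum = 0;
        uint8_t odd_byte = 0;
        bool have_odd = false;

        for (const struct rte_mbuf *seg = head; seg; seg = seg->next) {
            const uint8_t *p = rte_pktmbuf_mtod(seg, const uint8_t *);
            uint16_t len = seg->data_len;
            uint16_t i = 0;

            if (have_odd && len > 0) {
                sum += ((uint32_t) odd_byte << 8) | p[0];  /* finish split word */
                have_odd = false;
                i = 1;
            }
            for (; i + 1 < len; i += 2) {
                sum += ((uint32_t) p[i] << 8) | p[i + 1];
            }
            if (i < len) {                    /* segment ends mid-word */
                odd_byte = p[i];
                have_odd = true;
            }
        }
        if (have_odd) {
            sum += (uint32_t) odd_byte << 8;  /* pad the final odd byte */
        }
        while (sum >> 16) {
            sum = (sum & 0xffff) + (sum >> 16);   /* fold carries */
        }
        return (uint16_t) ~sum;
    }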
> 
> _______________________________________________
> dev mailing list
> dev at openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev

