[ovs-discuss] LXC, DPDK, OVS and MLX4

Flavio Leitner fbl at sysclose.org
Tue Feb 16 12:12:41 UTC 2016


On Tue, 16 Feb 2016 14:44:38 +1100
Alexander Turner <me at alexturner.co> wrote:

> Acronyms galore!
> 
> Sent a similar message to OVDK before realising the ML was dead (or is it)
> 
> I'm trying to get DPDK+OVS working with an MLX4 card (ConnectX-3) using the
> PMD. I've got the interfaces and bridge up and running, though I'm not sure
> how to tie an LXC interface into the DPDK bridge. Creating a regular veth
> pair between the host and container namespace limits me to a throughput of
> 1.3Mbps.
> 
> Architecturally, I have two LXC containers connected to ovsbr0. I'm
> forwarding all packets on LXC-CT1 to 40GB-NIC-1 and LXC-CT2 to 40GB-NIC-2
> respectively. Both NICs are connected to a 40Gbps switch that's forwarding
> between the two ports.
> 
> My question - obviously the namespace veth isn't working very well, so what
> interface should I be using to tie into an LXC container?

Without DPDK, the packets are pulled from the NIC by the kernel, pass through
the OVS datapath and are then forwarded to the veth interfaces (in your
use-case), so everything happens in a single context: the kernel context.

With DPDK, the packets are pulled from the NIC by an OVS thread (the PMD
thread) and go directly to OVS in userspace, so they bypass the kernel
completely.
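For reference, attaching the physical NICs to the userspace datapath looks
something like the sketch below. This assumes OVS was built with DPDK support
and uses the ovs-vsctl syntax of this OVS generation, where DPDK ports must
be named dpdk0, dpdk1, and so on; the bridge name is taken from your
description:

```shell
# Sketch: put the bridge on the userspace (netdev) datapath
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
# Attach the two 40G NICs as DPDK ports (names dpdk0/dpdk1 are mandatory here)
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl add-port ovsbr0 dpdk1 -- set Interface dpdk1 type=dpdk
```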

In your use-case, you're mixing userspace (DPDK) and kernel (veth) datapaths,
which forces OVS to push packets one by one from one context to the other,
and that has a huge cost.  The horrible performance is a consequence of that.

One possible solution is using KNI[1], but keep in mind that it relies on an
external (out-of-tree) kernel driver, which in some cases is a no-go.
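To illustrate the external-driver point: KNI depends on DPDK's rte_kni kernel
module, which has to be built and loaded separately. A sketch, where the
paths are assumptions based on a standard DPDK source build:

```shell
# Sketch: loading DPDK's out-of-tree KNI kernel module.
# RTE_SDK and RTE_TARGET point at your DPDK build tree (assumption).
insmod $RTE_SDK/$RTE_TARGET/kmod/rte_kni.ko
```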

[1] http://dpdk.org/doc/guides/prog_guide/kernel_nic_interface.html
-- 
fbl



