[ovs-dev] dpdk vhost-user patch questions.

Gray, Mark D mark.d.gray at intel.com
Wed Jun 3 22:09:44 UTC 2015


Hi Ethan

> 
> I have some general questions about the vhost user feature which aren't
> totally clear to me.  I would greatly appreciate some clarification, or pointers
> to documentation.

Have you watched the presentation that Kevin and Maryam did at the conference
last year? It describes vhost-cuse pretty well. The only major difference between
cuse and user is the use of the socket for signaling. The datapath is basically the same.

> 
> 1) Since vhost-user is using standard Qemu interfaces, will it support all of the
> features of the standard vhost implementation?  I.E. can you do things like
> live migration on a VM which is using this?  My understanding is you can't
> with ivshmem.

Yes, although I think live migration is currently broken in DPDK. There is a
patch that needs to be upstreamed.

Currently, you can't live migrate with ivshmem. That's not to say it couldn't be done,
but there are challenges. With ivshmem, you create all the data structures (rings
and buffer pools) for all the VMs in the host memory space, in one or more shared pages,
and then share them *up* to the guests. With vhost, it's the opposite: you
create the data structures for an individual VM in that VM's own memory space, then
share that memory *down* to the host. So live migration with vhost is easier,
as you just need to migrate the guest memory, which you have to do anyway
because it contains the state of the guest operating system.
With ivshmem, it's trickier: the data structures for all the VMs live in
the same page (or pages) in the host, so you would need to figure
out which ones to migrate. Also, migrating pages that are
really part of the host memory space (but shared up) is not a normal migration
operation.

> 
> 2) What are the security implications of vhost-user?  Can a misbehaving
> guest crash the vswitch?  Can guests crash each other?
> Is there a separate shared memory region per vif, or is it shared globally?

This is an advantage of vhost. The rings and buffers live in the
guest's own address space and are shared down to the host, which keeps them
isolated from other virtual machines and prevents security issues like these.

ivshmem is different, as the host memory is shared with every guest. Again,
this could be resolved. We did some work in DPDK that allowed you to share
individual memory objects with a guest, for example a single ring.
You could then, in theory, partition the memory into zones. If two guests
were in the same zone, they could share the same ring and buffer pool and
communicate in a zero-copy manner. If they were not, you would do
a copy in the host, which gives you isolation but is no longer zero-copy.
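
On the question of one region per vif: each vhost-user port gets its own unix
socket, and the memory mapped through it belongs to that one guest only. As a
sketch (port and bridge names are examples; the socket directory depends on how
OVS was built):

```shell
# Each vhost-user port creates its own listening socket; a guest that
# connects to a socket shares only its own memory with the vswitch.
ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
ovs-vsctl add-port br0 vhost-user-2 -- set Interface vhost-user-2 type=dpdkvhostuser
# The sockets then appear under the OVS run directory,
# e.g. /usr/local/var/run/openvswitch/vhost-user-1
```
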
> 
> 3) What actually happens to packets transmitted over vhost-user?  Are they
> copied once? Twice? Zero-copy?  It looks like a unix domain socket is used for
> signalling.  Is that done per packet or just at setup?

It should be one copy, from the host memory address space to the guest
memory address space (and vice versa in the other direction).

The socket is only used for signaling at setup. I think it is also used for
signaling when the guest runs the kernel virtio-net driver (interrupt driven),
but not for the virtio PMD (poll mode).

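
To make the setup concrete, this is roughly how a vhost-user port is wired up
on the QEMU side (a sketch; paths and sizes are placeholders, and flags may vary
between QEMU versions). The guest's RAM must come from a file-backed, shareable
region so the vswitch process can map it:

```shell
# Guest RAM is a shared, file-backed region (hugepages here) so the
# vswitch can mmap it -- this is the memory shared *down* to the host.
qemu-system-x86_64 -m 1024 \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -chardev socket,id=char0,path=/tmp/vhost-user-1 \
    -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=net0
```
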
> 
> Please excuse me if this stuff is answered somewhere else already, I can't
> seem to find it.  Thanks in advance for the response, this will help me
> message the feature to users on our end.

There is some documentation in the DPDK repo.

http://www.dpdk.org/doc/guides/prog_guide/vhost_lib.html
http://www.dpdk.org/doc/api/rte__virtio__net_8h.html

> 
> Ethan
> _______________________________________________
> dev mailing list
> dev at openvswitch.org
> http://openvswitch.org/mailman/listinfo/dev

