[ovs-discuss] vhostuser-client for containers

Subrata Nath subratanath01 at googlemail.com
Tue Oct 23 13:44:25 UTC 2018


Hi,

1) I am working with DPDK 18.02 and OVS 2.10 for containers (virtio-user
interface). I found that vhostuser is deprecated there and that the
“vhostuser-client” type is recommended instead; the relevant logs are shown
below. I could not find any documentation on how to use “vhostuser-client”
with testpmd inside a container. All of the documentation for running
testpmd inside a container uses the socket created by OVS for a vhostuser
interface
(“--vdev=net_virtio_user2,mac=00:00:00:00:00:02,path=/usr/local/var/run/openvswitch/vhost_user2”).
The “vhostuser-client” documentation covers QEMU only; nothing is available
for containers.
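
For reference, this is the kind of setup I am guessing at for the
“vhostuser-client” case (only a sketch: the bridge, port name, socket path
and core list below are placeholders, and I am not sure whether virtio-user
server mode is even available in DPDK 18.02):

    # Host side: with dpdkvhostuserclient, OVS connects to a socket that
    # the guest/container side is expected to create.
    ovs-vsctl add-port br0 vhostclient0 -- set Interface vhostclient0 \
        type=dpdkvhostuserclient \
        options:vhost-server-path=/usr/local/var/run/openvswitch/vhostclient0.sock

    # Container side: virtio-user would have to create the socket, i.e.
    # run in server mode (server=1 on the vdev).
    testpmd -l 2-3 --no-pci \
        --vdev=net_virtio_user0,mac=00:00:00:00:00:02,path=/usr/local/var/run/openvswitch/vhostclient0.sock,server=1 \
        -- -i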

2018-10-23T06:16:25.235Z|00002|dpif(revalidator6)|WARN|netdev@ovs-netdev: failed to put[modify] (Invalid argument) ufid:d060efe9-3f5c-4f9c-a5c1-70f934148956 skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(3),packet_type(ns=0,id=0),eth(src=ec:3e:f7:07:2b:00,dst=01:00:5e:00:00:01),eth_type(0x8100),vlan(vid=1600,pcp=6/0x0),encap(eth_type(0x0800),ipv4(src=10.40.64.129/0.0.0.0,dst=224.0.0.1/0.0.0.0,proto=2/0,tos=0xc0/0,ttl=1/0,frag=no)), actions:userspace(pid=0,slow_path(match))

2018-10-23T06:16:31.332Z|00003|dpif_netdev(revalidator6)|ERR|internal error parsing flow key skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(3),packet_type(ns=0,id=0),eth(src=90:e2:ba:b3:6c:ac,dst=01:00:5e:4d:7c:d5),eth_type(0x8100),vlan(vid=1600,pcp=0),encap(eth_type(0x0800),ipv4(src=10.40.64.135,dst=239.77.124.213,proto=2,tos=0xc0,ttl=1,frag=no))

2018-10-23T06:16:31.332Z|00004|dpif_netdev(revalidator6)|ERR|internal error parsing flow key skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(3),packet_type(ns=0,id=0),eth(src=ec:3e:f7:07:2b:00,dst=01:00:5e:00:00:01),eth_type(0x8100),vlan(vid=1600,pcp=6),encap(eth_type(0x0800),ipv4(src=10.40.64.129,dst=224.0.0.1,proto=2,tos=0xc0,ttl=1,frag=no))

2018-10-23T06:16:31.332Z|00005|dpif(revalidator6)|WARN|netdev@ovs-netdev: failed to put[modify] (Invalid argument) ufid:dd38c0b0-e35f-4710-a2a3-d62279dada2f skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(3),packet_type(ns=0,id=0),eth(src=90:e2:ba:b3:6c:ac,dst=01:00:5e:4d:7c:d5),eth_type(0x8100),vlan(vid=1600,pcp=0/0x0),encap(eth_type(0x0800),ipv4(src=10.40.64.135/0.0.0.0,dst=239.77.124.213/0.0.0.0,proto=2/0,tos=0xc0/0,ttl=1/0,frag=no)), actions:userspace(pid=0,slow_path(match))

2018-10-23T06:16:31.332Z|00006|dpif(revalidator6)|WARN|netdev@ovs-netdev: failed to put[modify] (Invalid argument) ufid:d060efe9-3f5c-4f9c-a5c1-70f934148956 skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(3),packet_type(ns=0,id=0),eth(src=ec:3e:f7:07:2b:00,dst=01:00:5e:00:00:01),eth_type(0x8100),vlan(vid=1600,pcp=6/0x0),encap(eth_type(0x0800),ipv4(src=10.40.64.129/0.0.0.0,dst=224.0.0.1/0.0.0.0,proto=2/0,tos=0xc0/0,ttl=1/0,frag=no)), actions:userspace(pid=0,slow_path(match))

2018-10-23T07:12:16.802Z|00096|dpdk|INFO|VHOST_CONFIG: vhost-user server: socket created, fd: 57

2018-10-23T07:12:16.803Z|00097|netdev_dpdk|INFO|Socket /usr/local/var/run/openvswitch/vhost_user2 created for vhost-user port vhost_user2

2018-10-23T07:12:16.803Z|00098|dpdk|INFO|VHOST_CONFIG: bind to /usr/local/var/run/openvswitch/vhost_user2
2018-10-23T07:12:16.803Z|00099|netdev_dpdk|WARN|dpdkvhostuser ports are considered deprecated; please migrate to dpdkvhostuserclient ports.

2) We are trying out whether OVS can be used as a load balancer. Since our
backend microservice containers will be spread across multiple servers of
the K8s cluster, is it possible to run a single DPDK OVS container on one
server only, instead of running it on every server? As per the
documentation, testpmd runs with the EAL option
“--vdev=net_virtio_user2,mac=00:00:00:00:00:02,path=/usr/local/var/run/openvswitch/vhost_user2”,
so the vhost-user socket created by DPDK-OVS is expected to be on the same
server. Is it possible to access this socket over IP when DPDK-OVS is on a
different server? With that, we could use a single OVS instance as the load
balancer.
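
For context, the setup we run today shares the OVS socket directory into the
testpmd container as a host volume mount, which is what ties the container
to the same server (the image name and the core/device options below are
only placeholders for illustration):

    # Current per-node setup: the vhost-user Unix socket directory created
    # by OVS-DPDK is bind-mounted into the container running testpmd.
    docker run -it --privileged \
        -v /usr/local/var/run/openvswitch:/usr/local/var/run/openvswitch \
        -v /dev/hugepages:/dev/hugepages \
        dpdk-testpmd-image \
        testpmd -l 2-3 --no-pci \
        --vdev=net_virtio_user2,mac=00:00:00:00:00:02,path=/usr/local/var/run/openvswitch/vhost_user2 \
        -- -i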

Regards,
Subrata