[ovs-discuss] Is the lock in dpdk_vhost_send really needed?

钢锁0310 liw at dtdream.com
Fri May 29 09:42:56 UTC 2015


static void
__netdev_dpdk_vhost_send(struct netdev *netdev, struct dp_packet **pkts,
                         int cnt, bool may_steal)
{
    struct netdev_dpdk *vhost_dev = netdev_dpdk_cast(netdev);
    struct virtio_net *virtio_dev = netdev_dpdk_get_virtio(vhost_dev);
    struct dp_packet **cur_pkts = pkts;
    unsigned int tx_pkts;
    /* ... */
    rte_spinlock_lock(&vhost_dev->vhost_tx_lock);

    do {
        tx_pkts = rte_vhost_enqueue_burst(virtio_dev, VIRTIO_RXQ,
                                          (struct rte_mbuf **) cur_pkts, cnt);
        /* Retry with the packets that did not fit. */
        cnt -= tx_pkts;
        cur_pkts = &cur_pkts[tx_pkts];
    } while (cnt);
    rte_spinlock_unlock(&vhost_dev->vhost_tx_lock);
    /* ... */
}
In rte_vhost_enqueue_burst there is already a reservation step for the vring (shown below). Is the spinlock around the enqueue loop really needed? If it is, what exactly does it protect?
Thanks for explaining.
static inline uint32_t __attribute__((always_inline))
virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
              struct rte_mbuf **pkts, uint32_t count)
{
    /* ... */
    /*
     * As many data cores may want access to available buffers,
     * they need to be reserved.
     */
    do {
        res_base_idx = vq->last_used_idx_res;
        /* ... */
        res_end_idx = res_base_idx + count;
        /* vq->last_used_idx_res is atomically updated. */
        /* TODO: Allow to disable cmpset if no concurrency in application. */
        success = rte_atomic16_cmpset(&vq->last_used_idx_res,
                                      res_base_idx, res_end_idx);
    } while (unlikely(success == 0));
    /* ... */
}
*********************RTFSC*********************