[ovs-dev] PATCH [1/1] High speed PMD physical NIC queue size

Polehn, Mike A mike.a.polehn at intel.com
Thu Jun 19 22:07:24 UTC 2014


There is an improvement in RFC 2544 zero-loss measurements, but it takes another patch to actually get a reasonable measurement with standard test equipment.

Should I redo it with the new enum change?  I am not sure about using an enum for a single constant.
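
For reference, an enum version of the same constants would look roughly like this (just a sketch, not part of the patch; the names simply mirror the #defines the patch adds, and whether this is any clearer for a pair of plain constants is the question above):

    /* Hypothetical enum alternative to the two #defines in the patch below. */
    enum nic_port_q_size {
        NIC_PORT_RX_Q_SIZE = 2048,   /* n*32, below 4096 */
        NIC_PORT_TX_Q_SIZE = 2048    /* n*32, below 4096 */
    };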

Mike Polehn

-----Original Message-----
From: Ethan Jackson [mailto:ethan at nicira.com] 
Sent: Thursday, June 19, 2014 2:54 PM
To: Polehn, Mike A
Cc: dev at openvswitch.org
Subject: Re: [ovs-dev] PATCH [1/1] High speed PMD physical NIC queue size

One more question: does this patch result in a measurable improvement in any benchmarks?  If so, would you please note it in the commit message?  If not, I'm not sure we should merge this yet.

Ethan

On Thu, Jun 19, 2014 at 2:45 PM, Polehn, Mike A <mike.a.polehn at intel.com> wrote:
> I was coming from an earlier version where the argument was first set up 
> as a number and then used in several places, including the tx cache size. 
> I didn't catch that the new third definition was being used as I moved the patch forward to try on the latest git updates before sending.
>
> There is also a queue sizing formula in the comment that is not obvious.
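>
> To spell out how I read that "(n*32<4096)" rule, here is a rough sketch (not 
> part of the patch; the checks only restate the comment, i.e. the descriptor 
> count is a multiple of 32 and stays under 4096):
>
>     #include <assert.h>
>
>     #define NIC_PORT_RX_Q_SIZE 2048  /* n*32, below 4096 */
>     #define NIC_PORT_TX_Q_SIZE 2048  /* n*32, below 4096 */
>
>     /* Compile-time restatement of the sizing rule in the comment. */
>     static_assert(NIC_PORT_RX_Q_SIZE % 32 == 0 && NIC_PORT_RX_Q_SIZE < 4096,
>                   "RX queue size must be n*32 and under 4096");
>     static_assert(NIC_PORT_TX_Q_SIZE % 32 == 0 && NIC_PORT_TX_Q_SIZE < 4096,
>                   "TX queue size must be n*32 and under 4096");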
>
>  Mike Polehn
>
> -----Original Message-----
> From: Ethan Jackson [mailto:ethan at nicira.com]
> Sent: Thursday, June 19, 2014 10:21 AM
> To: Polehn, Mike A
> Cc: dev at openvswitch.org
> Subject: Re: [ovs-dev] PATCH [1/1] High speed PMD physical NIC queue 
> size
>
> One question: why not just increase MAX_RX_QUEUE_LEN and MAX_TX_QUEUE_LEN instead of creating new #defines?
>
> Just a thought.  I'd like Pravin to review this, as I don't know this code as well as he does.
>
> Ethan
>
> On Thu, Jun 19, 2014 at 9:59 AM, Polehn, Mike A <mike.a.polehn at intel.com> wrote:
>> Large TX and RX queues are needed for high-speed 10 GbE physical NICs.
>>
>> Signed-off-by: Mike A. Polehn <mike.a.polehn at intel.com>
>>
>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>> index fbdb6b3..d1bcc73 100644
>> --- a/lib/netdev-dpdk.c
>> +++ b/lib/netdev-dpdk.c
>> @@ -70,6 +70,9 @@ static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 20);
>>
>>  #define NON_PMD_THREAD_TX_QUEUE 0
>>
>> +#define NIC_PORT_RX_Q_SIZE 2048  /* Size of Physical NIC RX Queue (n*32<4096)*/
>> +#define NIC_PORT_TX_Q_SIZE 2048  /* Size of Physical NIC TX Queue (n*32<4096)*/
>> +
>>  /* TODO: Needs per NIC value for these constants. */
>>  #define RX_PTHRESH 32 /* Default values of RX prefetch threshold reg. */
>>  #define RX_HTHRESH 32 /* Default values of RX host threshold reg. */
>> @@ -369,7 +372,7 @@ dpdk_eth_dev_init(struct netdev_dpdk *dev) OVS_REQUIRES(dpdk_mutex)
>>      }
>>
>>      for (i = 0; i < NR_QUEUE; i++) {
>> -        diag = rte_eth_tx_queue_setup(dev->port_id, i, MAX_TX_QUEUE_LEN,
>> +        diag = rte_eth_tx_queue_setup(dev->port_id, i, NIC_PORT_TX_Q_SIZE,
>>                                        dev->socket_id, &tx_conf);
>>          if (diag) {
>>              VLOG_ERR("eth dev tx queue setup error %d",diag);
>> @@ -378,7 +381,7 @@ dpdk_eth_dev_init(struct netdev_dpdk *dev) OVS_REQUIRES(dpdk_mutex)
>>      }
>>
>>      for (i = 0; i < NR_QUEUE; i++) {
>> -        diag = rte_eth_rx_queue_setup(dev->port_id, i, MAX_RX_QUEUE_LEN,
>> +        diag = rte_eth_rx_queue_setup(dev->port_id, i, NIC_PORT_RX_Q_SIZE,
>>                                        dev->socket_id,
>>                                        &rx_conf, dev->dpdk_mp->mp);
>>          if (diag) {