[ovs-dev] Bucket statistics

Marco Canini marco.canini at acm.org
Thu Jan 23 16:01:24 UTC 2014


Ben,

I do see that a struct bucket_counter *bucket_stats exists in the code, but
I believe no code is updating those counters. I'm running OVS from git,
revision 7868fbc.

grep -rn bucket_stats *

include/openflow/openflow-1.1.h:701:    /* struct ofp11_bucket_counter bucket_stats[0]; */
include/openflow/openflow-1.3.h:375:    /* struct ofp11_bucket_counter bucket_stats[0]; */
lib/ofp-print.c:2416:            if (gs.bucket_stats[bucket_i].packet_count != UINT64_MAX) {
lib/ofp-print.c:2418:                ds_put_format(s, "packet_count=%"PRIu64",", gs.bucket_stats[bucket_i].packet_count);
lib/ofp-print.c:2419:                ds_put_format(s, "byte_count=%"PRIu64"", gs.bucket_stats[bucket_i].byte_count);
lib/ofp-print.c:2423:        free(gs.bucket_stats);
lib/ofp-util.c:5861:        const struct bucket_counter *obc = &ogs->bucket_stats[i];
lib/ofp-util.c:5992: * in 'gs'.  Assigns freshly allocated memory to gs->bucket_stats for the
lib/ofp-util.c:6014:    gs->bucket_stats = NULL;
lib/ofp-util.c:6071:    gs->bucket_stats = xmalloc(gs->n_buckets * sizeof *gs->bucket_stats);
lib/ofp-util.c:6073:        gs->bucket_stats[i].packet_count = ntohll(obc[i].packet_count);
lib/ofp-util.c:6074:        gs->bucket_stats[i].byte_count = ntohll(obc[i].byte_count);
lib/ofp-util.h:991:    struct bucket_counter *bucket_stats;
ofproto/ofproto.c:5376:    ogs.bucket_stats = xmalloc(group->n_buckets * sizeof *ogs.bucket_stats);
ofproto/ofproto.c:5388:        memset(ogs.bucket_stats, 0xff,
ofproto/ofproto.c:5389:               ogs.n_buckets * sizeof *ogs.bucket_stats);
ofproto/ofproto.c:5397:    free(ogs.bucket_stats);
ofproto/ofproto-dpif.c:108:    struct bucket_counter *bucket_stats OVS_GUARDED;  /* Bucket statistics. */
ofproto/ofproto-dpif.c:3198:    if (!group->bucket_stats) {
ofproto/ofproto-dpif.c:3199:        group->bucket_stats = xcalloc(group->up.n_buckets,
ofproto/ofproto-dpif.c:3200:                                      sizeof *group->bucket_stats);
ofproto/ofproto-dpif.c:3202:        memset(group->bucket_stats, 0, group->up.n_buckets *
ofproto/ofproto-dpif.c:3203:               sizeof *group->bucket_stats);
ofproto/ofproto-dpif.c:3222:    free(group->bucket_stats);
ofproto/ofproto-dpif.c:3223:    group->bucket_stats = NULL;
ofproto/ofproto-dpif.c:3260:    memcpy(ogs->bucket_stats, group->bucket_stats,
ofproto/ofproto-dpif.c:3261:           group->up.n_buckets * sizeof *group->bucket_stats);

I can see that this function exists:
void
rule_dpif_credit_stats(struct rule_dpif *rule,
                       const struct dpif_flow_stats *stats)
{
    ovs_mutex_lock(&rule->stats_mutex);
    rule->packet_count += stats->n_packets;
    rule->byte_count += stats->n_bytes;
    rule->up.used = MAX(rule->up.used, stats->used);
    ovs_mutex_unlock(&rule->stats_mutex);
}
which updates counters for flow table entries.

I believe a similar function is needed to credit stats to group table entries
and their bucket_stats.

Here is a detailed account of what I observe. Note that the pkt_cnt reported
by OVS is 0, while with the CPqD 1.3 user switch it is 10, as expected.

In what follows, dpctl is from the CPqD OpenFlow 1.3 switch implementation.

mn --arp --mac --controller=remote --topo=single,3 --switch=ovsk

*** Creating network
*** Adding controller
Unable to contact the remote controller at 127.0.0.1:6633
*** Adding hosts:
h1 h2 h3
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1) (h3, s1)
*** Configuring hosts
h1 h2 h3
*** Starting controller
*** Starting 1 switches
s1
*** Starting CLI:

mininet> s1 dpctl unix:/var/run/openvswitch/s1.mgmt group-mod
cmd=add,type=ff,group=1 weight=0,port=2,group=any output=2
 weight=0,port=3,group=any output=3
[...]
OK.

mininet> s1 dpctl unix:/var/run/openvswitch/s1.mgmt flow-mod
cmd=add,table=0,prio=65535 in_port=1,eth_dst=00:00:00:00:00:02 apply:group=1
[...]
OK.

mininet> s1 dpctl unix:/var/run/openvswitch/s1.mgmt flow-mod
cmd=add,table=0,prio=65535 in_port=2,eth_dst=00:00:00:00:00:01
apply:output=1
[...]
OK.

mininet> h1 ping -c 10 h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=0.247 ms
64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.059 ms
64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.061 ms
64 bytes from 10.0.0.2: icmp_req=4 ttl=64 time=0.068 ms
64 bytes from 10.0.0.2: icmp_req=5 ttl=64 time=0.060 ms
64 bytes from 10.0.0.2: icmp_req=6 ttl=64 time=0.100 ms
64 bytes from 10.0.0.2: icmp_req=7 ttl=64 time=0.051 ms
64 bytes from 10.0.0.2: icmp_req=8 ttl=64 time=0.043 ms
64 bytes from 10.0.0.2: icmp_req=9 ttl=64 time=0.053 ms
64 bytes from 10.0.0.2: icmp_req=10 ttl=64 time=0.057 ms

--- 10.0.0.2 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 8997ms
rtt min/avg/max/mdev = 0.043/0.079/0.247/0.058 ms

mininet> s1 dpctl unix:/var/run/openvswitch/s1.mgmt stats-group

SENDING:
stat_req{type="grp", flags="0x0", group="all"}

Jan 23 16:40:30|00001|vconn|WARN|unix:/var/run/openvswitch/s1.mgmt:
extra-long hello:
00000000  04 00 00 10 00 00 00 04-00 01 00 08 00 00 00 10 |................|


RECEIVED:
stat_repl{type="grp", flags="0x0", stats=[{group="1", ref_cnt="1",
pkt_cnt="0", byte_cnt="0", cntrs=[{pkt_cnt="0", byte_cnt="0"},
{pkt_cnt="0", byte_cnt="0"}]}]}


Now the same run with the CPqD OpenFlow 1.3 user switch:

mn --arp --mac --controller=remote --topo=single,3 --switch=user
*** Creating network
*** Adding controller
Unable to contact the remote controller at 127.0.0.1:6633
*** Adding hosts:
h1 h2 h3
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1) (h3, s1)
*** Configuring hosts
h1 h2 h3
*** Starting controller
*** Starting 1 switches
s1
*** Starting CLI:
mininet> s1 dpctl unix:/tmp/s1 group-mod cmd=add,type=ff,group=1
weight=0,port=2,group=any output=2  weight=0,port=3,group=any output=3
[...]
OK.

mininet> s1 dpctl unix:/tmp/s1 flow-mod cmd=add,table=0,prio=65535
in_port=1,eth_dst=00:00:00:00:00:02 apply:group=1
[...]
OK.

mininet> s1 dpctl unix:/tmp/s1 flow-mod cmd=add,table=0,prio=65535
in_port=2,eth_dst=00:00:00:00:00:01 apply:output=1
[...]
OK.

mininet> h1 ping -c 10 h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=1.94 ms
64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=1.43 ms
64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=1.38 ms
64 bytes from 10.0.0.2: icmp_req=4 ttl=64 time=1.65 ms
64 bytes from 10.0.0.2: icmp_req=5 ttl=64 time=2.08 ms
64 bytes from 10.0.0.2: icmp_req=6 ttl=64 time=1.51 ms
64 bytes from 10.0.0.2: icmp_req=7 ttl=64 time=1.30 ms
64 bytes from 10.0.0.2: icmp_req=8 ttl=64 time=1.66 ms
64 bytes from 10.0.0.2: icmp_req=9 ttl=64 time=0.622 ms
64 bytes from 10.0.0.2: icmp_req=10 ttl=64 time=0.638 ms

--- 10.0.0.2 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9020ms
rtt min/avg/max/mdev = 0.622/1.424/2.083/0.461 ms

mininet> s1 dpctl unix:/tmp/s1 stats-group

SENDING:
stat_req{type="grp", flags="0x0", group="all"}


RECEIVED:
stat_repl{type="grp", flags="0x0", stats=[{group="1", ref_cnt="1",
pkt_cnt="10", byte_cnt="980", cntrs=[{pkt_cnt="10", byte_cnt="980"},
{pkt_cnt="0", byte_cnt="0"}]}]}


I hope this clarifies things,


On Wed, Jan 22, 2014 at 5:13 PM, Ben Pfaff <blp at nicira.com> wrote:

> On Mon, Jan 20, 2014 at 07:48:22AM +0100, Marco Canini wrote:
> > I am interested in reading the bucket statistics while running with
> support
> > for OpenFlow 1.3 enabled.
> > Currently when I read the bucket statistics I always see packet_count =
> 0.
> > I've gone through the code and, while I see that a bucket_stats structure
> > exists, I am unable to understand where the stats are getting
> incremented.
> > Is there support for bucket stats?
>
> Without looking at the code, I'm pretty sure there's support.  What have
> you tried?
>