[ovs-git] [openvswitch/ovs] 78c948: lib/match: Do not format undefined fields.

GitHub noreply at github.com
Mon Oct 6 22:34:58 UTC 2014


  Branch: refs/heads/master
  Home:   https://github.com/openvswitch/ovs
  Commit: 78c9486d863bf78b6447e104356fb133fc43f400
      https://github.com/openvswitch/ovs/commit/78c9486d863bf78b6447e104356fb133fc43f400
  Author: Jarno Rajahalme <jrajahalme at nicira.com>
  Date:   2014-10-06 (Mon, 06 Oct 2014)

  Changed paths:
    M lib/flow.c
    M lib/flow.h
    M lib/match.c
    M ofproto/ofproto-dpif-upcall.c
    M tests/dpif-netdev.at
    M tests/ofp-print.at
    M tests/ofproto-dpif.at
    M tests/ofproto.at
    M tests/vlan-splinters.at

  Log Message:
  -----------
  lib/match: Do not format undefined fields.

Add function flow_wildcards_init_for_packet() that can be used to set
sensible wildcards when megaflows are disabled.  Before this, we set
all the mask bits to ones, which caused tunnel, MPLS, and/or transport
port fields to be printed even for packets for which they make no sense.

This has the side effect of generating different megaflow masks for
different packet types, so there will be more than one kind of mask in
the datapath classifier.  This should not make a practical difference,
as megaflows should not be disabled when performance is important.

Signed-off-by: Jarno Rajahalme <jrajahalme at nicira.com>
Acked-by: Ben Pfaff <blp at nicira.com>
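The idea behind the new function can be sketched roughly as follows.  This is
a simplified, hypothetical illustration, not the real struct flow layout or
the actual flow_wildcards_init_for_packet() implementation; all names
(mini_flow, mini_wc, mini_wc_init_for_packet) are invented for the example:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins for struct flow / struct flow_wildcards.
 * Real flows carry many more fields; these three suffice to show
 * the per-packet masking idea. */
struct mini_flow {
    uint32_t tun_id;      /* Nonzero only for tunneled packets. */
    uint8_t  nw_proto;    /* IP protocol number, 0 if not IP. */
    uint16_t tp_src;      /* Transport source port (TCP/UDP only). */
};

struct mini_wc {
    struct mini_flow masks;   /* 1-bits = exact match, 0-bits = wildcard. */
};

/* Instead of setting every mask bit to ones, set exact-match bits only
 * for fields that are actually defined for this particular packet. */
static void
mini_wc_init_for_packet(struct mini_wc *wc, const struct mini_flow *flow)
{
    memset(&wc->masks, 0, sizeof wc->masks);

    if (flow->tun_id) {
        wc->masks.tun_id = UINT32_MAX;   /* Tunnel metadata present. */
    }
    wc->masks.nw_proto = UINT8_MAX;
    if (flow->nw_proto == 6 || flow->nw_proto == 17) {  /* TCP or UDP. */
        wc->masks.tp_src = UINT16_MAX;   /* Ports only make sense here. */
    }
}
```

A non-TCP/UDP packet thus leaves the transport-port mask all-wild, so those
fields are never formatted for it.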


  Commit: 60df616ff6a815920048e0ebb5d2990aa0054aa2
      https://github.com/openvswitch/ovs/commit/60df616ff6a815920048e0ebb5d2990aa0054aa2
  Author: Jarno Rajahalme <jrajahalme at nicira.com>
  Date:   2014-10-06 (Mon, 06 Oct 2014)

  Changed paths:
    M lib/meta-flow.c

  Log Message:
  -----------
  lib/meta-flow: Index correct MPLS lse in mf_is_all_wild().

mf_is_all_wild() should index the first LSE for all parts of the LSE
(label, TC, BOS).

Signed-off-by: Jarno Rajahalme <jrajahalme at nicira.com>
Acked-by: Ben Pfaff <blp at nicira.com>
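The point of the fix can be illustrated with a small sketch.  The label, TC,
and BOS bits all live in the same 32-bit label stack entry, so a wildcard
check for any of them must look at the first LSE.  The struct and function
below are invented for illustration and are not the real meta-flow code; the
bit layout follows the standard MPLS label stack encoding:

```c
#include <stdbool.h>
#include <stdint.h>

/* Standard MPLS label stack entry layout (RFC 3032). */
#define MPLS_LABEL_MASK 0xfffff000u  /* Bits 12..31: label. */
#define MPLS_TC_MASK    0x00000e00u  /* Bits 9..11: traffic class. */
#define MPLS_BOS_MASK   0x00000100u  /* Bit 8: bottom of stack. */

struct mini_mpls_mask {
    uint32_t mpls_lse[3];   /* LSE masks, outermost entry first. */
};

/* Returns true if the given part (label, TC, or BOS) is fully
 * wildcarded.  Always indexes the first LSE, whichever part is being
 * queried; indexing by a per-part field number would read the wrong
 * stack entry. */
static bool
mpls_part_is_all_wild(const struct mini_mpls_mask *mask, uint32_t part_mask)
{
    return !(mask->mpls_lse[0] & part_mask);
}
```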


  Commit: 22d38fca74ca5be26dda7e8ede02ba7f33170222
      https://github.com/openvswitch/ovs/commit/22d38fca74ca5be26dda7e8ede02ba7f33170222
  Author: Jarno Rajahalme <jrajahalme at nicira.com>
  Date:   2014-10-06 (Mon, 06 Oct 2014)

  Changed paths:
    M lib/flow.c
    M lib/odp-util.c
    M ofproto/ofproto-dpif-xlate.c
    M tests/ofproto-dpif.at

  Log Message:
  -----------
  lib: Fix MPLS masking.

Previously, we masked labels that were not present in the incoming packet.

Signed-off-by: Jarno Rajahalme <jrajahalme at nicira.com>
Acked-by: Ben Pfaff <blp at nicira.com>


  Commit: ee58b46960f3a922d5b4d426afc62cfa177819b4
      https://github.com/openvswitch/ovs/commit/ee58b46960f3a922d5b4d426afc62cfa177819b4
  Author: Jarno Rajahalme <jrajahalme at nicira.com>
  Date:   2014-10-06 (Mon, 06 Oct 2014)

  Changed paths:
    M lib/cmap.c
    M lib/cmap.h

  Log Message:
  -----------
  lib/cmap: Return number of nodes from functions modifying the cmap.

We already update the count field as the last step of these functions,
so returning the current count is very cheap.  Callers that care about
the count become a bit more efficient, as they avoid an extra
non-inlinable function call.

Signed-off-by: Jarno Rajahalme <jrajahalme at nicira.com>
Acked-by: Ben Pfaff <blp at nicira.com>
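The pattern is simple enough to sketch in miniature.  The names below
(mini_map, mini_map_insert, mini_map_remove) are hypothetical stand-ins, not
the cmap API: since the count is updated as the last step of every modifying
function anyway, returning it costs nothing, and callers no longer need a
separate size query afterwards:

```c
#include <stddef.h>

/* Toy container standing in for a cmap: only the count matters here. */
struct mini_map {
    size_t n;               /* Element count, updated on every change. */
};

/* Modifying functions return the new count "for free", since updating
 * it is their last step anyway. */
static size_t
mini_map_insert(struct mini_map *map /* , node, hash, ... */)
{
    /* ... link the node into the map here ... */
    return ++map->n;
}

static size_t
mini_map_remove(struct mini_map *map /* , node, hash, ... */)
{
    /* ... unlink the node here ... */
    return --map->n;
}
```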


  Commit: 44a48152d82b820f0e9d090c31d89ab8d7f9c6bf
      https://github.com/openvswitch/ovs/commit/44a48152d82b820f0e9d090c31d89ab8d7f9c6bf
  Author: Jarno Rajahalme <jrajahalme at nicira.com>
  Date:   2014-10-06 (Mon, 06 Oct 2014)

  Changed paths:
    M tests/test-cmap.c

  Log Message:
  -----------
  tests/test-cmap: Balance benchmarks between cmap and hmap.

The test cases have been carefully crafted so that we perform the same
number of "overhead" operations in each case.  Earlier, with no
mutations, the number of random number generations was different for
the hmap and cmap test cases.  The hmap test was also missing an
ignore() call.
Now the numbers look like this:

$ tests/ovstest test-cmap benchmark 2000000 8 0
Benchmarking with n=2000000, 8 threads, 0.00% mutations:
cmap insert:    597 ms
cmap iterate:    65 ms
cmap search:    299 ms
cmap destroy:   251 ms

hmap insert:    243 ms
hmap iterate:   201 ms
hmap search:    299 ms
hmap destroy:   202 ms

So it seems search on cmap can be as fast as on hmap in the
single-threaded case.

Signed-off-by: Jarno Rajahalme <jrajahalme at nicira.com>
Acked-by: Ben Pfaff <blp at nicira.com>
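The ignore() call matters because a benchmark loop whose result is never used
can be optimized away entirely, making that variant look artificially fast.
A common way to implement such a sink is a volatile store; the code below is
a hypothetical sketch of that idea, not the actual helper from
tests/test-cmap.c:

```c
#include <stddef.h>
#include <stdint.h>

/* Volatile sink: the compiler must perform this store, so any value
 * fed into ignore() keeps the computation that produced it alive. */
static volatile uint64_t ignore_sink;

static void
ignore(uint64_t value)
{
    ignore_sink = value;
}

/* A benchmark loop without the ignore() call could be eliminated as
 * dead code if the caller discards 'hits'. */
static uint64_t
benchmark_search(const uint32_t *values, size_t n, uint32_t needle)
{
    uint64_t hits = 0;
    for (size_t i = 0; i < n; i++) {
        hits += values[i] == needle;
    }
    ignore(hits);
    return hits;
}
```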


  Commit: 6b3b75b7b2dd41a23c3ca3311a6b0f8a7d84f20b
      https://github.com/openvswitch/ovs/commit/6b3b75b7b2dd41a23c3ca3311a6b0f8a7d84f20b
  Author: Jarno Rajahalme <jrajahalme at nicira.com>
  Date:   2014-10-06 (Mon, 06 Oct 2014)

  Changed paths:
    M lib/cmap.c

  Log Message:
  -----------
  lib/cmap: More efficient cmap_find().

This makes cmap_find() about 10% faster on GCC 4.7 (-O2 -g).

Signed-off-by: Jarno Rajahalme <jrajahalme at nicira.com>
Acked-by: Ben Pfaff <blp at nicira.com>


  Commit: 5c416811ee851961052feb5330c6fd2227b314a8
      https://github.com/openvswitch/ovs/commit/5c416811ee851961052feb5330c6fd2227b314a8
  Author: Jarno Rajahalme <jrajahalme at nicira.com>
  Date:   2014-10-06 (Mon, 06 Oct 2014)

  Changed paths:
    M lib/cmap.c

  Log Message:
  -----------
  lib/cmap: Use non-atomic access to hash.

We use the 'counter' as a "lock" providing acquire-release
semantics.  Therefore the memory accesses between the atomic accesses
to 'counter' can be normal, non-atomic ones.  cmap_node.next still
needs to be RCU, so that cannot be changed.

For the writer this is straightforward, as we first acquire-read the
counter and after all the changes we release-store the counter.  For
the reader this is a bit more complex, as we need to make sure the
last counter read is not reordered with the preceding read operations
on the bucket contents.

Also rearrange code to benefit from the fact that hash values are
unique in any bucket.

This patch seems to make cmap_insert() a bit faster.

Signed-off-by: Jarno Rajahalme <jrajahalme at nicira.com>
Acked-by: Ben Pfaff <blp at nicira.com>
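This is essentially a seqlock-style scheme, and can be sketched as below.
The struct and function names are illustrative, not the real cmap bucket
layout; the actual code is subtler (e.g. formally, C11 would want the
protected data read with relaxed atomics, while cmap relies on plain
accesses being safe in practice):

```c
#include <stdatomic.h>
#include <stdint.h>

struct mini_bucket {
    atomic_uint_least32_t counter;  /* Even: stable; odd: write going on. */
    uint32_t hash;                  /* Plain, non-atomic protected field. */
};

/* Writer: bump the counter to odd, do plain writes, bump it back to
 * even with release semantics so readers see a consistent snapshot. */
static void
mini_bucket_set_hash(struct mini_bucket *b, uint32_t hash)
{
    uint32_t c = atomic_load_explicit(&b->counter, memory_order_acquire);

    atomic_store_explicit(&b->counter, c + 1, memory_order_release);
    b->hash = hash;                 /* Plain store between atomic ones. */
    atomic_store_explicit(&b->counter, c + 2, memory_order_release);
}

/* Reader: retry until the counter is even and unchanged across the
 * plain reads of the bucket contents. */
static uint32_t
mini_bucket_get_hash(struct mini_bucket *b)
{
    uint32_t c1, c2, hash;

    do {
        do {                        /* Wait for an even (stable) value. */
            c1 = atomic_load_explicit(&b->counter, memory_order_acquire);
        } while (c1 & 1);

        hash = b->hash;             /* Plain load of the protected data. */

        /* Keep the second counter read from being reordered before the
         * data read above, then retry if a writer intervened. */
        atomic_thread_fence(memory_order_acquire);
        c2 = atomic_load_explicit(&b->counter, memory_order_relaxed);
    } while (c1 != c2);

    return hash;
}
```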


  Commit: 55847abee8fdb45f06ef94764d766ee81abb9ac4
      https://github.com/openvswitch/ovs/commit/55847abee8fdb45f06ef94764d766ee81abb9ac4
  Author: Jarno Rajahalme <jrajahalme at nicira.com>
  Date:   2014-10-06 (Mon, 06 Oct 2014)

  Changed paths:
    M lib/classifier.c
    M lib/cmap.c
    M lib/cmap.h
    M lib/dpif-netdev.c

  Log Message:
  -----------
  lib/cmap: split up cmap_find().

This makes the following patch easier and cleans up the code.

Explicit "inline" keywords seem necessary to prevent performance
regression on cmap_find() with GCC 4.7 -O2.

Signed-off-by: Jarno Rajahalme <jrajahalme at nicira.com>
Acked-by: Ben Pfaff <blp at nicira.com>


  Commit: 52a524eb20462dd20d2e4e38d0fe97c07de040a7
      https://github.com/openvswitch/ovs/commit/52a524eb20462dd20d2e4e38d0fe97c07de040a7
  Author: Jarno Rajahalme <jrajahalme at nicira.com>
  Date:   2014-10-06 (Mon, 06 Oct 2014)

  Changed paths:
    M lib/bitmap.h
    M lib/classifier.c
    M lib/classifier.h
    M lib/cmap.c
    M lib/cmap.h
    M lib/dpif-netdev.c
    M tests/test-cmap.c

  Log Message:
  -----------
  lib/cmap: cmap_find_batch().

Batching the cmap find improves the memory behavior with large cmaps
and can make searches twice as fast:

$ tests/ovstest test-cmap benchmark 2000000 8 0.1 16
Benchmarking with n=2000000, 8 threads, 0.10% mutations, batch size 16:
cmap insert:    533 ms
cmap iterate:    57 ms
batch search:   146 ms
cmap destroy:   233 ms

cmap insert:    552 ms
cmap iterate:    56 ms
cmap search:    299 ms
cmap destroy:   229 ms

hmap insert:    222 ms
hmap iterate:   198 ms
hmap search:   2061 ms
hmap destroy:   209 ms

Batch size 1 has a small performance penalty, but all other batch sizes
are faster than non-batched cmap_find().  Batch size 16 was
experimentally found to be better than 8 or 32, so now
classifier_lookup_miniflow_batch() performs the cmap find operations
in batches of 16.

Signed-off-by: Jarno Rajahalme <jrajahalme at nicira.com>
Acked-by: Ben Pfaff <blp at nicira.com>
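The reason batching helps memory behavior is that the first memory access for
every key in the batch can be issued (prefetched) before any single lookup
completes, so cache misses overlap instead of serializing.  A minimal sketch
of that two-pass structure, with invented names rather than the real cmap
API:

```c
#include <stddef.h>
#include <stdint.h>

#define BATCH 16   /* The batch size found best in the benchmarks above. */

struct mini_entry { uint32_t hash; int value; };
struct mini_table { struct mini_entry *slots; uint32_t mask; };

static void
mini_find_batch(const struct mini_table *t,
                const uint32_t hashes[BATCH], int results[BATCH])
{
    const struct mini_entry *probe[BATCH];

    /* Pass 1: compute every slot address and start fetching it, so the
     * cache misses for all BATCH lookups are in flight at once. */
    for (int i = 0; i < BATCH; i++) {
        probe[i] = &t->slots[hashes[i] & t->mask];
#if defined(__GNUC__)
        __builtin_prefetch(probe[i]);
#endif
    }

    /* Pass 2: by now the earlier prefetches have (hopefully) landed,
     * so these dereferences mostly hit the cache. */
    for (int i = 0; i < BATCH; i++) {
        results[i] = probe[i]->hash == hashes[i] ? probe[i]->value : -1;
    }
}
```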


Compare: https://github.com/openvswitch/ovs/compare/7f8350b09dc2...52a524eb2046

