[ovs-dev] [patch v10 2/2] DSCP marking on packets egressing VIF interface

Mickey Spiegel mickeys.dev at gmail.com
Tue Aug 30 21:35:42 UTC 2016


On Mon, Aug 29, 2016 at 4:34 AM, <bschanmu at redhat.com> wrote:

> ovn-northd sets 'ip.dscp' to the DSCP value
>
IMO the big question is still whether the first release of DSCP marking
should be based only on ingress port, as this patch currently suggests, or
whether it should allow DSCP marking based on arbitrary match criteria. I
will send out a separate message to generate discussion about multiple
features (ACLs, QoS, SFC) with arbitrary match criteria.
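
(For concreteness, with this patch the marking is configured purely per
ingress port, e.g.

    ovn-nbctl set Logical_Switch_Port lp1 options:qos_dscp=48

using the lp1 port from the tests as an example, so there is no way to mark
only a subset of the traffic arriving on a given port.)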

Assuming that we reach a conclusion that the first release of DSCP marking
should be based only on ingress port, a couple of comments inline below.


> Signed-off-by: Babu Shanmugam <bschanmu at redhat.com>
> ---
>  ovn/lib/logical-fields.c    |  2 +-
>  ovn/northd/ovn-northd.8.xml | 45 ++++++++++++++++++++--------
>  ovn/northd/ovn-northd.c     | 72 +++++++++++++++++++++++++++-----------------
>  ovn/ovn-nb.xml              |  6 ++++
>  ovn/ovn-sb.xml              |  5 ++++
>  tests/ovn.at                | 73 +++++++++++++++++++++++++++++++++++++++++++++
>  6 files changed, 161 insertions(+), 42 deletions(-)
>
> diff --git a/ovn/lib/logical-fields.c b/ovn/lib/logical-fields.c
> index 6dbb4ae..068c000 100644
> --- a/ovn/lib/logical-fields.c
> +++ b/ovn/lib/logical-fields.c
> @@ -134,7 +134,7 @@ ovn_init_symtab(struct shash *symtab)
>      expr_symtab_add_predicate(symtab, "ip6", "eth.type == 0x86dd");
>      expr_symtab_add_predicate(symtab, "ip", "ip4 || ip6");
>      expr_symtab_add_field(symtab, "ip.proto", MFF_IP_PROTO, "ip", true);
> -    expr_symtab_add_field(symtab, "ip.dscp", MFF_IP_DSCP, "ip", false);
> +    expr_symtab_add_field(symtab, "ip.dscp", MFF_IP_DSCP_SHIFTED, "ip",
> false);
>      expr_symtab_add_field(symtab, "ip.ecn", MFF_IP_ECN, "ip", false);
>      expr_symtab_add_field(symtab, "ip.ttl", MFF_IP_TTL, "ip", false);
>
> diff --git a/ovn/northd/ovn-northd.8.xml b/ovn/northd/ovn-northd.8.xml
> index 3448370..bf96f0e 100644
> --- a/ovn/northd/ovn-northd.8.xml
> +++ b/ovn/northd/ovn-northd.8.xml
> @@ -140,7 +140,7 @@
>        be dropped.
>      </p>
>
> -    <h3>Ingress Table 1: Ingress Port Security - IP</h3>
> +    <h3>Ingress Table 1: Ingress Port DSCP</h3>
>
>      <p>
>        Ingress table 1 contains these logical flows:
> @@ -148,6 +148,25 @@
>
>      <ul>
>        <li>
> +        One priority 100 flow for every port having DSCP setting that sets
> +        dscp header in the IP packets egressing the ports and ingressing the
> +        switch.
>

This is hard to follow. Looking at the flows you are adding in ovn-northd.c,
it seems pretty simple. I don't see anything that I would call "egressing".
How about:
+        For every port with a DSCP setting, one priority-100 flow
         that matches the <code>inport</code> on the corresponding
         switch and sets DSCP.
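
For reference, the logical flow that gets generated for a port (say lp1
with qos_dscp=34) should look something like:

    table=1 (ls_in_port_dscp), priority=100,
      match=(inport == "lp1"), action=(ip.dscp = 34; next;)

which is why I think the documentation can describe it that simply.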


> +      </li>
> +
> +      <li>
> +        One priority-0 fallback flow that matches all packets and advances to
> +        the next table.
> +      </li>
> +    </ul>
> +
> +    <h3>Ingress Table 2: Ingress Port Security - IP</h3>
> +
> +    <p>
> +      Ingress table 2 contains these logical flows:
> +    </p>
> +
> +    <ul>
> +      <li>
>          <p>
>            For each element in the port security set having one or more IPv4 or
>            IPv6 addresses (or both),
> @@ -195,10 +214,10 @@
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 2: Ingress Port Security - Neighbor discovery</h3>
> +    <h3>Ingress Table 3: Ingress Port Security - Neighbor discovery</h3>
>
>      <p>
> -      Ingress table 2 contains these logical flows:
> +      Ingress table 3 contains these logical flows:
>      </p>
>
>      <ul>
> @@ -240,7 +259,7 @@
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 3: <code>from-lport</code> Pre-ACLs</h3>
> +    <h3>Ingress Table 4: <code>from-lport</code> Pre-ACLs</h3>
>
>      <p>
>        This table prepares flows for possible stateful ACL processing in
> @@ -252,7 +271,7 @@
>        before eventually advancing to ingress table <code>ACLs</code>.
>      </p>
>
> -    <h3>Ingress Table 4: Pre-LB</h3>
> +    <h3>Ingress Table 5: Pre-LB</h3>
>
>      <p>
>        This table prepares flows for possible stateful load balancing
> processing
> @@ -268,7 +287,7 @@
>        advancing to ingress table <code>LB</code>.
>      </p>
>
> -    <h3>Ingress Table 5: Pre-stateful</h3>
> +    <h3>Ingress Table 6: Pre-stateful</h3>
>
>      <p>
>        This table prepares flows for all possible stateful processing
> @@ -279,7 +298,7 @@
>        <code>ct_next;</code> action.
>      </p>
>
> -    <h3>Ingress table 6: <code>from-lport</code> ACLs</h3>
> +    <h3>Ingress table 7: <code>from-lport</code> ACLs</h3>
>
>      <p>
>        Logical flows in this table closely reproduce those in the
> @@ -362,7 +381,7 @@
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 7: LB</h3>
> +    <h3>Ingress Table 8: LB</h3>
>
>      <p>
>        It contains a priority-0 flow that simply moves traffic to the next
> @@ -375,7 +394,7 @@
>        connection.)
>      </p>
>
> -    <h3>Ingress Table 8: Stateful</h3>
> +    <h3>Ingress Table 9: Stateful</h3>
>
>      <ul>
>        <li>
> @@ -412,7 +431,7 @@
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 9: ARP/ND responder</h3>
> +    <h3>Ingress Table 10: ARP/ND responder</h3>
>
>      <p>
>        This table implements ARP/ND responder for known IPs.  It contains
> these
> @@ -484,7 +503,7 @@ nd_na {
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 10: DHCP option processing</h3>
> +    <h3>Ingress Table 11: DHCP option processing</h3>
>
>      <p>
>        This table adds the DHCPv4 options to a DHCPv4 packet from the
> @@ -544,7 +563,7 @@ next;
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 11: DHCP responses</h3>
> +    <h3>Ingress Table 12: DHCP responses</h3>
>
>      <p>
>        This table implements DHCP responder for the DHCP replies generated
> by
> @@ -626,7 +645,7 @@ output;
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 12: Destination Lookup</h3>
> +    <h3>Ingress Table 13: Destination Lookup</h3>
>
>      <p>
>        This table implements switching behavior.  It contains these logical
> diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c
> index d7d61bf..045a9a4 100644
> --- a/ovn/northd/ovn-northd.c
> +++ b/ovn/northd/ovn-northd.c
> @@ -93,21 +93,22 @@ enum ovn_datapath_type {
>   * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
>   * S_ROUTER_OUT_DELIVERY. */
>  enum ovn_stage {
> -#define PIPELINE_STAGES                                               \
> -    /* Logical switch ingress stages. */                              \
> -    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_L2,    0, "ls_in_port_sec_l2")   \
> -    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_IP,    1, "ls_in_port_sec_ip")   \
> -    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_ND,    2, "ls_in_port_sec_nd")   \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        3, "ls_in_pre_acl")      \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         4, "ls_in_pre_lb")       \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   5, "ls_in_pre_stateful")  \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL,            6, "ls_in_acl")          \
> -    PIPELINE_STAGE(SWITCH, IN,  LB,             7, "ls_in_lb")           \
> -    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,       8, "ls_in_stateful")     \
> -    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,     9, "ls_in_arp_rsp")      \
> -    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,   10, "ls_in_dhcp_options") \
> -    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE,  11, "ls_in_dhcp_response") \
> -    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,        12, "ls_in_l2_lkup")      \
> +#define PIPELINE_STAGES                                               \
> +    /* Logical switch ingress stages. */                              \
> +    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_L2,    0, "ls_in_port_sec_l2")   \
> +    PIPELINE_STAGE(SWITCH, IN,  PORT_DSCP,      1, "ls_in_port_dscp")     \
>
>

It may be a minor optimization, but since this stage just sets a DSCP value
that is not used anywhere else in the ingress pipeline, my preference would
be to place this stage towards the end of the ingress pipeline rather than
at the beginning. Let the stages that can drop packets run first, and then
apply the DSCP marking only to whatever survives.
How about just after ACL?
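That is, something along these lines (only a sketch; the earlier stage
numbers would shift back accordingly):

    PIPELINE_STAGE(SWITCH, IN,  ACL,            6, "ls_in_acl")           \
    PIPELINE_STAGE(SWITCH, IN,  PORT_DSCP,      7, "ls_in_port_dscp")     \
    PIPELINE_STAGE(SWITCH, IN,  LB,             8, "ls_in_lb")            \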

Mickey


> +    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_IP,    2, "ls_in_port_sec_ip")   \
> +    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_ND,    3, "ls_in_port_sec_nd")   \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        4, "ls_in_pre_acl")       \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         5, "ls_in_pre_lb")        \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   6, "ls_in_pre_stateful")  \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL,            7, "ls_in_acl")           \
> +    PIPELINE_STAGE(SWITCH, IN,  LB,             8, "ls_in_lb")            \
> +    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,       9, "ls_in_stateful")      \
> +    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    10, "ls_in_arp_rsp")       \
> +    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  11, "ls_in_dhcp_options")  \
> +    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 12, "ls_in_dhcp_response") \
> +    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       13, "ls_in_l2_lkup")       \
>                                                                        \
>      /* Logical switch egress stages. */                               \
>      PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       0, "ls_out_pre_lb")     \
> @@ -2599,7 +2600,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>      struct ds actions = DS_EMPTY_INITIALIZER;
>
>      /* Build pre-ACL and ACL tables for both ingress and egress.
> -     * Ingress tables 3 and 4.  Egress tables 0 and 1. */
> +     * Ingress tables 4 and 5.  Egress tables 0 and 1. */
>      struct ovn_datapath *od;
>      HMAP_FOR_EACH (od, key_node, datapaths) {
>          if (!od->nbs) {
> @@ -2635,8 +2636,9 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>
>      /* Logical switch ingress table 0: Ingress port security - L2
>       *  (priority 50).
> -     *  Ingress table 1: Ingress port security - IP (priority 90 and 80)
> -     *  Ingress table 2: Ingress port security - ND (priority 90 and 80)
> +     *  Ingress table 1: Ingress port dscp     - IP (priority 100)
> +     *  Ingress table 2: Ingress port security - IP (priority 90 and 80)
> +     *  Ingress table 3: Ingress port security - ND (priority 90 and 80)
>       */
>      struct ovn_port *op;
>      HMAP_FOR_EACH (op, key_node, ports) {
> @@ -2664,24 +2666,38 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          ovn_lflow_add(lflows, op->od, S_SWITCH_IN_PORT_SEC_L2, 50,
>                        ds_cstr(&match), ds_cstr(&actions));
>
> +        const char *dscp = smap_get(&op->sb->options, "qos_dscp");
> +        if (dscp) {
> +            struct ds dscp_actions = DS_EMPTY_INITIALIZER;
> +            struct ds dscp_match = DS_EMPTY_INITIALIZER;
> +
> +            ds_put_format(&dscp_match, "inport == %s", op->json_key);
> +            ds_put_format(&dscp_actions, "ip.dscp = %s; next;", dscp);
> +            ovn_lflow_add(lflows, op->od, S_SWITCH_IN_PORT_DSCP, 100,
> +                          ds_cstr(&dscp_match), ds_cstr(&dscp_actions));
> +            ds_destroy(&dscp_match);
> +            ds_destroy(&dscp_actions);
> +        }
> +
>          if (op->nbsp->n_port_security) {
>              build_port_security_ip(P_IN, op, lflows);
>              build_port_security_nd(op, lflows);
>          }
>      }
>
> -    /* Ingress table 1 and 2: Port security - IP and ND, by default goto next.
> -     * (priority 0)*/
> +    /* Ingress table 1, 2 and 3: Port dscp and security - IP and ND,
> +     * by default goto next. (priority 0) */
>      HMAP_FOR_EACH (od, key_node, datapaths) {
>          if (!od->nbs) {
>              continue;
>          }
>
> +        ovn_lflow_add(lflows, od, S_SWITCH_IN_PORT_DSCP, 0, "1", "next;");
>          ovn_lflow_add(lflows, od, S_SWITCH_IN_PORT_SEC_ND, 0, "1", "next;");
>          ovn_lflow_add(lflows, od, S_SWITCH_IN_PORT_SEC_IP, 0, "1", "next;");
>      }
>
> -    /* Ingress table 9: ARP/ND responder, skip requests coming from localnet
> +    /* Ingress table 10: ARP/ND responder, skip requests coming from localnet
>       * ports. (priority 100). */
>      HMAP_FOR_EACH (op, key_node, ports) {
>          if (!op->nbsp) {
> @@ -2696,7 +2712,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          }
>      }
>
> -    /* Ingress table 9: ARP/ND responder, reply for known IPs.
> +    /* Ingress table 10: ARP/ND responder, reply for known IPs.
>       * (priority 50). */
>      HMAP_FOR_EACH (op, key_node, ports) {
>          if (!op->nbsp) {
> @@ -2767,7 +2783,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          }
>      }
>
> -    /* Ingress table 9: ARP/ND responder, by default goto next.
> +    /* Ingress table 10: ARP/ND responder, by default goto next.
>       * (priority 0)*/
>      HMAP_FOR_EACH (od, key_node, datapaths) {
>          if (!od->nbs) {
> @@ -2777,7 +2793,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          ovn_lflow_add(lflows, od, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;");
>      }
>
> -    /* Logical switch ingress table 10 and 11: DHCP options and response
> +    /* Logical switch ingress table 11 and 12: DHCP options and response
>           * priority 100 flows. */
>      HMAP_FOR_EACH (op, key_node, ports) {
>          if (!op->nbsp) {
> @@ -2856,7 +2872,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          }
>      }
>
> -    /* Ingress table 10 and 11: DHCP options and response, by default goto next.
> +    /* Ingress table 11 and 12: DHCP options and response, by default goto next.
>       * (priority 0). */
>
>      HMAP_FOR_EACH (od, key_node, datapaths) {
> @@ -2868,7 +2884,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          ovn_lflow_add(lflows, od, S_SWITCH_IN_DHCP_RESPONSE, 0, "1", "next;");
>      }
>
> -    /* Ingress table 12: Destination lookup, broadcast and multicast handling
> +    /* Ingress table 13: Destination lookup, broadcast and multicast handling
>       * (priority 100). */
>      HMAP_FOR_EACH (op, key_node, ports) {
>          if (!op->nbsp) {
> @@ -2888,7 +2904,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>                        "outport = \""MC_FLOOD"\"; output;");
>      }
>
> -    /* Ingress table 12: Destination lookup, unicast handling (priority 50), */
> +    /* Ingress table 13: Destination lookup, unicast handling (priority 50), */
>      HMAP_FOR_EACH (op, key_node, ports) {
>          if (!op->nbsp) {
>              continue;
> @@ -2935,7 +2951,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          }
>      }
>
> -    /* Ingress table 12: Destination lookup for unknown MACs (priority 0). */
> +    /* Ingress table 13: Destination lookup for unknown MACs (priority 0). */
>      HMAP_FOR_EACH (od, key_node, datapaths) {
>          if (!od->nbs) {
>              continue;
> diff --git a/ovn/ovn-nb.xml b/ovn/ovn-nb.xml
> index 42dfa4f..9d7f71a 100644
> --- a/ovn/ovn-nb.xml
> +++ b/ovn/ovn-nb.xml
> @@ -301,6 +301,12 @@
>            If set, indicates the maximum burst size for data sent from this
>            interface, in bits.
>          </column>
> +
> +        <column name="options" key="qos_dscp">
> +          If set, indicates the DSCP code to be marked on the packets ingressing
> +          the switch from the VIF interface. Value should be in the range of
> +          0 to 63 (inclusive).
> +        </column>
>        </group>
>      </group>
>
> diff --git a/ovn/ovn-sb.xml b/ovn/ovn-sb.xml
> index 6c7e60b..721aa08 100644
> --- a/ovn/ovn-sb.xml
> +++ b/ovn/ovn-sb.xml
> @@ -1889,6 +1889,11 @@ tcp.flags = RST;
>          interface, in bits.
>        </column>
>
> +      <column name="options" key="qos_dscp">
> +        If set, indicates the DSCP code to be marked on the packets egressing
> +        the VIF interface. Value should be in the range of 0 to 63 (inclusive).
> +      </column>
> +
>        <column name="options" key="qdisc_queue_id"
>                type='{"type": "integer", "minInteger": 1, "maxInteger": 61440}'>
>          Indicates the queue number on the physical device. This is same as the
> diff --git a/tests/ovn.at b/tests/ovn.at
> index 4d75cae..1a7a4a0 100644
> --- a/tests/ovn.at
> +++ b/tests/ovn.at
> @@ -5006,3 +5006,76 @@ AT_CHECK([ovn-sbctl find MAC_Binding], [0], [])
>  OVN_CLEANUP([hv1])
>
>  AT_CLEANUP
> +
> +AT_SETUP([ovn -- DSCP marking - check NB to SB DB transfer])
> +AT_KEYWORDS([ovn])
> +ovn_start
> +
> +# Configure the Northbound database
> +ovn-nbctl ls-add lsw0
> +
> +ovn-nbctl lsp-add lsw0 lp1
> +ovn-nbctl lsp-set-addresses lp1 "f0:00:00:00:00:01 1.1.1.1"
> +
> +ovn-nbctl set Logical_Switch_Port lp1 options:qos_dscp=34
> +AT_CHECK([ovn-sbctl get Port_Binding lp1 options:qos_dscp], [0], ["34"
> +])
> +AT_CLEANUP
> +
> +AT_SETUP([ovn -- DSCP marking check])
> +AT_KEYWORDS([ovn])
> +ovn_start
> +
> +# Configure the Northbound database
> +ovn-nbctl ls-add lsw0
> +
> +ovn-nbctl lsp-add lsw0 lp1
> +ovn-nbctl lsp-add lsw0 lp2
> +ovn-nbctl lsp-set-addresses lp1 f0:00:00:00:00:01
> +ovn-nbctl lsp-set-addresses lp2 f0:00:00:00:00:02
> +ovn-nbctl lsp-set-port-security lp1 f0:00:00:00:00:01
> +ovn-nbctl lsp-set-port-security lp2 f0:00:00:00:00:02
> +net_add n1
> +sim_add hv
> +as hv
> +ovs-vsctl add-br br-phys
> +ovn_attach n1 br-phys 192.168.0.1
> +ovs-vsctl add-port br-int vif1 -- set Interface vif1 external-ids:iface-id=lp1 options:tx_pcap=vif1-tx.pcap options:rxq_pcap=vif1-rx.pcap ofport-request=1
> +ovs-vsctl add-port br-int vif2 -- set Interface vif2 external-ids:iface-id=lp2 options:tx_pcap=vif2-tx.pcap options:rxq_pcap=vif2-rx.pcap ofport-request=2
> +
> +# check at L2
> +AT_CHECK([ovs-appctl ofproto/trace br-int 'in_port=1,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02' -generate], [0], [stdout])
> +AT_CHECK([grep "Final flow:" stdout], [0],[Final flow: reg13=0x2,reg14=0x1,reg15=0x2,metadata=0x1,in_port=1,vlan_tci=0x0000,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,dl_type=0x0000
> +])
> +
> +# check at L3 without dscp marking
> +AT_CHECK([ovs-appctl ofproto/trace br-int 'in_port=1,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,dl_type=0x800,nw_src=1.1.1.1,nw_dst=1.1.1.2' -generate], [0], [stdout])
> +AT_CHECK([grep "Final flow:" stdout], [0],[Final flow: ip,reg13=0x2,reg14=0x1,reg15=0x2,metadata=0x1,in_port=1,vlan_tci=0x0000,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,nw_src=1.1.1.1,nw_dst=1.1.1.2,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
> +])
> +
> +# Mark DSCP with a valid value
> +ovn-nbctl set Logical_Switch_Port lp1 options:qos_dscp=48
> +AT_CHECK([ovs-appctl ofproto/trace br-int 'in_port=1,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,dl_type=0x800,nw_src=1.1.1.1,nw_dst=1.1.1.2' -generate], [0], [stdout])
> +AT_CHECK([grep "Final flow:" stdout], [0],[Final flow: ip,reg13=0x2,reg14=0x1,reg15=0x2,metadata=0x1,in_port=1,vlan_tci=0x0000,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,nw_src=1.1.1.1,nw_dst=1.1.1.2,nw_proto=0,nw_tos=192,nw_ecn=0,nw_ttl=0
> +])
> +
> +# Update the DSCP marking
> +ovn-nbctl set Logical_Switch_Port lp1 options:qos_dscp=63
> +AT_CHECK([ovs-appctl ofproto/trace br-int 'in_port=1,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,dl_type=0x800,nw_src=1.1.1.1,nw_dst=1.1.1.2' -generate], [0], [stdout])
> +AT_CHECK([grep "Final flow:" stdout], [0],[Final flow: ip,reg13=0x2,reg14=0x1,reg15=0x2,metadata=0x1,in_port=1,vlan_tci=0x0000,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,nw_src=1.1.1.1,nw_dst=1.1.1.2,nw_proto=0,nw_tos=252,nw_ecn=0,nw_ttl=0
> +])
> +
> +# Mark DSCP with invalid value
> +ovn-nbctl set Logical_Switch_Port lp1 options:qos_dscp=64
> +AT_CHECK([ovs-appctl ofproto/trace br-int 'in_port=1,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,dl_type=0x800,nw_src=1.1.1.1,nw_dst=1.1.1.2' -generate], [0], [stdout])
> +AT_CHECK([grep "Final flow:" stdout], [0],[Final flow: ip,reg13=0x2,reg14=0x1,reg15=0x2,metadata=0x1,in_port=1,vlan_tci=0x0000,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,nw_src=1.1.1.1,nw_dst=1.1.1.2,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
> +])
> +
> +# Disable DSCP marking
> +ovn-nbctl clear Logical_Switch_Port lp1 options
> +AT_CHECK([ovs-appctl ofproto/trace br-int 'in_port=1,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,dl_type=0x800,nw_src=1.1.1.1,nw_dst=1.1.1.2' -generate], [0], [stdout])
> +AT_CHECK([grep "Final flow:" stdout], [0],[Final flow: ip,reg13=0x2,reg14=0x1,reg15=0x2,metadata=0x1,in_port=1,vlan_tci=0x0000,dl_src=f0:00:00:00:00:01,dl_dst=f0:00:00:00:00:02,nw_src=1.1.1.1,nw_dst=1.1.1.2,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
> +])
> +
> +OVN_CLEANUP([hv])
> +AT_CLEANUP
> --
> 1.9.1
>
> _______________________________________________
> dev mailing list
> dev at openvswitch.org
> http://openvswitch.org/mailman/listinfo/dev
>
