[ovs-dev] [PATCH v3 2/2] ovn-northd: Add logical flows to support native DNS

Guru Shetty guru at ovn.org
Mon Mar 27 16:15:03 UTC 2017


On 27 March 2017 at 07:13, <nusiddiq at redhat.com> wrote:

> From: Numan Siddique <nusiddiq at redhat.com>
>
> OVN implements native DNS resolution which can be used to resolve the
> internal DNS names belonging to a logical datapath.
>
> To support this, a new table 'DNS' is added in the NB DB. A new column
> 'dns_lookups' is added to the 'Logical_Switch' table, which references
> the 'DNS' table.
>
> The following flows are added for each logical switch which has at least
> one DNS entry in the 'dns_lookups' column:
>  - A logical flow in the DNS_LOOKUP stage which uses the action 'dns_lookup'
>    to transform the DNS query into a DNS reply packet and advances
>    to the next stage - DNS_RESPONSE.
>
>  - A logical flow in the DNS_RESPONSE stage which implements the DNS
>    responder by sending the DNS reply from the previous stage back to the
>    inport.
>
> Signed-off-by: Numan Siddique <nusiddiq at redhat.com>
> ---
>  ovn/northd/ovn-northd.8.xml |  88 ++++++++++-
>  ovn/northd/ovn-northd.c     | 150 ++++++++++++++++++-
>  ovn/ovn-nb.ovsschema        |  16 +-
>  ovn/ovn-nb.xml              |  20 ++-
>  ovn/utilities/ovn-nbctl.c   |   3 +
>  tests/ovn.at                | 353 ++++++++++++++++++++++++++++++++++++++++++++++++
>  6 files changed, 618 insertions(+), 12 deletions(-)
>
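To make the intended CMS workflow concrete, this is essentially what the
test below does (names and addresses are just the test's examples):

    # Create a DNS record and attach it to a logical switch.
    dns1=$(ovn-nbctl create DNS hostname=vm1.ovn.org \
           ip_addresses="10.0.0.4 aef0\:\:4")
    ovn-nbctl add Logical_switch ls1 dns_lookups $dns1

    # ovn-northd then syncs the record into the southbound DNS table:
    ovn-sbctl list DNS
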
> diff --git a/ovn/northd/ovn-northd.8.xml b/ovn/northd/ovn-northd.8.xml
> index ab8fd88..a3fe2a8 100644
> --- a/ovn/northd/ovn-northd.8.xml
> +++ b/ovn/northd/ovn-northd.8.xml
> @@ -724,7 +724,73 @@ output;
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 13 Destination Lookup</h3>
> +    <h3>Ingress Table 13 DNS Lookup</h3>
> +
> +    <p>
> +      This table looks up and resolves the DNS names of the logical ports,
> +      if they are configured with host names.
> +    </p>
> +
> +    <ul>
> +      <li>
> +        <p>
> +          A priority-100 logical flow for each logical switch datapath,
> +          added if at least one of its logical ports is configured with
> +          <code>hostname</code>, which matches IPv4 and IPv6 packets with
> +          <code>udp.dst</code> == 53, applies the action
> +          <code>dns_lookup</code>, and advances the packet to the next
> +          table.
> +        </p>
> +
> +        <pre>
> +reg0[4] = dns_lookup(); next;
> +        </pre>
> +
> +        <p>
> +          For valid DNS packets, this transforms the packet into a DNS
> +          reply if the DNS name can be resolved, and stores 1 into reg0[4].
> +          For failed DNS resolution or other kinds of packets, it just
> +          stores 0 into reg0[4]. Either way, it continues to the next table.
> +        </p>
> +      </li>
> +    </ul>
> +
> +    <h3>Ingress Table 14 DNS Responses</h3>
> +
> +    <p>
> +      This table implements the DNS responder for the DNS replies generated by
> +      the previous table.
> +    </p>
> +
> +    <ul>
> +      <li>
> +        <p>
> +          A priority-100 logical flow for each logical switch datapath,
> +          added if at least one of its logical ports is configured with
> +          <code>hostname</code>, which matches IPv4 and IPv6 packets with
> +          <code>udp.dst == 53 &amp;&amp; reg0[4] == 1</code> and responds
> +          back to the <code>inport</code> after applying these
> +          actions.  If <code>reg0[4]</code> is set to 1, it means that the
> +          action <code>dns_lookup</code> was successful.
> +        </p>
> +
> +        <pre>
> +eth.dst &lt;&#45;&gt; eth.src;
> +ip4.src &lt;&#45;&gt; ip4.dst;
> +udp.dst = udp.src;
> +udp.src = 53;
> +outport = inport;
> +flags.loopback = 1;
> +output;
> +        </pre>
> +
> +        <p>
> +          (This terminates ingress packet processing; the packet does not
> +          go to the next ingress table.)
> +        </p>
> +      </li>
> +    </ul>
> +
> +    <h3>Ingress Table 15 Destination Lookup</h3>
>
>      <p>
>        This table implements switching behavior.  It contains these logical
> @@ -834,11 +900,23 @@ output;
>      </p>
>
>      <p>
> -      Also a priority 34000 logical flow is added for each logical port which
> -      has DHCPv4 options defined to allow the DHCPv4 reply packet and which has
> -      DHCPv6 options defined to allow the DHCPv6 reply packet from the
> -      <code>Ingress Table 12: DHCP responses</code>.
> +      Also the following flows are added.
>      </p>
> +    <ul>
> +      <li>
> +        A priority-34000 logical flow is added for each logical port which
> +        has DHCPv4 options defined to allow the DHCPv4 reply packet and which
> +        has DHCPv6 options defined to allow the DHCPv6 reply packet from the
> +        <code>Ingress Table 12: DHCP responses</code>.
> +      </li>
> +
> +      <li>
> +        A priority-34000 logical flow for each logical switch datapath, if at
> +        least one of its logical ports is configured with hostname, which
> +        allows the DNS reply packet from the
> +        <code>Ingress Table 14: DNS responses</code>.
> +      </li>
> +    </ul>
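
As I read build_acls() below, the DNS flow described in the second bullet
comes out, per datapath, as a single flow along these lines:

    priority 34000, match "udp && udp.src == 53",
    action "ct_commit; next;"  (or just "next;" when the datapath has no
                                stateful ACLs)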
>
>      <h3>Egress Table 7: Egress Port Security - IP</h3>
>
> diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c
> index 8c8f16b..d125651 100644
> --- a/ovn/northd/ovn-northd.c
> +++ b/ovn/northd/ovn-northd.c
> @@ -112,7 +112,9 @@ enum ovn_stage {
>      PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    10, "ls_in_arp_rsp")      \
>      PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  11, "ls_in_dhcp_options") \
>      PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 12, "ls_in_dhcp_response") \
> -    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       13, "ls_in_l2_lkup")      \
> +    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    13, "ls_in_dns_lookup")   \
> +    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  14, "ls_in_dns_response") \
> +    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       15, "ls_in_l2_lkup")      \
>                                                                        \
>      /* Logical switch egress stages. */                               \
>      PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       0, "ls_out_pre_lb")     \
> @@ -160,6 +162,7 @@ enum ovn_stage {
>  #define REGBIT_CONNTRACK_COMMIT "reg0[1]"
>  #define REGBIT_CONNTRACK_NAT    "reg0[2]"
>  #define REGBIT_DHCP_OPTS_RESULT "reg0[3]"
> +#define REGBIT_DNS_LOOKUP_RESULT "reg0[4]"
>
>  /* Register definitions for switches and routers. */
>  #define REGBIT_NAT_REDIRECT     "reg9[0]"
> @@ -2815,7 +2818,13 @@ build_acls(struct ovn_datapath *od, struct hmap *lflows)
>      }
>
>      /* Add 34000 priority flow to allow DHCP reply from ovn-controller to all
> -     * logical ports of the datapath if the CMS has configured DHCPv4 options*/
> +     * logical ports of the datapath if the CMS has configured DHCPv4 options.
> +     *
> +     * Add one 34000 priority flow to allow DNS reply from ovn-controller to
> +     * all logical ports of the datapath if the CMS has configured DNS
> +     * parameters for at least one logical port.
> +     */
> +    bool dns_flow_added = false;
>      for (size_t i = 0; i < od->nbs->n_ports; i++) {
>          if (od->nbs->ports[i]->dhcpv4_options) {
>              const char *server_id = smap_get(
> @@ -2865,6 +2874,16 @@ build_acls(struct ovn_datapath *od, struct hmap *lflows)
>                  ds_destroy(&match);
>              }
>          }
> +
> +        if (!dns_flow_added && smap_get(&od->nbs->ports[i]->options,
> +                                        "hostname")) {
> +            const char *actions = has_stateful ? "ct_commit; next;" :
> +                "next;";
> +            ovn_lflow_add(
> +                lflows, od, S_SWITCH_OUT_ACL, 34000, "udp && udp.src == 53",
> +                actions);
> +            dns_flow_added = true;
> +        }
>      }
>  }
>
> @@ -3303,8 +3322,43 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>          }
>      }
>
> +    /* Logical switch ingress table 13 and 14: DNS lookup and response
> +     * priority 100 flows.*/
> +    HMAP_FOR_EACH (od, key_node, datapaths) {
> +        if (!od->nbs || !od->nbs->n_dns_lookups) {
> +           continue;
> +        }
> +
> +        struct ds match;
> +        struct ds action;
> +        ds_init(&match);
> +        ds_init(&action);
> +        ds_put_cstr(&match, "ip && udp.dst == 53");
> +        ds_put_format(&action,
> +                      REGBIT_DNS_LOOKUP_RESULT" = dns_lookup(); next;");
> +        ovn_lflow_add(lflows, od, S_SWITCH_IN_DNS_LOOKUP, 100,
> +                      ds_cstr(&match), ds_cstr(&action));
> +        ds_clear(&action);
> +        ds_put_cstr(&match, " && "REGBIT_DNS_LOOKUP_RESULT);
> +        ds_put_format(&action, "eth.dst <-> eth.src; ip4.src <-> ip4.dst; "
> +                      "udp.dst = udp.src; udp.src = 53; outport = inport; "
> +                      "flags.loopback = 1; output;");
> +        ovn_lflow_add(lflows, od, S_SWITCH_IN_DNS_RESPONSE, 100,
> +                      ds_cstr(&match), ds_cstr(&action));
> +        ds_clear(&action);
> +        ds_put_format(&action, "eth.dst <-> eth.src; ip6.src <-> ip6.dst; "
> +                      "udp.dst = udp.src; udp.src = 53; outport = inport; "
> +                      "flags.loopback = 1; output;");
> +        ovn_lflow_add(lflows, od, S_SWITCH_IN_DNS_RESPONSE, 100,
> +                      ds_cstr(&match), ds_cstr(&action));
> +        ds_destroy(&match);
> +        ds_destroy(&action);
> +    }
> +
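
For reference, the flows added by the block above should come out roughly as
follows (modulo the exact ovn-sbctl lflow-list formatting):

    table=13 (ls_in_dns_lookup  ), priority=100,
        match=(ip && udp.dst == 53),
        action=(reg0[4] = dns_lookup(); next;)
    table=14 (ls_in_dns_response), priority=100,
        match=(ip && udp.dst == 53 && reg0[4]),
        action=(eth.dst <-> eth.src; ip4.src <-> ip4.dst; udp.dst = udp.src;
                udp.src = 53; outport = inport; flags.loopback = 1; output;)

plus a second table-14 flow with the same match and priority that swaps
ip6.src/ip6.dst instead of ip4.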
>      /* Ingress table 11 and 12: DHCP options and response, by default goto next.
> -     * (priority 0). */
> +     * (priority 0).
> +     * Ingress table 13 and 14: DNS lookup and response, by default goto next.
> +     * (priority 0). */
>
>      HMAP_FOR_EACH (od, key_node, datapaths) {
>          if (!od->nbs) {
> @@ -3313,9 +3367,11 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
>
>          ovn_lflow_add(lflows, od, S_SWITCH_IN_DHCP_OPTIONS, 0, "1", "next;");
>          ovn_lflow_add(lflows, od, S_SWITCH_IN_DHCP_RESPONSE, 0, "1", "next;");
> +        ovn_lflow_add(lflows, od, S_SWITCH_IN_DNS_LOOKUP, 0, "1", "next;");
> +        ovn_lflow_add(lflows, od, S_SWITCH_IN_DNS_RESPONSE, 0, "1", "next;");
>      }
>
> -    /* Ingress table 13: Destination lookup, broadcast and multicast handling
> +    /* Ingress table 15: Destination lookup, broadcast and multicast handling
>       * (priority 100). */
>      HMAP_FOR_EACH (op, key_node, ports) {
>          if (!op->nbsp) {
> @@ -5242,6 +5298,87 @@ sync_address_sets(struct northd_context *ctx)
>      }
>      shash_destroy(&sb_address_sets);
>  }
> +
> +static void
> +sync_dns_entries(struct northd_context *ctx, struct hmap *datapaths)
> +{
> +    struct dns_info {
> +        struct hmap_node hmap_node;
> +        const struct sbrec_datapath_binding *sb;
> +        const struct nbrec_dns *dns;
> +    };
> +
> +    struct hmap dns_map = HMAP_INITIALIZER(&dns_map);
> +    struct ovn_datapath *od;
> +    HMAP_FOR_EACH(od, key_node, datapaths) {
> +        if (!od->nbs || !od->nbs->n_dns_lookups) {
> +            continue;
> +        }
> +
> +        for (size_t i = 0; i < od->nbs->n_dns_lookups; i++) {
> +            struct dns_info *dns_info = xzalloc(sizeof *dns_info);
> +            dns_info->sb = od->sb;
> +            dns_info->dns = od->nbs->dns_lookups[i];
> +
> +            size_t hash = uuid_hash(&dns_info->sb->header_.uuid);
> +            hash = hash_string(dns_info->dns->hostname, hash);
> +            hmap_insert(&dns_map, &dns_info->hmap_node, hash);
> +        }
> +    }
> +
> +    const struct sbrec_dns *sbrec_dns, *next;
> +    SBREC_DNS_FOR_EACH_SAFE(sbrec_dns, next, ctx->ovnsb_idl) {
> +        size_t hash = uuid_hash(&sbrec_dns->datapath->header_.uuid);
> +        hash = hash_string(sbrec_dns->hostname, hash);
> +        bool delete_dns_record = true;
> +        struct dns_info *dns_info;
> +        HMAP_FOR_EACH_WITH_HASH(dns_info, hmap_node, hash, &dns_map) {
> +            if (!strcmp(dns_info->dns->hostname, sbrec_dns->hostname)) {
> +                /* Verify that the IP addresses are the same before
> +                 * removing from the hmap. */
> +                if (dns_info->dns->n_ip_addresses !=
> +                        sbrec_dns->n_ip_addresses) {
> +                    continue;
> +                }
> +
> +                delete_dns_record = false;
> +                for (size_t i = 0; i < sbrec_dns->n_ip_addresses; i++) {
> +                    if (strcmp(dns_info->dns->ip_addresses[i],
> +                               sbrec_dns->ip_addresses[i])) {
> +                        delete_dns_record = true;
> +                        break;
> +                    }
> +                }
> +
> +                if (delete_dns_record) {
> +                    continue;
> +                }
> +
> +                hmap_remove(&dns_map, &dns_info->hmap_node);
> +                free(dns_info);
> +                delete_dns_record = false;
> +                break;
> +            }
> +        }
> +
> +        if (delete_dns_record) {
> +            sbrec_dns_delete(sbrec_dns);
> +        }
> +    }
> +
> +    struct dns_info *dns_info;
> +    HMAP_FOR_EACH_POP(dns_info, hmap_node, &dns_map) {
> +        struct sbrec_dns *sbrec_dns = sbrec_dns_insert(ctx->ovnsb_txn);
> +        sbrec_dns_set_datapath(sbrec_dns, dns_info->sb);
> +        sbrec_dns_set_hostname(sbrec_dns, dns_info->dns->hostname);
> +        sbrec_dns_set_ip_addresses(
> +            sbrec_dns, (const char **) dns_info->dns->ip_addresses,
> +            dns_info->dns->n_ip_addresses);
> +        free(dns_info);
> +    }
> +    hmap_destroy(&dns_map);
> +}
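
A quick way to eyeball the result of this sync (the test below does the same)
is simply:

    ovn-sbctl list DNS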
> +
>
>  static void
>  ovnnb_db_run(struct northd_context *ctx, struct ovsdb_idl_loop *sb_loop)
> @@ -5256,6 +5393,7 @@ ovnnb_db_run(struct northd_context *ctx, struct ovsdb_idl_loop *sb_loop)
>      build_lflows(ctx, &datapaths, &ports);
>
>      sync_address_sets(ctx);
> +    sync_dns_entries(ctx, &datapaths);
>
>      struct ovn_datapath *dp, *next_dp;
>      HMAP_FOR_EACH_SAFE (dp, next_dp, key_node, &datapaths) {
> @@ -5653,6 +5791,10 @@ main(int argc, char *argv[])
>      add_column_noalert(ovnsb_idl_loop.idl, &sbrec_address_set_col_name);
>      add_column_noalert(ovnsb_idl_loop.idl, &sbrec_address_set_col_addresses);
>
> +    ovsdb_idl_add_table(ovnsb_idl_loop.idl, &sbrec_table_dns);
> +    add_column_noalert(ovnsb_idl_loop.idl, &sbrec_dns_col_datapath);
> +    add_column_noalert(ovnsb_idl_loop.idl, &sbrec_dns_col_hostname);
> +
>      ovsdb_idl_add_table(ovnsb_idl_loop.idl, &sbrec_table_chassis);
>      ovsdb_idl_add_column(ovnsb_idl_loop.idl, &sbrec_chassis_col_nb_cfg);
>
> diff --git a/ovn/ovn-nb.ovsschema b/ovn/ovn-nb.ovsschema
> index dd0ac3d..0e97e80 100644
> --- a/ovn/ovn-nb.ovsschema
> +++ b/ovn/ovn-nb.ovsschema
> @@ -1,7 +1,7 @@
>  {
>      "name": "OVN_Northbound",
> -    "version": "5.5.0",
> -    "cksum": "2099428463 14236",
> +    "version": "5.5.1",
> +    "cksum": "436648422 14817",
>      "tables": {
>          "NB_Global": {
>              "columns": {
> @@ -45,6 +45,11 @@
>                                                    "refType": "strong"},
>                                             "min": 0,
>                                             "max": "unlimited"}},
> +                "dns_lookups": {"type": {"key": {"type": "uuid",
> +                                         "refTable": "DNS",
> +                                         "refType": "weak"},
> +                                  "min": 0,
> +                                  "max": "unlimited"}},
>                  "other_config": {
>                      "type": {"key": "string", "value": "string",
>                               "min": 0, "max": "unlimited"}},
> @@ -265,6 +270,13 @@
>                                      "max": "unlimited"},
>                                      "ephemeral": true}},
>              "indexes": [["target"]]},
> +        "DNS": {
> +            "columns": {
> +                "hostname": {"type": "string"},
> +                "ip_addresses": {"type": {"key": "string",
> +                                          "min": 0,
> +                                          "max": "unlimited"}}},
> +            "isRoot": true},
>

external_ids would be useful to have too.

The above would mean that for each logical port, we need to create a DNS
record in the DNS table. Having a schema like this has advantages: in the
future, if we have to add new options for DNS, it becomes easier to extend.
Do you foresee anything?

The disadvantage is that for each logical port (or VIP), you now have to
create a new record in the DNS table and then link it to all the logical
switches in the logical topology.

An alternate schema would be to have a single DNS record for the logical
topology, i.e., have a "record" column which is a map of hostname to
addresses. With this, whenever a new logical port (or a VIP in a
load balancer) is created, you just add the hostname and addresses to the
existing DNS "record" instead of creating a new DNS entry in the table and
linking it to all the logical switches in the topology.

Do you have any opinion? Is one easier than the other for OpenStack?
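
To illustrate the alternative: with a (purely hypothetical, not in this
patch) "record" map column, the CMS-facing workflow would shrink to
something like this, hand-waving the exact ovn-nbctl quoting:

    # One shared DNS row for the topology, linked once per logical switch.
    dns=$(ovn-nbctl create DNS \
          record='"vm1.ovn.org"="10.0.0.4 aef0::4","vm2.ovn.org"="10.0.0.6 20.0.0.4"')
    ovn-nbctl add Logical_switch ls1 dns_lookups $dns

    # Adding a port later is a single map update, no new row or re-linking:
    ovn-nbctl set DNS $dns record:vm3.ovn.org="10.0.0.7"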

>          "SSL": {
>              "columns": {
>                  "private_key": {"type": "string"},
> diff --git a/ovn/ovn-nb.xml b/ovn/ovn-nb.xml
> index 46a25f6..e17fdf6 100644
> --- a/ovn/ovn-nb.xml
> +++ b/ovn/ovn-nb.xml
> @@ -134,6 +134,11 @@
>        QOS marking rules that apply to packets within the logical switch.
>      </column>
>
> +    <column name="dns_lookups">
> +      The DNS entries that the native DNS resolver uses to answer DNS
> +      queries from within the logical switch.
> +    </column>
> +
>      <group title="other_config">
>        <p>
>          Additional configuration options for the logical switch.
> @@ -1888,6 +1893,20 @@
>        <column name="other_config"/>
>      </group>
>    </table>
> +  <table name="DNS" title="Native DNS resolution">
> +    <p>
> +      A DNS lookup entry within a logical switch.
> +    </p>
> +
> +    <column name="hostname">
> +      The host name to be resolved.
> +    </column>
> +
> +    <column name="ip_addresses">
> +      The IP addresses to include in the DNS answer fields when the
> +      <ref column="hostname"/> matches the name in the DNS query.
> +    </column>
> +  </table>
>    <table name="SSL">
>      SSL configuration for ovn-nb database access.
>
> @@ -1926,5 +1945,4 @@
>        <column name="external_ids"/>
>      </group>
>    </table>
> -
>  </database>
> diff --git a/ovn/utilities/ovn-nbctl.c b/ovn/utilities/ovn-nbctl.c
> index 900b088..abdc616 100644
> --- a/ovn/utilities/ovn-nbctl.c
> +++ b/ovn/utilities/ovn-nbctl.c
> @@ -3048,6 +3048,9 @@ static const struct ctl_table_class tables[NBREC_N_TABLES] = {
>
>      [NBREC_TABLE_SSL].row_ids[0]
>      = {&nbrec_table_nb_global, NULL, &nbrec_nb_global_col_ssl},
> +
> +    [NBREC_TABLE_DNS].row_ids[0]
> +    = {&nbrec_table_dns, NULL, &nbrec_dns_col_hostname},
>  };
>
>  static void
> diff --git a/tests/ovn.at b/tests/ovn.at
> index 4b4beb0..3df2e51 100644
> --- a/tests/ovn.at
> +++ b/tests/ovn.at
> @@ -6334,6 +6334,359 @@ OVS_APP_EXIT_AND_WAIT([ovsdb-server])
>
>  AT_CLEANUP
>
> +AT_SETUP([ovn -- dns lookup : 1 HV, 2 LS, 2 LSPs/LS])
> +AT_SKIP_IF([test $HAVE_PYTHON = no])
> +ovn_start
> +
> +ovn-nbctl ls-add ls1
> +
> +ovn-nbctl lsp-add ls1 ls1-lp1 \
> +-- lsp-set-addresses ls1-lp1 "f0:00:00:00:00:01 10.0.0.4 aef0::4"
> +
> +ovn-nbctl lsp-set-port-security ls1-lp1 "f0:00:00:00:00:01 10.0.0.4 aef0::4"
> +
> +ovn-nbctl lsp-add ls1 ls1-lp2 \
> +-- lsp-set-addresses ls1-lp2 "f0:00:00:00:00:02 10.0.0.6 20.0.0.4"
> +
> +ovn-nbctl lsp-set-port-security ls1-lp2 "f0:00:00:00:00:02 10.0.0.6 20.0.0.4"
> +
> +LP1_DNS=`ovn-nbctl create DNS hostname=vm1.ovn.org \
> +ip_addresses="10.0.0.4 aef0\:\:4"`
> +
> +LP2_DNS=`ovn-nbctl create DNS hostname=vm2.ovn.org \
> +ip_addresses="10.0.0.6 20.0.0.4"`
> +
> +ovn-nbctl add Logical_switch ls1 dns_lookups "$LP1_DNS $LP2_DNS"
> +
> +net_add n1
> +sim_add hv1
> +
> +as hv1
> +ovs-vsctl add-br br-phys
> +ovn_attach n1 br-phys 192.168.0.1
> +ovs-vsctl -- add-port br-int hv1-vif1 -- \
> +    set interface hv1-vif1 external-ids:iface-id=ls1-lp1 \
> +    options:tx_pcap=hv1/vif1-tx.pcap \
> +    options:rxq_pcap=hv1/vif1-rx.pcap \
> +    ofport-request=1
> +
> +ovs-vsctl -- add-port br-int hv1-vif2 -- \
> +    set interface hv1-vif2 external-ids:iface-id=ls1-lp2 \
> +    options:tx_pcap=hv1/vif2-tx.pcap \
> +    options:rxq_pcap=hv1/vif2-rx.pcap \
> +    ofport-request=2
> +
> +ovn_populate_arp
> +sleep 2
> +as hv1 ovs-vsctl show
> +
> +echo "*************************"
> +ovn-sbctl list DNS
> +echo "*************************"
> +
> +ip_to_hex() {
> +    printf "%02x%02x%02x%02x" "$@"
> +}
> +
> +reset_pcap_file() {
> +    local iface=$1
> +    local pcap_file=$2
> +    ovs-vsctl -- set Interface $iface options:tx_pcap=dummy-tx.pcap \
> +options:rxq_pcap=dummy-rx.pcap
> +    rm -f ${pcap_file}*.pcap
> +    ovs-vsctl -- set Interface $iface options:tx_pcap=${pcap_file}-tx.pcap
> \
> +options:rxq_pcap=${pcap_file}-rx.pcap
> +}
> +
> +# set_lsp_dns_params lsp_name
> +# Sets the dns_req_data and dns_resp_data
> +set_lsp_dns_params() {
> +    local lsp_name=$1
> +    local ttl=00000e10
> +    an_count=0001
> +    type=0001
> +    case $lsp_name in
> +    ls1-lp1)
> +        # vm1.ovn.org
> +        hostname=03766d31036f766e036f726700
> +        # IPv4 address - 10.0.0.4
> +        expected_dns_answer=${hostname}00010001${ttl}00040a000004
> +        ;;
> +    ls1-lp2)
> +        # vm2.ovn.org
> +        hostname=03766d32036f766e036f726700
> +        # IPv4 address - 10.0.0.6
> +        expected_dns_answer=${hostname}00010001${ttl}00040a000006
> +        # IPv4 address - 20.0.0.4
> +        expected_dns_answer=${expected_dns_answer}${hostname}00010001${ttl}000414000004
> +        an_count=0002
> +        ;;
> +    ls1-lp1_ipv6_only)
> +        # vm1.ovn.org
> +        hostname=03766d31036f766e036f726700
> +        # IPv6 address - aef0::4
> +        type=001c
> +        expected_dns_answer=${hostname}${type}0001${ttl}0010aef00000000000000000000000000004
> +        ;;
> +    ls1-lp1_ipv4_v6)
> +        # vm1.ovn.org
> +        hostname=03766d31036f766e036f726700
> +        type=00ff
> +        an_count=0002
> +        # IPv4 address - 10.0.0.4
> +        # IPv6 address - aef0::4
> +        expected_dns_answer=${hostname}00010001${ttl}00040a000004
> +        expected_dns_answer=${expected_dns_answer}${hostname}001c0001${ttl}0010
> +        expected_dns_answer=${expected_dns_answer}aef00000000000000000000000000004
> +        ;;
> +    ls1-lp1_invalid_type)
> +        # vm1.ovn.org
> +        hostname=03766d31036f766e036f726700
> +        # Query type 0x0002 (NS) is not supported by dns_lookup.
> +        type=0002
> +        ;;
> +    ls1-lp1_incomplete)
> +        # set type to none
> +        type=''
> +    esac
> +    # TTL - 3600
> +    local dns_req_header=010201200001000000000000
> +    local dns_resp_header=010281200001${an_count}00000000
> +    dns_req_data=${dns_req_header}${hostname}${type}0001
> +    dns_resp_data=${dns_resp_header}${hostname}${type}0001${expected_dns_answer}
> +}
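
The hex "hostname" constants above are the standard DNS wire encoding (each
label prefixed by its length, terminated by a zero byte for the root label).
In case it helps review, a throwaway helper (not part of the patch) that
reproduces them:

    # Print NAME in DNS wire format: a length byte per label followed by the
    # label bytes in hex, then a terminating zero byte.
    encode_dns_name() {
        local name=$1 out= label
        local IFS=.
        for label in $name; do
            out=${out}$(printf "%02x" ${#label})$(printf '%s' "$label" | od -An -tx1 | tr -d ' \n')
        done
        echo "${out}00"
    }
    # encode_dns_name vm1.ovn.org  =>  03766d31036f766e036f726700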
> +
> +# This shell function sends a DNS request packet
> +# test_dns INPORT SRC_MAC DST_MAC SRC_IP DST_IP DNS_REPLY DNS_QUERY_DATA [DNS_REPLY_DATA | OUTPORT...]
> +test_dns() {
> +    local inport=$1 src_mac=$2 dst_mac=$3 src_ip=$4 dst_ip=$5 dns_reply=$6
> +    local dns_query_data=$7
> +    shift; shift; shift; shift; shift; shift; shift;
> +    # Packet size => IPv4 header (20) + UDP header (8) +
> +    #                DNS data (header + query)
> +    ip_len=`expr 28 + ${#dns_query_data} / 2`
> +    udp_len=`expr $ip_len - 20`
> +    ip_len=$(printf "%x" $ip_len)
> +    udp_len=$(printf "%x" $udp_len)
> +    local request=${dst_mac}${src_mac}0800450000${ip_len}0000000080110000
> +    request=${request}${src_ip}${dst_ip}9234003500${udp_len}0000
> +    # dns data
> +    request=${request}${dns_query_data}
> +
> +    if test $dns_reply != 0; then
> +        local dns_reply=$1
> +        ip_len=`expr 28 + ${#dns_reply} / 2`
> +        udp_len=`expr $ip_len - 20`
> +        ip_len=$(printf "%x" $ip_len)
> +        udp_len=$(printf "%x" $udp_len)
> +        local reply=${src_mac}${dst_mac}0800450000${ip_len}0000000080110000
> +        reply=${reply}${dst_ip}${src_ip}0035923400${udp_len}0000${dns_reply}
> +        echo $reply >> $inport.expected
> +    else
> +        for outport; do
> +            echo $request >> $outport.expected
> +        done
> +    fi
> +    as hv1 ovs-appctl netdev-dummy/receive hv1-vif$inport $request
> +}
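
To sanity-check the length arithmetic above with the vm1.ovn.org A query used
later: dns_query_data is 12 (DNS header) + 13 (encoded name) + 4 (type/class)
= 29 bytes, so

    ip_len  = 28 + 29 = 57  ->  0x39  (the "00${ip_len}" IPv4 total-length field)
    udp_len = 57 - 20 = 37  ->  0x25  (the "00${udp_len}" UDP length field)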
> +
> +AT_CAPTURE_FILE([ofctl_monitor0.log])
> +as hv1 ovs-ofctl monitor br-int resume --detach --no-chdir \
> +--pidfile=ovs-ofctl0.pid 2> ofctl_monitor0.log
> +
> +set_lsp_dns_params ls1-lp2
> +src_ip=`ip_to_hex 10 0 0 4`
> +dst_ip=`ip_to_hex 10 0 0 1`
> +dns_reply=1
> +test_dns 1 f00000000001 f000000000f0 $src_ip $dst_ip $dns_reply $dns_req_data $dns_resp_data
> +
> +# NXT_RESUMEs should be 1.
> +OVS_WAIT_UNTIL([test 1 = `cat ofctl_monitor*.log | grep -c NXT_RESUME`])
> +
> +$PYTHON "$top_srcdir/utilities/ovs-pcap.in" hv1/vif1-tx.pcap > 1.packets
> +cat 1.expected | cut -c -48 > expout
> +AT_CHECK([cat 1.packets | cut -c -48], [0], [expout])
> +# Skipping the IPv4 checksum.
> +cat 1.expected | cut -c 53- > expout
> +AT_CHECK([cat 1.packets | cut -c 53-], [0], [expout])
> +
> +reset_pcap_file hv1-vif1 hv1/vif1
> +reset_pcap_file hv1-vif2 hv1/vif2
> +rm -f 1.expected
> +rm -f 2.expected
> +
> +set_lsp_dns_params ls1-lp1
> +src_ip=`ip_to_hex 10 0 0 6`
> +dst_ip=`ip_to_hex 10 0 0 1`
> +dns_reply=1
> +test_dns 2 f00000000002 f000000000f0 $src_ip $dst_ip $dns_reply $dns_req_data $dns_resp_data
> +
> +# NXT_RESUMEs should be 2.
> +OVS_WAIT_UNTIL([test 2 = `cat ofctl_monitor*.log | grep -c NXT_RESUME`])
> +
> +$PYTHON "$top_srcdir/utilities/ovs-pcap.in" hv1/vif2-tx.pcap > 2.packets
> +cat 2.expected | cut -c -48 > expout
> +AT_CHECK([cat 2.packets | cut -c -48], [0], [expout])
> +# Skipping the IPv4 checksum.
> +cat 2.expected | cut -c 53- > expout
> +AT_CHECK([cat 2.packets | cut -c 53-], [0], [expout])
> +
> +reset_pcap_file hv1-vif1 hv1/vif1
> +reset_pcap_file hv1-vif2 hv1/vif2
> +rm -f 1.expected
> +rm -f 2.expected
> +
> +# Remove the DNS record for ls1-lp2 (vm2.ovn.org) from ls1.
> +ovn-nbctl --wait=hv clear Logical_switch ls1 dns_lookups
> +ovn-nbctl --wait=hv add Logical_switch ls1 dns_lookups $LP1_DNS
> +
> +ovn-nbctl list logical_switch
> +echo "****************"
> +ovn-nbctl list DNS
> +echo "****************"
> +ovn-sbctl list DNS
> +echo "**********"
> +
> +set_lsp_dns_params ls1-lp2
> +src_ip=`ip_to_hex 10 0 0 4`
> +dst_ip=`ip_to_hex 10 0 0 1`
> +dns_reply=0
> +test_dns 1 f00000000001 f00000000002 $src_ip $dst_ip $dns_reply $dns_req_data
> +
> +# NXT_RESUMEs should be 3.
> +OVS_WAIT_UNTIL([test 3 = `cat ofctl_monitor*.log | grep -c NXT_RESUME`])
> +
> +$PYTHON "$top_srcdir/utilities/ovs-pcap.in" hv1/vif1-tx.pcap > 1.packets
> +AT_CHECK([cat 1.packets], [0], [])
> +
> +reset_pcap_file hv1-vif1 hv1/vif1
> +reset_pcap_file hv1-vif2 hv1/vif2
> +rm -f 1.expected
> +rm -f 2.expected
> +
> +# Clear the DNS records for ls1.
> +# Since ls1 no longer has any DNS entries configured,
> +# ovn-northd should not add the DNS flows.
> +ovn-nbctl clear Logical_switch ls1 dns_lookups
> +set_lsp_dns_params ls1-lp1
> +src_ip=`ip_to_hex 10 0 0 6`
> +dst_ip=`ip_to_hex 10 0 0 1`
> +dns_reply=0
> +test_dns 2 f00000000002 f000000000f0 $src_ip $dst_ip $dns_reply $dns_req_data
> +
> +# NXT_RESUMEs should still be 3.
> +OVS_WAIT_UNTIL([test 3 = `cat ofctl_monitor*.log | grep -c NXT_RESUME`])
> +
> +$PYTHON "$top_srcdir/utilities/ovs-pcap.in" hv1/vif2-tx.pcap > 2.packets
> +AT_CHECK([cat 2.packets], [0], [])
> +
> +reset_pcap_file hv1-vif1 hv1/vif1
> +reset_pcap_file hv1-vif2 hv1/vif2
> +rm -f 1.expected
> +rm -f 2.expected
> +
> +# Test IPv6 (AAAA records) using an IPv4 packet.
> +# Add back the DNS record for ls1-lp1.
> +ovn-nbctl add Logical_switch ls1 dns_lookups $LP1_DNS
> +
> +set_lsp_dns_params ls1-lp1_ipv6_only
> +src_ip=`ip_to_hex 10 0 0 6`
> +dst_ip=`ip_to_hex 10 0 0 1`
> +dns_reply=1
> +test_dns 2 f00000000002 f000000000f0 $src_ip $dst_ip $dns_reply $dns_req_data $dns_resp_data
> +
> +# NXT_RESUMEs should be 4.
> +OVS_WAIT_UNTIL([test 4 = `cat ofctl_monitor*.log | grep -c NXT_RESUME`])
> +
> +$PYTHON "$top_srcdir/utilities/ovs-pcap.in" hv1/vif2-tx.pcap > 2.packets
> +cat 2.expected | cut -c -48 > expout
> +AT_CHECK([cat 2.packets | cut -c -48], [0], [expout])
> +# Skipping the IPv4 checksum.
> +cat 2.expected | cut -c 53- > expout
> +AT_CHECK([cat 2.packets | cut -c 53-], [0], [expout])
> +
> +reset_pcap_file hv1-vif1 hv1/vif1
> +reset_pcap_file hv1-vif2 hv1/vif2
> +rm -f 1.expected
> +rm -f 2.expected
> +
> +# Test both IPv4 (A) and IPv6 (AAAA) records using an IPv4 packet.
> +set_lsp_dns_params ls1-lp1_ipv4_v6
> +src_ip=`ip_to_hex 10 0 0 6`
> +dst_ip=`ip_to_hex 10 0 0 1`
> +dns_reply=1
> +test_dns 2 f00000000002 f000000000f0 $src_ip $dst_ip $dns_reply $dns_req_data $dns_resp_data
> +
> +# NXT_RESUMEs should be 5.
> +OVS_WAIT_UNTIL([test 5 = `cat ofctl_monitor*.log | grep -c NXT_RESUME`])
> +
> +$PYTHON "$top_srcdir/utilities/ovs-pcap.in" hv1/vif2-tx.pcap > 2.packets
> +cat 2.expected | cut -c -48 > expout
> +AT_CHECK([cat 2.packets | cut -c -48], [0], [expout])
> +# Skipping the IPv4 checksum.
> +cat 2.expected | cut -c 53- > expout
> +AT_CHECK([cat 2.packets | cut -c 53-], [0], [expout])
> +
> +reset_pcap_file hv1-vif1 hv1/vif1
> +reset_pcap_file hv1-vif2 hv1/vif2
> +rm -f 1.expected
> +rm -f 2.expected
> +
> +# Invalid type.
> +set_lsp_dns_params ls1-lp1_invalid_type
> +src_ip=`ip_to_hex 10 0 0 6`
> +dst_ip=`ip_to_hex 10 0 0 1`
> +dns_reply=0
> +test_dns 2 f00000000002 f000000000f0 $src_ip $dst_ip $dns_reply $dns_req_data
> +
> +# NXT_RESUMEs should be 6.
> +OVS_WAIT_UNTIL([test 6 = `cat ofctl_monitor*.log | grep -c NXT_RESUME`])
> +
> +$PYTHON "$top_srcdir/utilities/ovs-pcap.in" hv1/vif2-tx.pcap > 2.packets
> +AT_CHECK([cat 2.packets], [0], [])
> +
> +reset_pcap_file hv1-vif1 hv1/vif1
> +reset_pcap_file hv1-vif2 hv1/vif2
> +rm -f 1.expected
> +rm -f 2.expected
> +
> +# Incomplete DNS packet.
> +set_lsp_dns_params ls1-lp1_incomplete
> +src_ip=`ip_to_hex 10 0 0 6`
> +dst_ip=`ip_to_hex 10 0 0 1`
> +dns_reply=0
> +test_dns 2 f00000000002 f000000000f0 $src_ip $dst_ip $dns_reply $dns_req_data
> +
> +# NXT_RESUMEs should be 7.
> +OVS_WAIT_UNTIL([test 7 = `cat ofctl_monitor*.log | grep -c NXT_RESUME`])
> +
> +$PYTHON "$top_srcdir/utilities/ovs-pcap.in" hv1/vif2-tx.pcap > 2.packets
> +AT_CHECK([cat 2.packets], [0], [])
> +
> +reset_pcap_file hv1-vif1 hv1/vif1
> +reset_pcap_file hv1-vif2 hv1/vif2
> +rm -f 1.expected
> +rm -f 2.expected
> +
> +as hv1
> +OVS_APP_EXIT_AND_WAIT([ovn-controller])
> +OVS_APP_EXIT_AND_WAIT([ovs-vswitchd])
> +OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> +
> +as ovn-sb
> +OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> +
> +as ovn-nb
> +OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> +
> +as northd
> +OVS_APP_EXIT_AND_WAIT([ovn-northd])
> +
> +as main
> +OVS_APP_EXIT_AND_WAIT([ovs-vswitchd])
> +OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> +AT_CLEANUP
> +
>  AT_SETUP([ovn -- 1 LR with distributed router gateway port])
>  AT_SKIP_IF([test $HAVE_PYTHON = no])
>  ovn_start
> --
> 2.9.3
>
> _______________________________________________
> dev mailing list
> dev at openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>

