[ovs-dev] [PATCH v3] ovsdb-idl: Avoid inconsistent IDL state with OVSDB_MONITOR_V3.

Ilya Maximets i.maximets at ovn.org
Wed Apr 22 15:13:13 UTC 2020


On 4/22/20 4:38 PM, Dumitru Ceara wrote:
> On 4/17/20 1:05 PM, Ilya Maximets wrote:
>> On 4/9/20 2:01 AM, Dumitru Ceara wrote:
>>> Assume an ovsdb client is connected to a database using OVSDB_MONITOR_V3
>>> (i.e., the "monitor_cond_since" method) with the initial monitor condition
>>> MC1.
>>>
>>> Assume the following two transactions are executed on the
>>> ovsdb-server:
>>> TXN1: "insert record R1 in table T1"
>>> TXN2: "insert record R2 in table T2"
>>>
>>> If the client's monitor condition MC1 for table T2 matches R2 then the
>>> client will receive the following update3 message:
>>> method="update3", "insert record R2 in table T2", last-txn-id=TXN2
>>>
>>> At this point, if the presence of the new record R2 in the IDL triggers
>>> the client to update its monitor condition to MC2 and add a clause for
>>> table T1 which matches R1, a monitor_cond_change message is sent to the
>>> server:
>>> method="monitor_cond_change", "clauses from MC2"
>>>
>>> In normal operation the ovsdb-server will reply with a new update3
>>> message of the form:
>>> method="update3", "insert record R1 in table T1", last-txn-id=TXN2
>>
>> Hmm. Documentation says that updates caused by monitor_cond_change request
>> are sent using update2 method.
>>
> Thanks for the review!
> 
> Right, this needs to be fixed too. I'll add it as a first patch to the
> v4 series.
> 
>>>
>>> However, if the connection drops in the meantime, this last update might
>>> get lost.
>>>
>>> It might happen that, during the reconnect, a new transaction
>>> modifies the original record R1:
>>> TXN3: "modify record R1 in table T1"
>>>
>>> When the client reconnects, it will try to perform a fast resync by
>>> sending:
>>> method="monitor_cond_since", "clauses from MC2", last-txn-id=TXN2
>>>
>>> Because TXN2 is still in the ovsdb-server transaction history, the
>>> server replies with the changes from the most recent transactions only,
>>> i.e., TXN3:
>>> result="true", last-txn-id=TXN3, "modify record R1 in table T1"
>>>
>>> This causes the IDL on the client to end up in an inconsistent
>>> state because it has never seen the update that created R1.
>>>
>>> Such a scenario is described in:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1808580#c22
>>>
>>> To avoid hitting this issue we now:
>>> - clear db->last_id whenever an IDL retry happens while there is a
>>>   monitor_cond_change request in flight.
>>> - clear db->last_id whenever a monitor condition is changed while the
>>>   IDL is not in state IDL_S_MONITORING.
>>>
>>> This ensures that updates of type "insert" that happened before the last
>>> transaction known by the IDL but didn't match old monitor conditions are
>>> sent upon reconnect if the monitor condition has changed to include them
>>> in the meantime.
>>>
>>> This commit also adds a generic recovery mechanism which triggers an IDL
>>> retry with fast resync disabled in case the IDL has detected that it
>>> ended up in an inconsistent state due to other bugs in the
>>> ovsdb-server/ovsdb-idl implementation.
>>
>> Might make sense to split this patch in two, i.e., bug fix + recovery as a
>> new feature.  This should simplify backports.
>> Or do you think we need to backport the recovery mechanism?
>>
> 
> 
> I think it's safe to backport the recovery mechanism too, but just in
> case we decide it's not needed or too risky, I'll add it as a separate
> patch in the series.
> 
>>>
>>> CC: Han Zhou <hzhou at ovn.org>
>>> CC: Andy Zhou <azhou at ovn.org>
>>> Fixes: 403a6a0cb003 ("ovsdb-idl: Fast resync from server when connection reset.")
>>> Fixes: db2b5757328c ("lib: add monitor2 support in ovsdb-idl.")
>>> Signed-off-by: Dumitru Ceara <dceara at redhat.com>
>>>
>>> ---
>>> V3:
>>> - Change commit title.
>>> - Update commit message.
>>> - Fix monitor_cond_since ovsdb-idl implementation.
>>> V2:
>>> - Address Mark's comments:
>>>   - change the error log message to reflect the action taken.
>>>   - use ovsdb_error() instead of ovsdb_syntax_error().
>>> ---
>>>  lib/ovsdb-idl.c | 68 +++++++++++++++++++++++++++++++++++++++++++++++++++++----
>>>  1 file changed, 64 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/lib/ovsdb-idl.c b/lib/ovsdb-idl.c
>>> index 1535ad7..cf5db41 100644
>>> --- a/lib/ovsdb-idl.c
>>> +++ b/lib/ovsdb-idl.c
>>> @@ -324,7 +324,8 @@ static bool ovsdb_idl_process_update(struct ovsdb_idl_table *,
>>>  static bool ovsdb_idl_process_update2(struct ovsdb_idl_table *,
>>>                                        const struct uuid *,
>>>                                        const char *operation,
>>> -                                      const struct json *row);
>>> +                                      const struct json *row,
>>> +                                      bool *inconsistent);
>>>  static void ovsdb_idl_insert_row(struct ovsdb_idl_row *, const struct json *);
>>>  static void ovsdb_idl_delete_row(struct ovsdb_idl_row *);
>>>  static bool ovsdb_idl_modify_row(struct ovsdb_idl_row *, const struct json *);
>>> @@ -655,6 +656,19 @@ ovsdb_idl_state_to_string(enum ovsdb_idl_state state)
>>>  static void
>>>  ovsdb_idl_retry_at(struct ovsdb_idl *idl, const char *where)
>>>  {
>>> +    /* If there's an outstanding request of type monitor_cond_change and
>>> +     * we're in monitor_cond_since mode then we can't trust that all relevant
>>> +     * updates from transaction idl->data.last_id have been received as we
>>> +     * might have relaxed the monitor condition with our last request and
>>> +     * might be missing previously not monitored records.
>>> +     *
>>> +     * Clear last_id to make sure that the next time monitor_cond_since is
>>> +     * sent (i.e., after reconnect) we get the complete view of the database.
>>> +     */
>>> +    if (idl->request_id &&
>>> +            idl->data.monitoring == OVSDB_IDL_MONITORING_COND_SINCE) {
>>> +        memset(&idl->data.last_id, 0, sizeof idl->data.last_id);
>>
>> uuid_zero(&idl->data.last_id) ?
>>
> 
> Ack.
> 
>>> +    }
>>>      ovsdb_idl_force_reconnect(idl);
>>>      ovsdb_idl_transition_at(idl, IDL_S_RETRY, where);
>>>  }
>>> @@ -1518,6 +1532,24 @@ ovsdb_idl_db_set_condition(struct ovsdb_idl_db *db,
>>>          ovsdb_idl_condition_clone(&table->condition, condition);
>>>          db->cond_changed = table->cond_changed = true;
>>>          poll_immediate_wake();
>>> +
>>> +        /* If the change happens while the IDL is not in state
>>> +         * IDL_S_MONITORING there's no guarantee that the updated
>>> +         * condition doesn't match records that were inserted before the
>>> +         * transaction id received with the most recent update3
>>> +         * (i.e., db->last_id). If the new condition would match such a
>>> +         * record, when the reconnect is successful and "monitor_cond_since"
>>> +         * is sent, the server will not include the updates that
>>> +         * originally created the record.
>>> +         *
>>> +         * Clear last_id to make sure that the next time monitor_cond_since
>>> +         * is sent (i.e., after reconnect) we get the complete view of the
>>> +         * database.
>>> +         */
>>> +        if (ovsdb_idl_has_ever_connected(db->idl) &&
>>> +                db->idl->state != IDL_S_MONITORING) {
>>> +            memset(&db->last_id, 0, sizeof db->last_id);
>>> +        }
>>
>>
>> IIUC, the issue that this part is trying to solve is that if we don't have an
>> in-flight cond_change at the time of re-connection but our idl has cond_changed == true,
>> then after re-connect we will request cond_since with the new conditions but the old
>> last_id, because it will not be cleared during ovsdb_idl_retry().
>>
>> But for this case, I'm not sure that your fix will help, because there could be
>> plenty of state changes between ovsdb_idl_db_set_condition() and the actual
>> ovsdb_idl_db_compose_cond_change()+send().  I.e., the db could still be connected
>> while setting the new conditions, but already re-connected by the time we send them.
>>
>> As we discussed on IRC, it might be better to modify the condition inside
>> ovsdb_idl_retry() to cover this case.
>>
> 
> Right, this makes sense. However, I think there will still be a case
> that we don't cover.
> 
> If the underlying jsonrpc session reconnected, we detect that in
> ovsdb_idl_run() and call ovsdb_idl_restart_fsm(), but if a monitor
> condition has already been changed locally without the request being
> sent out (i.e., idl->data.cond_changed == true), we won't clear last_id
> and will end up issuing a monitor_cond_since(last-txn-id=last_id). This
> might still trigger the bug.

Yes, this is an issue.  Thanks for spotting!  If the reconnection was triggered by
pure network issues, we could get into this state without calling ovsdb_idl_retry()
and would end up with the same bug.
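
I.e., roughly this sequence:

  1. ovsdb_idl_db_set_condition() is called while the IDL is still in
     IDL_S_MONITORING, so it only sets cond_changed = true.
  2. The jsonrpc session drops for network reasons before the
     monitor_cond_change request is composed and sent, so
     ovsdb_idl_retry() is never called and last_id is kept.
  3. ovsdb_idl_run() notices the reconnected session, calls
     ovsdb_idl_restart_fsm(), and we send monitor_cond_since with the
     new clauses but the stale last_id.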

> 
> Is it better to clear last_id before ovsdb_idl_restart_fsm() if
> idl->data.cond_changed == true? Or am I missing something and
> ovsdb_idl_retry() gets called in this case too?
> 

It seems like we could make changes to ovsdb_idl_restart_fsm() only, because
it will be called anyway from ovsdb_idl_run() regardless of whether our
reconnection was voluntary or not.
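
Something along these lines, maybe (completely untested sketch, just to
illustrate the idea; it uses the cond_changed/monitoring/last_id fields
mentioned above and keeps the rest of the function body as it is today):

    static void
    ovsdb_idl_restart_fsm(struct ovsdb_idl *idl)
    {
        /* A locally changed but not yet acknowledged condition means the
         * server may never have sent the rows that the new condition
         * matches, so the remembered last_id can't be trusted for fast
         * resync. */
        if (idl->data.cond_changed
            && idl->data.monitoring == OVSDB_IDL_MONITORING_COND_SINCE) {
            uuid_zero(&idl->data.last_id);
        }

        /* ... rest of the existing function body unchanged ... */
    }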

> Thanks,
> Dumitru
> 
>>>          return seqno + 1;
>>>      }
>>>  
>>> @@ -2318,10 +2350,26 @@ ovsdb_idl_db_parse_update__(struct ovsdb_idl_db *db,
>>>                      row = shash_find_data(json_object(row_update), operation);
>>>  
>>>                      if (row)  {
>>> +                        bool inconsistent = false;
>>> +
>>>                          if (ovsdb_idl_process_update2(table, &uuid, operation,
>>> -                                                      row)) {
>>> +                                                      row, &inconsistent)) {
>>>                              db->change_seqno++;
>>>                          }
>>> +
>>> +                        /* If the db is in an inconsistent state, clear the
>>> +                         * db->last_id and retry to get the complete view of
>>> +                         * the database.
>>> +                         */
>>> +                        if (inconsistent) {
>>> +                            memset(&db->last_id, 0, sizeof db->last_id);
>>> +                            ovsdb_idl_retry(db->idl);
>>> +                            return ovsdb_error(NULL,
>>> +                                               "<row_update2> received for "
>>> +                                               "inconsistent IDL: "
>>> +                                               "reconnecting IDL with "
>>> +                                               "fast resync disabled");
>>> +                        }
>>>                          break;
>>>                      }
>>>                  }
>>> @@ -2445,16 +2493,26 @@ ovsdb_idl_process_update(struct ovsdb_idl_table *table,
>>>  }
>>>  
>>>  /* Returns true if a column with mode OVSDB_IDL_MODE_RW changed, false
>>> - * otherwise. */
>>> + * otherwise.
>>> + *
>>> + * NOTE: When processing the "modify" updates, the IDL can determine that
>>> + * previous updates were missed (e.g., due to bugs) and that rows that don't
>>> + * exist locally should be updated. This indicates that the
>>> + * IDL is in an inconsistent state and, unlike in ovsdb_idl_process_update(),
>>> + * the missing rows cannot be inserted. If this is the case, 'inconsistent'
>>> + * is set to true to indicate the catastrophic failure.
>>> + */
>>>  static bool
>>>  ovsdb_idl_process_update2(struct ovsdb_idl_table *table,
>>>                            const struct uuid *uuid,
>>>                            const char *operation,
>>> -                          const struct json *json_row)
>>> +                          const struct json *json_row,
>>> +                          bool *inconsistent)
>>>  {
>>>      struct ovsdb_idl_row *row;
>>>  
>>>      row = ovsdb_idl_get_row(table, uuid);
>>> +    *inconsistent = false;
>>>      if (!strcmp(operation, "delete")) {
>>>          /* Delete row. */
>>>          if (row && !ovsdb_idl_row_is_orphan(row)) {
>>> @@ -2486,11 +2544,13 @@ ovsdb_idl_process_update2(struct ovsdb_idl_table *table,
>>>                  VLOG_WARN_RL(&semantic_rl, "cannot modify missing but "
>>>                               "referenced row "UUID_FMT" in table %s",
>>>                               UUID_ARGS(uuid), table->class_->name);
>>> +                *inconsistent = true;
>>>                  return false;
>>>              }
>>>          } else {
>>>              VLOG_WARN_RL(&semantic_rl, "cannot modify missing row "UUID_FMT" "
>>>                           "in table %s", UUID_ARGS(uuid), table->class_->name);
>>> +            *inconsistent = true;
>>>              return false;
>>>          }
>>>      } else {
>>>
>>
> 


