[ovs-dev] [PATCH v4 3/3] ovsdb-idl: Break into two layers.
Dumitru Ceara
dceara at redhat.com
Wed Jan 13 16:32:52 UTC 2021
On 1/13/21 2:56 AM, Ilya Maximets wrote:
> On 1/8/21 2:59 PM, Dumitru Ceara wrote:
>> On 12/19/20 3:44 AM, Ben Pfaff wrote:
>>> This change breaks the IDL into two layers: the IDL proper, whose
>>> interface to its client is unchanged, and a low-level library called
>>> the OVSDB "client synchronization" (CS) library. There are two
>>> reasons for this change. First, the IDL is big and complicated and
>>> I think that this change factors out some of that complication into
>>> a simpler lower layer. Second, the OVN northd implementation based
>>> on DDlog can benefit from the client synchronization library even
>>> though it would actually be made increasingly complicated by the IDL.
>>>
>>> Signed-off-by: Ben Pfaff <blp at ovn.org>
>>> ---
>>
>> Hi Ben,
>>
>> Overall this looks OK to me.
>>
>> OVS & OVN unit tests pass with this series applied.
>>
>> There are, however, a couple of memory leaks, and I have another small
>> comment inline.
>>
>>> lib/ovsdb-cs.c | 1955 ++++++++++++++++++++++++++++++++++
>>> lib/ovsdb-cs.h | 141 ++-
>>> lib/ovsdb-idl-provider.h | 8 +-
>>> lib/ovsdb-idl.c | 2161 +++++++++-----------------------------
>>> 4 files changed, 2563 insertions(+), 1702 deletions(-)
>
> <snip>
>
>>> +
>>> +static void
>>> +ovsdb_cs_db_destroy_tables(struct ovsdb_cs_db *db)
>>> +{
>>> + struct ovsdb_cs_db_table *table, *next;
>>> + HMAP_FOR_EACH_SAFE (table, next, hmap_node, &db->tables) {
>>> + json_destroy(table->ack_cond);
>>> + json_destroy(table->req_cond);
>>> + json_destroy(table->new_cond);
>>
>> We leak both 'table' and 'table->name' because we don't free them here.
>
> Caught these leaks too while running idl tests with AddressSanitizer.
>
>>
>>> + hmap_remove(&db->tables, &table->hmap_node);
>>> + }
>>> + hmap_destroy(&db->tables);
>>> +}
>>> +
>>> +static unsigned int
>>> +ovsdb_cs_db_set_condition(struct ovsdb_cs_db *db, const char *table,
>>> + const struct json *condition)
>>> +{
>>> + /* Compare the new condition to the last known condition which can be
>>> + * either "new" (not sent yet), "requested" or "acked", in this order. */
>>> + struct ovsdb_cs_db_table *t = ovsdb_cs_db_get_table(db, table);
>>> + const struct json *table_cond = (t->new_cond ? t->new_cond
>>> + : t->req_cond ? t->req_cond
>>> + : t->ack_cond);
>>> + if (!condition_equal(condition, table_cond)) {
>>> + json_destroy(t->new_cond);
>>> + t->new_cond = condition_clone(condition);
>>> + db->cond_changed = true;
>>> + poll_immediate_wake();
>>> + }
>>> +
>>> + /* Conditions will be up to date when we receive replies for already
>>> + * requested and new conditions, if any. */
>>> + return db->cond_seqno + (t->new_cond ? 1 : 0) + (t->req_cond ? 1 : 0);
>>> +}
>>> +
>>> +/* Sets the replication condition for 'tc' in 'cs' to 'condition' and arranges
>>> + * to send the new condition to the database server.
>>> + *
>>> + * Returns the next conditional update sequence number.  When this value
>>> + * matches the return value of ovsdb_cs_get_condition_seqno(), 'cs'
>>> + * contains rows that match the 'condition'. */
>>> +unsigned int
>>> +ovsdb_cs_set_condition(struct ovsdb_cs *cs, const char *table,
>>> + const struct json *condition)
>>> +{
>>> + return ovsdb_cs_db_set_condition(&cs->data, table, condition);
>>> +}
>>> +
>>> +/* Returns a "sequence number" that represents the number of conditional
>>> + * monitoring updates successfully received by the OVSDB server of a CS
>>> + * connection.
>>> + *
>>> + * When ovsdb_cs_set_condition() sets a new condition that is different
>>> + * from the current condition, the next expected "sequence number" is
>>> + * returned.
>>> + *
>>> + * Whenever ovsdb_cs_get_condition_seqno() returns a value that matches the
>>> + * return value of ovsdb_cs_set_condition(), the client is assured that:
>>> + *
>>> + * - The ovsdb_cs_set_condition() change has been acknowledged by the
>>> + * OVSDB server.
>>> + *
>>> + * - 'cs' now contains content that matches the new conditions. */
>>> +unsigned int
>>> +ovsdb_cs_get_condition_seqno(const struct ovsdb_cs *cs)
>>> +{
>>> + return cs->data.cond_seqno;
>>> +}
>>> +
>>> +static struct json *
>>> +ovsdb_cs_create_cond_change_req(const struct json *cond)
>>> +{
>>> + struct json *monitor_cond_change_request = json_object_create();
>>> + json_object_put(monitor_cond_change_request, "where", json_clone(cond));
>>> + return monitor_cond_change_request;
>>> +}
>>> +
>>> +static struct jsonrpc_msg *
>>> +ovsdb_cs_db_compose_cond_change(struct ovsdb_cs_db *db)
>>> +{
>>> + if (!db->cond_changed) {
>>> + return NULL;
>>> + }
>>> +
>>> + struct json *monitor_cond_change_requests = NULL;
>>> + struct ovsdb_cs_db_table *table;
>>> + HMAP_FOR_EACH (table, hmap_node, &db->tables) {
>>> + /* Always use the most recent conditions set by the CS client when
>>> + * requesting monitor_cond_change, i.e., table->new_cond.
>>> + */
>>> + if (table->new_cond) {
>>> + struct json *req =
>>> + ovsdb_cs_create_cond_change_req(table->new_cond);
>>> + if (req) {
>>> + if (!monitor_cond_change_requests) {
>>> + monitor_cond_change_requests = json_object_create();
>>> + }
>>> + json_object_put(monitor_cond_change_requests,
>>> + table->name,
>>> + json_array_create_1(req));
>>> + }
>>> + /* Mark the new condition as requested by moving it to req_cond.
>>> + * If there's already a requested condition, that's a bug.
>>> + */
>>> + ovs_assert(table->req_cond == NULL);
>>> + table->req_cond = table->new_cond;
>>> + table->new_cond = NULL;
>>> + }
>>> + }
>>> +
>>> + if (!monitor_cond_change_requests) {
>>> + return NULL;
>>> + }
>>> +
>>> + db->cond_changed = false;
>>> + struct json *params = json_array_create_3(json_clone(db->monitor_id),
>>> + json_clone(db->monitor_id),
>>> + monitor_cond_change_requests);
>>> + return jsonrpc_create_request("monitor_cond_change", params, NULL);
>>> +}
>>> +
>>> +/* Marks all requested table conditions in 'db' as acked by the server.
>>> + * It should be called when the server replies to monitor_cond_change
>>> + * requests.
>>> + */
>>> +static void
>>> +ovsdb_cs_db_ack_condition(struct ovsdb_cs_db *db)
>>> +{
>>> + struct ovsdb_cs_db_table *table;
>>> + HMAP_FOR_EACH (table, hmap_node, &db->tables) {
>>> + if (table->req_cond) {
>>> + json_destroy(table->ack_cond);
>>> + table->ack_cond = table->req_cond;
>>> + table->req_cond = NULL;
>>> + }
>>> + }
>>> +}
>>> +
>>> +/* Should be called when the CS fsm is restarted and resyncs table conditions
>>> + * based on the state the DB is in:
>>> + * - if a non-zero last_id is available for the DB then upon reconnect
>>> + * the CS should first request acked conditions to avoid missing updates
>>> + * about records that were added before the transaction with
>>> + * txn-id == last_id. If there were requested condition changes in flight
>>> + * (i.e., req_cond not NULL) and the CS client didn't set new conditions
>>> + * (i.e., new_cond is NULL) then move req_cond to new_cond to trigger a
>>> + * follow up monitor_cond_change request.
>>> + * - if there's no last_id available for the DB then it's safe to use the
>>> + * latest conditions set by the CS client even if they weren't acked yet.
>>> + */
>>> +static void
>>> +ovsdb_cs_db_sync_condition(struct ovsdb_cs_db *db)
>>> +{
>>> + bool ack_all = uuid_is_zero(&db->last_id);
>>> + if (ack_all) {
>>> + db->cond_changed = false;
>>> + }
>>> +
>>> + struct ovsdb_cs_db_table *table;
>>> + HMAP_FOR_EACH (table, hmap_node, &db->tables) {
>>> + /* When monitor_cond_since requests will be issued, the
>>> + * table->ack_cond condition will be added to the "where" clause.
>>> + * Follow up monitor_cond_change requests will use table->new_cond.
>>> + */
>>> + if (ack_all) {
>>> + if (table->new_cond) {
>>> + json_destroy(table->req_cond);
>>> + table->req_cond = table->new_cond;
>>> + table->new_cond = NULL;
>>> + }
>>> +
>>> + if (table->req_cond) {
>>> + json_destroy(table->ack_cond);
>>> + table->ack_cond = table->req_cond;
>>> + table->req_cond = NULL;
>>> + }
>>> + } else {
>>> + /* If there was no "unsent" condition but instead a
>>> + * monitor_cond_change request was in flight, move table->req_cond
>>> + * to table->new_cond and set db->cond_changed to trigger a new
>>> + * monitor_cond_change request.
>>> + *
>>> + * However, if a new condition has been set by the CS client,
>>> + * monitor_cond_change will be sent anyway and will use the most
>>> + * recent table->new_cond so there's no need to update it here.
>>> + */
>>> + if (table->req_cond) {
>>> + if (table->new_cond) {
>>> + json_destroy(table->req_cond);
>>> + } else {
>>> + table->new_cond = table->req_cond;
>>> + }
>>> + table->req_cond = NULL;
>>> + db->cond_changed = true;
>>> + }
>>
>> This used to be:
>>
>> if (table->req_cond && !table->new_cond) {
>> /* Move "req_cond" to "new_cond". */
>> ovsdb_idl_condition_move(&table->new_cond, &table->req_cond);
>> db->cond_changed = true;
>> }
>>
>> Which is what the comment above tried to explain.
>>
>> Is there a case that was missed in the old code? If so, can we factor
>> out the fix in a separate patch to make it easier to track?
>
> I suppose that this additional check here is to clear 'req_cond', so
> we will not assert inside ovsdb_cs_db_compose_cond_change().
> Previously we had ovsdb_idl_condition_move() function that took care
> of destroying the destination, but now we have no such function and
> have an assert to be sure that we're not leaking the 'req_cond'.
Ah, I see now, thanks for clarifying this!
>
> This should not actually change the logic. But I think that we need
> to rephrase part of the comment that starts with 'However' as it
> doesn't reflect the code anymore.
Agreed.
>
> Otherwise, the patch looks ok to me.
>
>
> On the side note: don't we need to increase 'cond_seqno' here since
> we're practically dropping one of the condition requests? User might
> wait for this sequence number, but 'req_cond' will be cleared and
> never be acked.
I think you're right; we should be increasing 'cond_seqno' to match the
logic in ovsdb_cs_db_set_condition() (or ovsdb_idl_db_set_condition() in
the old code).
I can take care of this in a follow-up patch, as it seems to be a
pre-existing issue in the IDL code.
>
> Best regards, Ilya Maximets.
>
Regards,
Dumitru