Suppose we have a table `(h, r, v)`, where `h` is the hash key, `r` is the range key, and `v` is a value column.
We execute the following command:
(h, r) = v1
Then, in parallel:
update (h, r) = v2 if exists
delete (h)
The delete would take a lock on `(h)` only, and the update would take a lock on `(h, r)`.
Because these locks do not conflict, the two statements execute in parallel, and we could end up with a table that contains `(h, r, v2)`, which should not happen.
… primary keys
Summary:
Suppose we have a table `t` with columns `(h, r, v)`, where `h` is a hash key
column, `r` is a range key column, and `v` is a non-primary key column. If we
first insert a row:
INSERT INTO t (h, r, v) VALUES (h1, r1, v1);
and then execute the following two statements in parallel:
UPDATE t SET v = v2 WHERE h = h1 AND r = r1;
DELETE FROM t WHERE h = h1;
then the `DELETE` would only take a lock on the `(h1)` prefix of the primary
key, and the `UPDATE` would only take a lock on `(h1, r1)`. The conflict
between these two statements would not be detected by the lock manager, and we
could end up with a table that contains `(h1, r1, v2)`, which cannot happen
under either of the two possible serial orderings of these two statements. The
only valid state after running these two statements in either order is a table
with no row with the primary key `(h1, r1)`: the `UPDATE` statement has no
effect if the `DELETE` statement executes first.
This diff adds weak locks for all possible parts of the full key, starting with
the hash component, and including varying numbers of range keys up to the
entire range key. E.g. if the primary key is `(h1, h2, r1, r2, r3)`, weak locks
would be taken on `(h1, h2)` (it is not possible to split the compound hash key
for the purpose of hierarchical locking), `(h1, h2, r1)`, `(h1, h2, r1, r2)`,
and `(h1, h2, r1, r2, r3)`. Just as a reminder, "weak locks" in the DocDB
terminology are locks acquired on parent nodes of the node that is actually
being modified by the operation. More details on our locking model are here:
https://docs.yugabyte.com/latest/architecture/transactions/isolation-levels/#fine-grained-locking
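The prefix enumeration described above can be sketched as follows. This is an illustrative sketch, not the actual DocDB code; the function name and the string representation of key components are hypothetical. It shows the key property of the fix: the compound hash key is treated as a single indivisible prefix, and each range column extends the previous prefix by one component.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch of hierarchical lock-key enumeration. Given the hash
// columns and range columns of a primary key, return every prefix on which a
// lock must be taken: weak locks on all proper prefixes, and a strong lock on
// the full key (the last element of the result).
std::vector<std::vector<std::string>> LockKeyPrefixes(
    const std::vector<std::string>& hash_cols,
    const std::vector<std::string>& range_cols) {
  std::vector<std::vector<std::string>> prefixes;
  // The compound hash key is never split, so the shortest prefix is the
  // entire hash component taken as one unit.
  std::vector<std::string> current = hash_cols;
  prefixes.push_back(current);
  // Each subsequent prefix adds one more range column.
  for (const auto& r : range_cols) {
    current.push_back(r);
    prefixes.push_back(current);
  }
  return prefixes;
}
```

For the example primary key `(h1, h2, r1, r2, r3)` this yields `(h1, h2)`, `(h1, h2, r1)`, `(h1, h2, r1, r2)`, and `(h1, h2, r1, r2, r3)`, matching the list above. With these locks in place, the `DELETE`'s strong lock on the hash prefix conflicts with the `UPDATE`'s weak lock on the same prefix, so the race is detected.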
Also fixed an issue caught by the newly added test:
`GetWriteIntentsForIsolationLevel` returned an invalid strong intent for
`SERIALIZABLE_ISOLATION`.
Also replaced `const std::vector<PrimaryKey>&` with `std::vector<PrimaryKey>` plus `std::move` in the `DocKey` constructor, since the argument is always copied anyway.
Taking the vector by value and moving it avoids an extra copy when a temporary vector is used to create a `DocKey`.
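The pass-by-value-and-move idiom looks like this. `DocKeySketch` is a hypothetical stand-in for the real `DocKey`, which takes more parameters; only the argument-passing style is the point here.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Illustrative sketch of the sink-argument idiom: take the vector by value so
// a caller's temporary (or explicitly moved lvalue) is moved into the
// parameter instead of deep-copied, then move it again into the member.
struct DocKeySketch {  // hypothetical stand-in for the real DocKey
  explicit DocKeySketch(std::vector<std::string> range_components)
      : range_components_(std::move(range_components)) {}

  std::vector<std::string> range_components_;
};
```

With the old `const std::vector<PrimaryKey>&` signature, a temporary argument would bind to the reference and then be copied into the member; with pass-by-value plus `std::move`, the temporary is moved twice and never copied.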
Test Plan:
ybd --cxx-test ql-tablet-test --gtest_filter QLTabletTest.DeleteByHashKey
ybd --cxx-test ql-tablet-test --gtest_filter QLTabletTest.DeleteByHashAndPartialRangeKey
Reviewers: timur, robert, mihnea, mikhail
Reviewed By: mikhail
Subscribers: kannan, ybase
Differential Revision: https://phabricator.dev.yugabyte.com/D5779