Fix crash for concurrent drop and compress chunk #2688
Merged: erimatnor merged 1 commit into timescale:master from erimatnor:fix-drop-chunks-compress-crash on Nov 30, 2020
tsl/test/isolation/expected/deadlock_drop_chunks_compress.out (237 additions, 0 deletions)
@@ -0,0 +1,237 @@
Parsed test spec with 2 sessions

starting permutation: s1_drop s1_commit s2_compress_chunk_1 s2_compress_chunk_2 s2_commit
Review comment: This iterates over all possible step order combinations, right?

Reply: Unless a test specifies explicit permutations, it will run all permutations.
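For context, a hedged sketch of what such a spec looks like. The step bodies below are copied from the output in this file; the setup and teardown blocks, the chunks_to_compress view definition, and the lock_timeout setting are assumptions for illustration, since the .spec file itself is not part of this excerpt. Because no explicit permutation lines are declared, the isolation tester runs every interleaving that preserves each session's internal step order: with two steps in s1 and three in s2 that is C(5,2) = 10 permutations, matching the ten blocks in this expected output.

# Hypothetical reconstruction, not the spec file added by this PR.
setup
{
    -- assumed setup: a 'conditions' hypertable with droppable chunks and a
    -- helper view listing the chunks eligible for compression
    CREATE VIEW chunks_to_compress AS
        SELECT show_chunks('conditions') AS chunk;
}

teardown
{
    DROP VIEW chunks_to_compress;
}

session "s1"
setup { BEGIN; }
step "s1_drop" {
    SELECT count (*)
    FROM drop_chunks('conditions', older_than => '2018-12-03 00:00'::timestamptz);
}
step "s1_commit" { COMMIT; }

session "s2"
# lock_timeout is assumed; several permutations show lock-timeout errors in s2
setup { BEGIN; SET LOCAL lock_timeout = '500ms'; }
step "s2_compress_chunk_1" {
    SELECT count(compress_chunk(chunk))
    FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 ASC LIMIT 1) AS chunk;
}
step "s2_compress_chunk_2" {
    SELECT count(compress_chunk(chunk))
    FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 DESC LIMIT 1) AS chunk;
}
step "s2_commit" { COMMIT; }

# No "permutation" lines here, so all ten interleavings run.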
step s1_drop:
   SELECT count (*)
   FROM drop_chunks('conditions', older_than => '2018-12-03 00:00'::timestamptz);

count

2
step s1_commit: COMMIT;
step s2_compress_chunk_1:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 ASC LIMIT 1) AS chunk;

ERROR: chunk not found
step s2_compress_chunk_2:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 DESC LIMIT 1) AS chunk;

ERROR: current transaction is aborted, commands ignored until end of transaction block
step s2_commit: COMMIT;

starting permutation: s1_drop s2_compress_chunk_1 s1_commit s2_compress_chunk_2 s2_commit
step s1_drop:
   SELECT count (*)
   FROM drop_chunks('conditions', older_than => '2018-12-03 00:00'::timestamptz);

count

2
step s2_compress_chunk_1:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 ASC LIMIT 1) AS chunk;
<waiting ...>
step s1_commit: COMMIT;
step s2_compress_chunk_1: <... completed>
error in steps s1_commit s2_compress_chunk_1: ERROR: chunk deleted by other transaction
step s2_compress_chunk_2:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 DESC LIMIT 1) AS chunk;

ERROR: current transaction is aborted, commands ignored until end of transaction block
step s2_commit: COMMIT;

starting permutation: s1_drop s2_compress_chunk_1 s2_compress_chunk_2 s1_commit s2_commit
step s1_drop:
   SELECT count (*)
   FROM drop_chunks('conditions', older_than => '2018-12-03 00:00'::timestamptz);

count

2
step s2_compress_chunk_1:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 ASC LIMIT 1) AS chunk;
<waiting ...>
step s2_compress_chunk_1: <... completed>
ERROR: canceling statement due to lock timeout
step s2_compress_chunk_2:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 DESC LIMIT 1) AS chunk;

ERROR: current transaction is aborted, commands ignored until end of transaction block
step s1_commit: COMMIT;
step s2_commit: COMMIT;

starting permutation: s1_drop s2_compress_chunk_1 s2_compress_chunk_2 s2_commit s1_commit
step s1_drop:
   SELECT count (*)
   FROM drop_chunks('conditions', older_than => '2018-12-03 00:00'::timestamptz);

count

2
step s2_compress_chunk_1:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 ASC LIMIT 1) AS chunk;
<waiting ...>
step s2_compress_chunk_1: <... completed>
ERROR: canceling statement due to lock timeout
step s2_compress_chunk_2:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 DESC LIMIT 1) AS chunk;

ERROR: current transaction is aborted, commands ignored until end of transaction block
step s2_commit: COMMIT;
step s1_commit: COMMIT;

starting permutation: s2_compress_chunk_1 s1_drop s1_commit s2_compress_chunk_2 s2_commit
step s2_compress_chunk_1:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 ASC LIMIT 1) AS chunk;

count

1
step s1_drop:
   SELECT count (*)
   FROM drop_chunks('conditions', older_than => '2018-12-03 00:00'::timestamptz);
<waiting ...>
step s1_drop: <... completed>
ERROR: some chunks could not be read since they are being concurrently updated
step s1_commit: COMMIT;
step s2_compress_chunk_2:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 DESC LIMIT 1) AS chunk;

count

1
step s2_commit: COMMIT;

starting permutation: s2_compress_chunk_1 s1_drop s2_compress_chunk_2 s1_commit s2_commit
step s2_compress_chunk_1:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 ASC LIMIT 1) AS chunk;

count

1
step s1_drop:
   SELECT count (*)
   FROM drop_chunks('conditions', older_than => '2018-12-03 00:00'::timestamptz);
<waiting ...>
step s2_compress_chunk_2:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 DESC LIMIT 1) AS chunk;

count

1
step s1_drop: <... completed>
ERROR: some chunks could not be read since they are being concurrently updated
step s1_commit: COMMIT;
step s2_commit: COMMIT;

starting permutation: s2_compress_chunk_1 s1_drop s2_compress_chunk_2 s2_commit s1_commit
step s2_compress_chunk_1:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 ASC LIMIT 1) AS chunk;

count

1
step s1_drop:
   SELECT count (*)
   FROM drop_chunks('conditions', older_than => '2018-12-03 00:00'::timestamptz);
<waiting ...>
step s2_compress_chunk_2:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 DESC LIMIT 1) AS chunk;

count

1
step s2_commit: COMMIT;
step s1_drop: <... completed>
count

2
step s1_commit: COMMIT;

starting permutation: s2_compress_chunk_1 s2_compress_chunk_2 s1_drop s1_commit s2_commit
step s2_compress_chunk_1:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 ASC LIMIT 1) AS chunk;

count

1
step s2_compress_chunk_2:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 DESC LIMIT 1) AS chunk;

count

1
step s1_drop:
   SELECT count (*)
   FROM drop_chunks('conditions', older_than => '2018-12-03 00:00'::timestamptz);
<waiting ...>
step s1_drop: <... completed>
ERROR: some chunks could not be read since they are being concurrently updated
step s1_commit: COMMIT;
step s2_commit: COMMIT;

starting permutation: s2_compress_chunk_1 s2_compress_chunk_2 s1_drop s2_commit s1_commit
step s2_compress_chunk_1:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 ASC LIMIT 1) AS chunk;

count

1
step s2_compress_chunk_2:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 DESC LIMIT 1) AS chunk;

count

1
step s1_drop:
   SELECT count (*)
   FROM drop_chunks('conditions', older_than => '2018-12-03 00:00'::timestamptz);
<waiting ...>
step s2_commit: COMMIT;
step s1_drop: <... completed>
count

2
step s1_commit: COMMIT;

starting permutation: s2_compress_chunk_1 s2_compress_chunk_2 s2_commit s1_drop s1_commit
step s2_compress_chunk_1:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 ASC LIMIT 1) AS chunk;

count

1
step s2_compress_chunk_2:
   SELECT count(compress_chunk(chunk))
   FROM (SELECT chunk FROM chunks_to_compress ORDER BY 1 DESC LIMIT 1) AS chunk;

count

1
step s2_commit: COMMIT;
step s1_drop:
   SELECT count (*)
   FROM drop_chunks('conditions', older_than => '2018-12-03 00:00'::timestamptz);

count

2
step s1_commit: COMMIT;
Review comment: Do we need to initialize this field somewhere as well before table_tuple_lock or heap_lock_tuple, since it was used only on the stack before?

Reply: It is usually embedded in a larger struct, which, AFAIK, is initialized properly in the places where it is used. Still, even if this were garbage, it is not something one should look at unless lockresult tells you to, in which case lockfd should also be set accordingly.

Reply: Sure, I was mostly thinking about tools like Valgrind, which might raise additional warnings. Anyway, I don't think it is necessary to change anything.
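For readers following the thread: lockresult and lockfd in this discussion correspond to the TM_Result return value and the TM_FailureData out-parameter of PostgreSQL's table_tuple_lock() (heap_lock_tuple being the pre-PG12 heap-level equivalent). A minimal sketch of the usual calling pattern follows; the function, variable names, and error text are illustrative, not TimescaleDB's actual call site. The point of the reply above is visible in the switch: the failure-data struct is only read on result codes that say the lock attempt hit a concurrent update.

#include "postgres.h"
#include "access/tableam.h"
#include "utils/snapmgr.h"

/*
 * Sketch of the table_tuple_lock() calling convention in PostgreSQL 12+.
 * Illustrative only; not the call site changed by this PR.
 */
static void
lock_chunk_tuple(Relation rel, ItemPointer tid, TupleTableSlot *slot)
{
	TM_FailureData tmfd;	/* out-param; filled in only on failure paths */
	TM_Result	result;

	result = table_tuple_lock(rel, tid, GetLatestSnapshot(), slot,
							  GetCurrentCommandId(false),
							  LockTupleExclusive,
							  LockWaitBlock,
							  0,	/* flags */
							  &tmfd);

	switch (result)
	{
		case TM_Ok:
			/* Lock acquired; tmfd must not be inspected here. */
			break;

		case TM_Updated:
		case TM_Deleted:
			/*
			 * Only on these result codes are tmfd's fields (ctid, xmax,
			 * cmax, traversed) meaningful, e.g. for reporting that the
			 * row vanished under a concurrent transaction.
			 */
			ereport(ERROR,
					(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
					 errmsg("chunk deleted by other transaction")));
			break;

		default:
			elog(ERROR, "unexpected tuple lock result: %d", result);
			break;
	}
}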