Fix segfault when deleting from compressed chunk #5643
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main    #5643      +/-   ##
==========================================
- Coverage   90.98%   90.96%   -0.03%
==========================================
  Files         230      230
  Lines       54460    54446      -14
==========================================
- Hits        49550    49525      -25
- Misses       4910     4921      +11
==========================================
... and 6 files with indirect coverage changes
@mkindahl, @gayyappan: please review this pull request.
Needs tests that cover this specific issue.
Force-pushed from 3323e1a to 5cc02a9.
For the segfault there doesn't seem to be a test case for now. However, I found a test case that results in unwanted decompression on joined tables, and I have added that test case.
Force-pushed from 5cc02a9 to f943c00.
This only fixes part of the issues mentioned in the ticket and does not address the problems with JOINs.
Not sure why you disabled the changelog check. You will need to add an entry eventually anyway.
Force-pushed from 92d2efb to fe9e57c.
Fixed.
Force-pushed from fe9e57c to 4b9beca.
For the segfault, I am unable to produce a test case. However, I now save the filters for each chunk separately and process each chunk against its own filters, so the segfault caused by a mismatch in attribute numbers should no longer occur.
Force-pushed from 4b9beca to e6e3fa7.
We don't need a changelog entry since the bug is not present in any released version.
Force-pushed: e6e3fa7 → 5e97343 → c3497be, 0d92431 → dc9f185 → bf8068b → f376085 → 6b9476c, 3c25745 → 04e1a1a.
Looks good; we can postpone the compression test refactoring to a follow-up PR.
Some bugs will need to be fixed in follow-up PRs, but approving to unblock those follow-ups.
I was thinking that I've already posted this... can be handled later as a follow-up as well.
scankeys =
    build_update_delete_scankeys(&decompressor, filters, &num_scankeys, &null_columns);
}
if (decompress_batches(&decompressor, |
What's the return-value contract of this method? (It's not documented...)
...but anyway, I think it would be better to have that as the return value instead of passing a writable bool pointer.
Note that I currently see 3 returns in this method, of which:
- 2 do not free scankeys (not necessarily an issue; I think you can just remove the if + pfree)
- 1 may not end with an ERROR and can exit the method without closing the heapScan. What could be the consequences of that?
...and I now wonder why this method tries to mask concurrent deletes/updates on the compressed table instead of erroring out straight away. As it is now, it may leave the table in an inconsistent state...
if (IsolationUsesXactSnapshot())
ereport(ERROR,
(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
errmsg("could not serialize access due to concurrent update")));
During UPDATE/DELETE on compressed hypertables, we iterate over the plan tree to collect all scan nodes. Each scan node can have filter conditions. Prior to this patch we collected only the first filter condition and used it for the first chunk, which may be wrong. With this patch, as soon as we encounter a target scan node, we immediately process its chunks. Fixes timescale#5640
Force-pushed from 04e1a1a to 6feab56.
Automated backport to 2.10.x not done: cherry-pick failed.