Make partition deletion resilient against oversize #2431
Conversation
Force-pushed from 05f8967 to 283cc84
This still needs a fix for the index statistics.
Looks good to me from a high-level perspective: when we attempt to erase partitions that are corrupted, we try to guess the store file and erase that; and when we start VAST and encounter an oversized partition, we try to fix the DB state by reimporting the data.
It should actually be possible to create a unit test for this by creating a regular partition, adjusting the seek position to >= 2 GB, and verifying that the data is queryable after the index has restarted.
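A minimal sketch of the padding step such a test could use, assuming the fixture can locate the partition file on disk; the function name and the 2 GiB constant are illustrative, not existing VAST test code:

```cpp
#include <cstdint>
#include <filesystem>

// Grow a partition file past the 2 GiB FlatBuffers limit so that the index
// treats it as oversized on the next startup. The path would come from the
// test fixture (e.g. derived from the partition UUID); it is a placeholder
// here. resize_file zero-extends the file, which is enough to trip a size
// check without touching the original bytes.
void make_partition_oversized(const std::filesystem::path& partition_file) {
  constexpr auto fbs_limit = std::uintmax_t{2} * 1024 * 1024 * 1024;
  std::filesystem::resize_file(partition_file, fbs_limit + 1);
}
```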
Force-pushed from 6c3d7a9 to c7f6a6f
Force-pushed from c7f6a6f to 9cef2a9
Force-pushed from 361d664 to 30a3dd7
This reverts commit c70bc5d. The copy assignment operator takes its argument by copy as well, which means that the move assignment operator is an ambiguous overload.
Partition files over 2 GB are unreadable; we now try to repair the data by reimporting it from the corresponding store file if one exists. We don't have the same mechanism for archive-backed data, which is still lost.
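A rough sketch of that repair flow under the behavior described above; the helper `reimport_from_store`, the directory layout, and the 2 GiB constant are assumptions for illustration, not VAST's actual internals:

```cpp
#include <cstdint>
#include <filesystem>

namespace fs = std::filesystem;

// Hypothetical entry point that re-ingests the events from a store file;
// assumed for this sketch, not an actual VAST function.
void reimport_from_store(const fs::path& store_file);

// Sketch of the startup repair: if a partition file exceeds the 2 GiB
// FlatBuffers limit it cannot be deserialized, so we look for the
// corresponding store file and reimport from it. Archive-backed partitions
// have no store file, so their data remains lost.
void repair_oversized_partition(const fs::path& partition_file,
                                const fs::path& store_dir) {
  constexpr auto fbs_limit = std::uintmax_t{2} * 1024 * 1024 * 1024;
  if (fs::file_size(partition_file) <= fbs_limit)
    return; // Readable partition; nothing to repair.
  // Guess the store file from the partition's file name (e.g. its UUID).
  auto store_file = store_dir / partition_file.filename();
  if (!fs::exists(store_file))
    return; // No store file: archive-backed data, currently unrecoverable.
  fs::remove(partition_file);      // Drop the unreadable partition state.
  reimport_from_store(store_file); // Rebuild it from the raw events.
}
```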
... and update a TODO message
Force-pushed from a80a0a4 to 2262218
Force-pushed from 5b0d70d to b40cf5b
We now reingest data from oversized partitions on startup.
📝 Checklist
🎯 Review Instructions
Testing instructions: you can manually increase the size of a partition file beyond the FlatBuffers (fbs) size limit, then start VAST:
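The concrete commands are not reproduced in this excerpt; as a hedged stand-in for the padding step, a small tool like the following (with an illustrative partition path layout) can be used, after which you would start VAST normally:

```cpp
#include <cstdint>
#include <filesystem>
#include <iostream>

// Stand-in for the padding step: grow the given partition file past the
// 2 GiB FlatBuffers limit. The path is passed on the command line, e.g.
// <db-directory>/index/<partition-uuid> (layout assumed for illustration).
int main(int argc, char** argv) {
  if (argc != 2) {
    std::cerr << "usage: pad-partition <partition-file>\n";
    return 1;
  }
  constexpr auto fbs_limit = std::uintmax_t{2} * 1024 * 1024 * 1024;
  std::filesystem::resize_file(argv[1], fbs_limit + 1);
  return 0;
}
```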
After that sequence the database should be fully repaired.