Bloating data and orioledb_data on s3 with cluster start-stops #334
At 02bc078: the increase of the S3 bucket is smaller than before (1.4x vs. 2.1x), but it still occurs with start-stops alone (no inserts).
At 9c34d00, after 16 start-stops:
@pashkinelfe, please recheck on b225ece.
Rechecked. The orioledb_undo bloating has disappeared. Now the /data/NN contents on S3 take ~180 MB per checkpoint (down from 600 MB per checkpoint). It consists of two parts; part (2) is base/ (90 MB).
On the S3 bucket directory structure:
I've initialized a cluster with a 10K-partition table, inserted around 6 GB of tuples, then did cluster start-stops without any modifications, and saw the S3 directory grow gradually from 11.8 GB (174K files) to 25 GB (475K files) over 16 start-stops.
Directory structure of S3 (after 16 start-stops):
The local pgdata directory size grew only insignificantly, from 5.9 to 6.2 GB.
Stop and start times on the first cycle were 7m0.841s (stop) and 3m7.716s (start).
Over the next 15 cycles they stabilized at around ~1:45 (stop) and ~2:05 (start).
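The size and file-count growth reported above can be tracked between start-stop cycles with a small script. Here is a minimal sketch in Python (the paths in the comment are placeholders, not from this report) that totals bytes and files under a directory tree, roughly equivalent to `du -sb` combined with `find . -type f | wc -l`:

```python
import os

def dir_stats(root):
    """Return (total_bytes, file_count) for all regular files under root."""
    total, files = 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                total += os.path.getsize(path)
                files += 1
            except OSError:
                pass  # file removed between walk() and stat(); skip it
    return total, files

# Hypothetical usage: run once per start-stop cycle against pgdata
# (or a locally mounted copy of the S3 bucket) and compare the results.
# size, count = dir_stats("/var/lib/postgresql/data")
# print(f"{size / 1024**3:.1f} GB in {count} files")
```

For the S3 side itself, `aws s3 ls s3://<bucket> --recursive --summarize` reports the same totals (object count and total size) directly from the bucket.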
Checked at be17f3a (rather than 234115c, because of the segfaults in #333).