How to understand the capacity #112
Comments
ECS uses a data boxcarting strategy called "chunking". Chunks are pre-allocated when ECS is first initialized. When user data is written into ECS, the pre-allocated chunks are filled, and ECS pre-allocates however many new chunks it "thinks" will be needed, based on what it knows about past usage. Capacity statistics are calculated from allocated and pre-allocated chunks at the time the statistics are requested, so they don't exactly reflect the actual amount of user data stored within ECS. We do this because it is a performance-enhancing heuristic: a "good enough" representation of capacity without having to churn through the whole system to compute the actual user-data numbers. My apologies if it's a bit confusing at first, but I hope this explanation helps. As for the erasure coding stats showing... I don't know. I'll have to get back to you on that.
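To make the accounting described above concrete, here is a minimal sketch (not ECS source code) of chunk-based capacity reporting. The 128 MB chunk size is ECS's documented chunk size; the pre-allocation count and growth policy are purely illustrative assumptions:

```python
CHUNK_SIZE = 128 * 1024 * 1024  # ECS chunks are 128 MB

class ChunkPool:
    def __init__(self, preallocate=4):          # pre-allocation count is made up
        self.chunks = [0] * preallocate          # bytes actually written per chunk
        self.cursor = 0                          # index of the chunk being filled

    def write(self, nbytes):
        """Fill pre-allocated chunks; grow the pool when they run out."""
        while nbytes > 0:
            if self.cursor == len(self.chunks):
                # Hypothetical growth policy: pre-allocate as many new chunks
                # as were just consumed ("predict the future from the past").
                self.chunks.extend([0] * max(1, self.cursor))
            room = CHUNK_SIZE - self.chunks[self.cursor]
            used = min(room, nbytes)
            self.chunks[self.cursor] += used
            nbytes -= used
            if self.chunks[self.cursor] == CHUNK_SIZE:
                self.cursor += 1

    def reported_capacity(self):
        # Statistics count whole allocated/pre-allocated chunks...
        return len(self.chunks) * CHUNK_SIZE

    def actual_user_data(self):
        # ...not the bytes of user data inside them.
        return sum(self.chunks)

pool = ChunkPool()
pool.write(216_340)                  # a ~216 KB object
print(pool.reported_capacity())      # 536870912 bytes = 4 chunks x 128 MB "used"
print(pool.actual_user_data())       # 216340 bytes of user data actually stored
```

This is why the portal's "used capacity" can sit far above the sum of the objects you have uploaded: it counts chunk allocations, not object bytes.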
@twincitiesguy or @jasoncwik, can either of you provide input on how ECS performs capacity reporting in 3.0?
@captntuttle Please link the associated documentation for this issue in a comment here. Thanks! |
I'm using the single-node deployment and have uploaded two files so far, into different buckets.
The 1st file I uploaded is 216.34 KB. After uploading it, I checked the portal: used capacity was 84.54 GB, so I think most of that is system data.
The 2nd file is 2.1 GB. Afterwards, the used capacity was 94 GB. As far as I know there is no EC for a single-node deployment, so is the incremental ~10 GB three copies of the 2.1 GB file plus more system data?
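A rough back-of-the-envelope check of that guess, assuming (and this is only an assumption, not confirmed in this thread) that a single node keeps three mirrored copies of each object and that the 84.54 GB baseline is fixed system overhead:

```python
# Hypothetical accounting for the second upload; all factors are assumptions.
baseline_gb   = 84.54   # reported before the 2.1 GB upload
object_gb     = 2.1
mirror_copies = 3       # assumed triple mirroring on a single node

expected_gb = baseline_gb + mirror_copies * object_gb
print(expected_gb)      # ~90.84 GB
# Observed: 94 GB -> roughly 3 GB unaccounted for, which would be
# consistent with newly pre-allocated (but still empty) chunks,
# per the chunking explanation above.
```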
Also, the dashboard says 1.23 GB of user data, but the file I actually uploaded is 2.1 GB.
What does EC mean here if there is no EC on a single node?
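For reference, erasure coding normally applies only to multi-node deployments, and its raw-storage overhead is much lower than mirroring. A tiny sketch of the general formula, using the 12+4 scheme that ECS documentation describes as its default (treat the numbers as illustrative, since the thread above leaves the EC stats question open):

```python
# Raw-storage multiplier for an erasure-coding scheme: (data + parity) / data.
def ec_overhead(data_fragments, coding_fragments):
    return (data_fragments + coding_fragments) / data_fragments

print(ec_overhead(12, 4))  # ~1.33x raw storage, vs 3x for triple mirroring
```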