flat storage limit #140
Conversation
The code itself looks alright to me, but I would like to challenge the mechanism a little. Currently, if the user has reached their limit, new blocks can still be written, but they will be empty, and the block that was written as the user reached their limit will be truncated. I think this is not ideal for two reasons:
- Users will assume a successful write means the entirety of their data was written to Kepler, not a truncated copy or an empty block. So I think the UX is better if the transaction simply fails.
- This is still susceptible to misuse. I wouldn't put it past someone "testing" writing data in a loop just to see how much they can write. In this case they will still bloat the index.
Understandable; my reasoning for allowing truncated/empty blocks was perhaps misguided. I'll see about using the Content-Length header to determine if the allowed size would be exceeded. I previously wanted to avoid edge cases around chunked encoding, where the content length only reflects the length of the first chunk. When it comes to delegations and invocations to read, I reckon we should still allow those even if the limit is exceeded, or session keys and the like will stop working past that point.
Implemented a special AsyncReader which returns an error if the data read exceeds a given limit; this should cover all cases.
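A limit-enforcing reader along those lines could be sketched roughly as follows, using the blocking `std::io::Read` trait for illustration (Kepler itself wraps an async reader; the `LimitedReader` type and the error message here are hypothetical, not taken from the PR):

```rust
use std::io::{self, Read};

/// Sketch of a reader that fails once more than `limit` bytes pass through.
/// The caller can map the resulting error to a 413 HTTP response and discard
/// the partially written block instead of truncating it.
pub struct LimitedReader<R> {
    inner: R,
    remaining: u64,
}

impl<R: Read> LimitedReader<R> {
    pub fn new(inner: R, limit: u64) -> Self {
        Self { inner, remaining: limit }
    }
}

impl<R: Read> Read for LimitedReader<R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let n = self.inner.read(buf)?;
        // Fail the whole read as soon as the cumulative byte count would
        // exceed the limit; some bytes have already been consumed from the
        // inner reader at this point, which is acceptable since the write
        // is being aborted anyway.
        if (n as u64) > self.remaining {
            return Err(io::Error::new(
                io::ErrorKind::InvalidData,
                "storage limit exceeded",
            ));
        }
        self.remaining -= n as u64;
        Ok(n)
    }
}
```

Because the check happens on the byte stream itself rather than on Content-Length, this also covers chunked transfer encoding, where the declared length cannot be trusted.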
59efbad to b2eb724
Description
This PR introduces an initial flat storage limit, configurable on a per-instance basis via the `kepler.toml` config file or the corresponding env flag. The limit is intended to be an initial limit, which can in future be increased via a capabilities model. Limits are expressed as human-readable strings of the form `10MiB`, `1.5 GiB`, etc. Currently, written content which exceeds the limit will be truncated. Once the limit is reached, or if an incoming PUT would exceed the limit, additional writes will result in a `413 Entity Too Large` HTTP response. Delegation, invocation and revocation will continue to function; the limit only applies to content written via PUT requests to the key-value store. In future the way this is applied may change, but the external API should not.
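As a rough illustration of how such human-readable limit strings decompose into byte counts (Kepler may well delegate this to an existing crate; `parse_size` here is a hypothetical helper, not the PR's code):

```rust
/// Parse a human-readable size string such as "10MiB" or "1.5 GiB" into a
/// byte count. Binary units (KiB/MiB/GiB) are powers of 1024, decimal units
/// (KB/MB/GB) powers of 1000. Returns None on unrecognised input.
fn parse_size(s: &str) -> Option<u64> {
    let s = s.trim();
    // Split at the first alphabetic character: numeric part, then unit.
    let idx = s.find(|c: char| c.is_ascii_alphabetic()).unwrap_or(s.len());
    let (num, unit) = s.split_at(idx);
    let value: f64 = num.trim().parse().ok()?;
    let mult: u64 = match unit.trim() {
        "" | "B" => 1,
        "KiB" => 1 << 10,
        "MiB" => 1 << 20,
        "GiB" => 1 << 30,
        "KB" => 1_000,
        "MB" => 1_000_000,
        "GB" => 1_000_000_000,
        _ => return None,
    };
    Some((value * mult as f64) as u64)
}
```

Fractional values like `1.5 GiB` are the reason the numeric part is parsed as a float before multiplying by the unit.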