
flat storage limit #140

Merged
merged 20 commits into main from feat/storage-limits on Jul 5, 2023

Conversation

@chunningham chunningham (Member) commented Apr 3, 2023

Description

This PR introduces an initial flat storage limit, configurable per instance via the kepler.toml config file or the corresponding env flag. It is intended as an initial limit which can later be raised via a capabilities model. Limits are expressed as human-readable strings such as 10MiB or 1.5 GiB. Currently, written content which exceeds the limit is truncated. Once the limit is reached, or if an incoming PUT would exceed it, further writes result in a 413 Entity Too Large HTTP response. Delegation, invocation and revocation continue to function; the limit only applies to content written via PUT requests to the key-value store. The way the limit is applied may change in the future, but the external API should not.
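As a rough sketch of how a human-readable limit string might be turned into a byte count (the PR does not show its parsing code, so the function name and supported units below are assumptions, and the actual kepler.toml key may differ):

```rust
/// Hypothetical parser for human-readable sizes such as "10MiB" or "1.5 GiB".
/// Kepler's real config handling is not shown in this PR and likely differs.
fn parse_size(s: &str) -> Option<u64> {
    let s = s.trim();
    // Split the numeric part from the unit at the first letter.
    let split = s.find(|c: char| c.is_ascii_alphabetic()).unwrap_or(s.len());
    let (num, unit) = s.split_at(split);
    let value: f64 = num.trim().parse().ok()?;
    let multiplier: u64 = match unit.trim() {
        "" | "B" => 1,
        "KiB" => 1 << 10,
        "MiB" => 1 << 20,
        "GiB" => 1 << 30,
        "KB" => 1_000,
        "MB" => 1_000_000,
        "GB" => 1_000_000_000,
        _ => return None,
    };
    Some((value * multiplier as f64) as u64)
}

fn main() {
    assert_eq!(parse_size("10MiB"), Some(10 * 1024 * 1024));
    assert_eq!(parse_size("1.5 GiB"), Some(1_610_612_736));
}
```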

Type

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)

Diligence Checklist

(Please delete options that are not relevant)

  • This change requires a documentation update
  • I have included unit tests
  • I have updated and/or included new integration tests
  • I have updated and/or included new end-to-end tests
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • Any dependent changes have been merged and published in downstream modules

@cobward cobward (Collaborator) left a comment

The code itself looks alright to me, but I would like to challenge the mechanism a little. Currently, if the user has reached their limit, new blocks can still be written, but they will be empty, and the block that was being written as the user reached their limit will be truncated. I think this is not ideal for two reasons:

  • Users will assume a successful write means the entirety of their data was written to Kepler, not a truncated copy or an empty block. So I think the UX is better if the transaction simply fails.

  • This is still susceptible to misuse. I wouldn't put it past someone "testing" writing data in a loop just to see how much they can write. In this case they will still bloat the index.

@chunningham chunningham (Member Author) commented Apr 4, 2023

Understandable; my reasoning for allowing truncated/empty blocks was perhaps misguided. I'll see about using the Content-Length header to determine whether the allowed size would be exceeded; I previously wanted to avoid edge cases around chunked encoding, where the content length only reflects the length of the first chunk. When it comes to delegations and invocations to read, I reckon we should still allow those even if the limit is exceeded, or session keys and the like will fail to work past that point.
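For illustration only (the PR does not spell out this exact check), a framework-agnostic sketch of the up-front decision: reject immediately when a declared Content-Length cannot fit, and fall back to enforcing the limit while reading the body when no length is declared (e.g. chunked transfer encoding):

```rust
/// Hypothetical pre-check, not necessarily how Kepler implements it.
/// `used` is the number of bytes already stored, `limit` the configured cap.
fn precheck_content_length(declared: Option<u64>, used: u64, limit: u64) -> Result<(), u16> {
    match declared {
        // A declared body that cannot fit is rejected before reading anything.
        Some(len) if used.saturating_add(len) > limit => Err(413), // Entity Too Large
        // No Content-Length (e.g. chunked encoding): enforce while streaming.
        _ => Ok(()),
    }
}
```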

@chunningham chunningham (Member Author) commented

Implemented a special AsyncReader which throws an error if the data read exceeds a given limit; this should cover all cases.
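A minimal sketch of such a limit-enforcing reader, assuming tokio's AsyncRead trait; the type name, error kind, and exact accounting in the PR may differ:

```rust
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::{AsyncRead, ReadBuf};

/// Hypothetical wrapper: yields an error once more bytes than `remaining`
/// have been read from the inner reader, so a handler can map it to a 413.
pub struct LimitedReader<R> {
    inner: R,
    remaining: u64,
}

impl<R> LimitedReader<R> {
    pub fn new(inner: R, limit: u64) -> Self {
        Self { inner, remaining: limit }
    }
}

impl<R: AsyncRead + Unpin> AsyncRead for LimitedReader<R> {
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut ReadBuf<'_>,
    ) -> Poll<io::Result<()>> {
        let this = self.get_mut();
        let before = buf.filled().len();
        match Pin::new(&mut this.inner).poll_read(cx, buf) {
            Poll::Ready(Ok(())) => {
                let read = (buf.filled().len() - before) as u64;
                if read > this.remaining {
                    // Exceeding the limit turns the read into an error instead
                    // of silently truncating the written content.
                    return Poll::Ready(Err(io::Error::new(
                        io::ErrorKind::Other,
                        "storage limit exceeded",
                    )));
                }
                this.remaining -= read;
                Poll::Ready(Ok(()))
            }
            other => other,
        }
    }
}
```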

@chunningham chunningham merged commit 694ed55 into main Jul 5, 2023
12 checks passed
@chunningham chunningham deleted the feat/storage-limits branch July 5, 2023 14:51