400KB snapshot limit #83

Open
alexmnyc opened this issue Jul 1, 2019 · 6 comments

alexmnyc commented Jul 1, 2019

Short description

The 400 KB DynamoDB item size limit on snapshot state should be handled by sharding the snapshot payload across multiple DynamoDB items.

Details

For example, an 800 KB snapshot state that cannot fit within the 400 KB limit should be saved as two DynamoDB items and joined back together by the snapshot store when it is read.
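
A minimal sketch of the chunking idea in Scala (the names here are illustrative, not the plugin's actual API): split the serialized snapshot into item-sized chunks on write and concatenate them in order on read.

```scala
object SnapshotSharding {
  // DynamoDB's hard item size limit is 400 KB; leave some headroom for keys and
  // other attributes. The exact headroom chosen here is an assumption.
  val MaxChunkBytes: Int = 400 * 1024 - 1024

  /** Split a serialized snapshot into chunks that each fit into one DynamoDB item. */
  def shard(payload: Array[Byte], maxChunkBytes: Int = MaxChunkBytes): Seq[Array[Byte]] =
    payload.grouped(maxChunkBytes).toSeq

  /** Rejoin the chunks (read back in index order) into the original payload. */
  def join(chunks: Seq[Array[Byte]]): Array[Byte] =
    Array.concat(chunks: _*)
}

object SnapshotShardingDemo extends App {
  val snapshot = Array.fill[Byte](800 * 1024)(1)  // an 800 KB snapshot state
  val items    = SnapshotSharding.shard(snapshot) // 3 items with the headroom above
  val restored = SnapshotSharding.join(items)
  println(s"stored as ${items.length} items, round-trips: ${restored.sameElements(snapshot)}")
}
```

Each chunk would then be written as its own item (e.g. keyed by persistence id, sequence number, and chunk index) as part of one logical snapshot write.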

teroxik (Contributor) commented Jul 9, 2019

Is this a feature request? It shouldn't really be very difficult to add.

alexmnyc (Author) commented

Correct, it would be a feature request. Thank you.

coreyoconnor (Member) commented

Interestingly, a similar situation arose in an Amazon library that provides an abstraction over DynamoDB. In that case, the library persisted large blobs to an S3 bucket: every logical record was physically implemented as a single DynamoDB record plus, optionally, a single S3 object. Compare that with the dynamic number of DynamoDB records that a DynamoDB-only solution would require.

Managing the resulting N DynamoDB records would not be terribly difficult, but it is a bit of a chore: partial writes have to behave as expected, and so on. There might be other advantages to the N-record approach, though.
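
A minimal sketch of that hybrid layout in Scala, with stand-in store interfaces rather than the real AWS SDK: small snapshots stay inline in the DynamoDB item, large ones go to S3 with only a pointer kept in DynamoDB.

```scala
import java.util.UUID

// The store interfaces below are stand-ins for this sketch, not the real AWS SDK.
trait DynamoItemStore { def put(key: String, attributes: Map[String, Array[Byte]]): Unit }
trait S3BlobStore     { def put(bucket: String, key: String, blob: Array[Byte]): Unit }

final class HybridSnapshotWriter(ddb: DynamoItemStore, s3: S3BlobStore, bucket: String) {
  // Leave headroom under the 400 KB item limit; the exact threshold is an assumption.
  private val InlineLimit = 350 * 1024

  def save(snapshotKey: String, payload: Array[Byte]): Unit =
    if (payload.length <= InlineLimit) {
      // Small snapshot: store the bytes directly in the DynamoDB item.
      ddb.put(snapshotKey, Map("payload" -> payload))
    } else {
      // Large snapshot: write the blob to S3 and keep only a pointer in DynamoDB,
      // so every logical record stays a single DynamoDB item.
      val s3Key = s"snapshots/$snapshotKey/${UUID.randomUUID()}"
      s3.put(bucket, s3Key, payload)
      ddb.put(snapshotKey, Map("s3Pointer" -> s3Key.getBytes("UTF-8")))
    }
}
```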

alexmnyc (Author) commented Jul 30, 2019

Yeah. That would make the plugin agnostic of payload size, which would be a nice concern to eliminate.

alexmnyc reopened this Jul 30, 2019
coreyoconnor (Member) commented

This design would result in the following features (a rough sketch of the two modes follows this comment):

  • 400 KB limit for DDB only
    • prerequisites: IAM with DDB access; DDB table creation x 2
  • no limit for DDB + S3
    • prerequisites: IAM with DDB and S3 access; DDB table creation x 2; S3 bucket creation; S3 access policy (? might have an OK default)

Additional operational requirements are always a concern, but the cost/benefit looks appropriate to me.
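
As a hypothetical illustration in Scala, the two modes could be modelled along these lines (the names are made up, not existing plugin settings):

```scala
/** Hypothetical model of the two storage modes discussed above; not an existing plugin setting. */
sealed trait SnapshotStorageMode

object SnapshotStorageMode {
  /** DynamoDB only: snapshots must fit the 400 KB item limit.
    * Prerequisites: IAM with DDB access, the two DDB tables. */
  case object DynamoOnly extends SnapshotStorageMode

  /** DynamoDB + S3: no practical snapshot size limit.
    * Additional prerequisites: an S3 bucket and an S3 access policy. */
  final case class DynamoPlusS3(bucket: String) extends SnapshotStorageMode
}
```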

alexmnyc (Author) commented

Agreed
