
Faster Up-scaling Based on Consumed Read/Write #227

Closed
jusgat opened this issue Dec 10, 2014 · 5 comments

Comments

@jusgat

jusgat commented Dec 10, 2014

I just started using Dynamic DynamoDB, and it's grand! Already I see a lot of benefit from it.

This is not an issue, but rather a feature request. I have very steep, unpredictable consumed read/write activity after periods of not much activity at all, such that I've encountered a number of times now where my table's provisioned reads are at, say, 50, and consumption jumps up to, say, 250. At present, Dynamic DynamoDB will use my increase-reads-with of 100% to increase the provisioned reads to 100, but then has to wait another 5-minute cycle to see that consumption is still above the threshold and increase by another 100%, and so on.

My question is, since Amazon no longer limits increases to 100% of the current provisioned amount, can Dynamic DynamoDB use the consumed percentage to modify the increase necessary to place the consumption under the upper threshold in a single step?

Using the numbers of my earlier example, if my table has 50 provisioned, and my consumption jumps to 250, Dynamic DynamoDB knows that consumption is at 500% and an increase of 100% won't cover it. One way to solve this is to loop through the possible increases until the right one to implement is found - 50 increase by 100% = 100; no; 100 increase by 100% = 200; no; 200 increase by 100% = 400; yes!
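
A rough sketch of that doubling loop, with made-up names rather than Dynamic DynamoDB's actual code:

```python
def provisioned_after_doubling(current_provisioned, consumed_units, upper_threshold_pct):
    """Keep doubling provisioned throughput until consumption falls under
    the upper threshold. Illustration only, not Dynamic DynamoDB's code."""
    new_provisioned = current_provisioned
    while consumed_units > new_provisioned * (upper_threshold_pct / 100.0):
        new_provisioned *= 2  # increase-reads-with 100%
    return new_provisioned


# The example above: 50 provisioned, consumption jumps to 250, 80% upper threshold
print(provisioned_after_doubling(50, 250, 80))  # -> 400
```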

With a feature like this, I would be able to keep my tables' minimum provisioned amounts very low for the periods of time they're not being used much, but then have them scale up to the correct amount in a single cycle when steep-sided plateaus of activity start.

Again, absolutely brilliant work. Thank you for your time.

@jusgat
Author

jusgat commented Dec 10, 2014

Another way to solve this is to simply take the consumed percentage - 500% in my example - multiply that by the current provisioned amount - 50 - and divide it by the upper threshold - let's say 80% - to get the new provisioned amount: 313.
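
A quick sketch of that calculation, again with made-up names, just to show the arithmetic:

```python
import math


def provisioned_from_consumption(current_provisioned, consumed_pct, upper_threshold_pct):
    """Jump straight to the throughput that puts consumption under the
    upper threshold. Illustration only, not Dynamic DynamoDB's code."""
    consumed_units = current_provisioned * (consumed_pct / 100.0)
    return math.ceil(consumed_units / (upper_threshold_pct / 100.0))


# 500% of 50 provisioned = 250 consumed units; 250 / 0.80 = 312.5, rounded up to 313
print(provisioned_from_consumption(50, 500, 80))  # -> 313
```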

@sebdah
Owner

sebdah commented Dec 20, 2014

Thanks for the feature request. This is a good idea, and @pragnesh and I are currently looking into it. Testing is needed.

@sebdah
Owner

sebdah commented Dec 22, 2014

This is now fixed and released in version 1.20.0. Thanks for the feature request @jusgat and thanks for the pull request @pragnesh!

Happy holidays!

@jusgat
Author

jusgat commented Dec 22, 2014

Really awesome! Thanks for the good news. Can't wait to get back from vacation and start using 1.20.0.

@Sazpaimon
Contributor

Bumping this issue because I think it needs to be documented. I ran into weird behavior when I was implementing the granular upscaling, where I had the following settings:

```
increase-consumed-writes-scale = {80: 7, 85: 14, 91: 22, 97: 30, 104: 39, 111: 48, 118: 58, 126: 68, 134: 79, 143: 91, 152: 103}
```

When I saw something like 400% utilization, I was seeing a ridiculously high increase in throughput that I could not understand until I saw this code. Had I known that, I would have configured a different increase at 100%.
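
For anyone else hitting this, here is a rough sketch of how I would expect a threshold scale like that to be looked up, picking the largest key at or below the observed utilization. The helper is made up, and whatever the real code does above the top key (152 here) is exactly the part that needs documenting:

```python
def scale_increase_pct(scale, consumed_pct):
    """Pick the increase for the largest scale key at or below the observed
    utilization. Made-up helper to illustrate the config above; behaviour
    above the top key may differ in the actual code."""
    matching = [pct for pct in scale if pct <= consumed_pct]
    if not matching:
        return None  # below the lowest key, no scale entry applies
    return scale[max(matching)]


scale = {80: 7, 85: 14, 91: 22, 97: 30, 104: 39, 111: 48,
         118: 58, 126: 68, 134: 79, 143: 91, 152: 103}
print(scale_increase_pct(scale, 90))   # -> 14 (85 is the largest key <= 90)
print(scale_increase_pct(scale, 400))  # -> 103, far less than the increase I actually saw
```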
