Faster Up-scaling Based on Consumed Read/Write #227

Closed
jusgat opened this Issue Dec 10, 2014 · 5 comments


@jusgat
jusgat commented Dec 10, 2014

I just started using Dynamic DynamoDB, and it's grand! Already I see a lot of benefit from it.

This is not an issue, but rather a feature request. I have very steep, unpredictable spikes in consumed read/write activity after long periods of very little activity. Several times now my table's provisioned reads have been at, say, 50, while consumption jumps to, say, 250. At present, Dynamic DynamoDB applies my increase-reads-with setting of 100% to raise provisioned reads to 100, but then has to wait another 5-minute cycle to see that consumption is still above the threshold before increasing by another 100%, and so on.

My question is, since Amazon no longer limits increases to 100% of the current provisioned amount, can Dynamic DynamoDB use the consumed percentage to modify the increase necessary to place the consumption under the upper threshold in a single step?

Using the numbers of my earlier example, if my table has 50 provisioned, and my consumption jumps to 250, Dynamic DynamoDB knows that consumption is at 500% and an increase of 100% won't cover it. One way to solve this is to loop through the possible increases until the right one to implement is found - 50 increase by 100% = 100; no; 100 increase by 100% = 200; no; 200 increase by 100% = 400; yes!
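The looping approach described above could be sketched like this (a hypothetical helper for illustration, not the project's actual code; the function name and signature are made up):

```python
def doubled_provisioning(current_provisioned, consumed_units, increase_percent=100):
    """Repeatedly apply the configured percentage increase until the
    provisioned amount covers current consumption."""
    provisioned = current_provisioned
    while provisioned < consumed_units:
        provisioned = int(provisioned * (1 + increase_percent / 100))
    return provisioned

# With the example numbers: 50 -> 100 -> 200 -> 400
print(doubled_provisioning(50, 250))  # 400
```

This reaches the target in one scheduling cycle instead of three, at the cost of a few loop iterations.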

With a feature like this, I would be able to keep my tables' minimum provisioned amounts very low for the periods of time they're not being used much, but then have them scale up to the correct amount in a single cycle when steep-sided plateaus of activity start.

Again, absolutely brilliant work. Thank you for your time.

@jusgat
jusgat commented Dec 10, 2014

Another way to solve this is to simply take the consumed percentage - 500% in my example - multiply that by the current provisioned amount - 50 - and divide it by the upper threshold - let's say 80% - to get the new provisioned - 313.
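That closed-form version could look like this (again an illustrative sketch with made-up names, not the project's API):

```python
import math

def single_step_provisioning(current_provisioned, consumed_percent, upper_threshold=80):
    """Scale provisioning in one step so that consumption lands just
    under the upper threshold."""
    consumed_units = current_provisioned * consumed_percent / 100
    return math.ceil(consumed_units * 100 / upper_threshold)

# 50 provisioned at 500% consumption, 80% upper threshold -> 313
print(single_step_provisioning(50, 500, 80))  # 313
```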

@sebdah sebdah added this to the Feature request pool milestone Dec 18, 2014
@sebdah sebdah assigned sebdah and unassigned sebdah Dec 18, 2014
@sebdah sebdah added a commit that referenced this issue Dec 20, 2014
@sebdah Fixed parameter order #227 080e1d6
@sebdah sebdah self-assigned this Dec 20, 2014
@sebdah
Owner
sebdah commented Dec 20, 2014

Thanks for the feature request. This is a good idea, and @pragnesh and I are currently looking into it. Testing is needed.

@sebdah sebdah modified the milestone: 1.20.x, Feature request pool Dec 22, 2014
@sebdah sebdah closed this Dec 22, 2014
@sebdah
Owner
sebdah commented Dec 22, 2014

This is now fixed and released in version 1.20.0. Thanks for the feature request @jusgat and thanks for the pull request @pragnesh!

Happy holidays!

@jusgat
jusgat commented Dec 22, 2014

Really awesome! Thanks for the good news. Can't wait to get back from vacation and start using 1.20.0.

@Sazpaimon
Contributor

Bumping this issue because I think it needs to be documented. I ran into weird behavior when I was implementing the granular upscaling, where I had the following settings:

increase-consumed-writes-scale = {80: 7, 85: 14, 91: 22, 97: 30, 104: 39, 111: 48, 118: 58, 126: 68, 134: 79, 143: 91, 152: 103}

When utilization hit something like 400%, I saw a ridiculously high throughput increase that I could not understand until I read this code. Had I known about this behavior, I would have configured a different increase for utilization at 100% and above.
