Feature/issue 158 provisioning alerts when consumed capacity exceeds throughput alarm threshold #174

Merged
merged 10 commits into sebdah:master on Jun 11, 2014

Conversation

@Jofen
Contributor

Jofen commented May 20, 2014

As discussed here: #158.

@@ -61,17 +63,19 @@
'sns_message_types': []
},
'gsi': {
+ 'reads-alarm-threshold': 90,
+ 'writes-alarm-threshold': 90,

@sebdah

sebdah May 21, 2014

Owner

Note to self:

  • Look at whether this should be defaulted to 0 or not

@Jofen

Jofen May 21, 2014

Contributor

I guess the default can be 0; if it's 0, the alarm will never get triggered.
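For illustration only (the function and its name below are hypothetical, not part of this patch), a 0 default can act as a "feature disabled" switch as long as the alarm check guards on a non-zero threshold:

```python
def should_send_throughput_alarm(consumed_units, provisioned_units, alarm_threshold):
    """Hypothetical sketch: treat a 0 threshold as "alarms disabled".

    alarm_threshold is a percentage of the provisioned capacity.
    """
    if not alarm_threshold:  # 0 (the proposed default) never triggers anything
        return False

    utilization = 100.0 * consumed_units / provisioned_units
    return utilization >= alarm_threshold
```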

@@ -29,17 +29,19 @@
'log_config_file': None
},
'table': {
+ 'reads-alarm-threshold': 90,
+ 'writes-alarm-threshold': 90,

@sebdah

sebdah May 21, 2014

Owner

Note to self:

  • Look at whether this should be defaulted to 0 or not
'enable_reads_autoscaling': True,
'enable_writes_autoscaling': True,
'reads_lower_threshold': 30,
- 'reads_upper_threshold': 90,
+ 'reads_upper_threshold': 70,

@sebdah

sebdah May 21, 2014

Owner

Note to self:

  • Look at why this default is changed

@Jofen

Jofen May 21, 2014

Contributor

The reason I changed this default is that the alarm threshold should normally be higher than the upper threshold.
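As a worked example of that ordering (the numbers are taken from the defaults in this diff; the check itself is just a sketch, not project code): with an upper scaling threshold of 70 and an alarm threshold of 90, consuming 80% of provisioned capacity triggers a scale-up but no alert, and the alarm only fires if consumption keeps climbing past 90%, i.e. when scaling has not caught up in time:

```python
reads_upper_threshold = 70   # scale up above this utilization (percent)
reads_alarm_threshold = 90   # alert above this utilization (percent)

utilization = 80.0           # e.g. 80 consumed units on 100 provisioned units

print(utilization > reads_upper_threshold)   # True  -> dynamic-dynamodb scales up
print(utilization > reads_alarm_threshold)   # False -> no alarm yet
```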

'throttled_reads_upper_threshold': 0,
'increase_reads_with': 50,
'decrease_reads_with': 50,
'increase_reads_unit': 'percent',
'decrease_reads_unit': 'percent',
'writes_lower_threshold': 30,
- 'writes_upper_threshold': 90,
+ 'writes_upper_threshold': 70,

@sebdah

sebdah May 21, 2014

Owner

Note to self:

  • Look at why this default is changed
@@ -44,6 +44,8 @@ Important note: The table name is treated as a regular expression. That means th
========================================== ==== ==========================================
Option Type Comment
========================================== ==== ==========================================
+ reads-alarm-threshold int How many percent of the reads capacity should be used before triggering the throughput alarm?
+ writes-alarm-threshold int How many percent of the writes capacity should be used before triggering the throughput alarm?

@sebdah

sebdah May 21, 2014

Owner

I think that we should implement those as:

  • reads-upper-alarm-threshold
  • reads-lower-alarm-threshold
  • writes-upper-alarm-threshold
  • writes-lower-alarm-threshold

@Jofen

Jofen May 21, 2014

Contributor

Why lower-alarm-threshold? I would only be interested in getting an alert when the throughput exceeds my threshold, like the basic throughput alarm.

@sebdah

Owner

sebdah commented May 21, 2014

Thank you @Jofen for a very well written pull request! It's a pleasure to see it :).

I have a few comments on the request that I would like to get sorted before merging and releasing:

  1. I think we should implement this functionality for both reads/writes higher and lower than certain thresholds. See my diff comment.
  2. I don't think that we need to lower the default threshold (see comment 1 and comment 2). This should be changed back to 90 to ensure backwards compatibility. The example config etc. also needs to be updated accordingly.
  3. As this feature is disabled by default, we should change the default value to 0 (see comment 1 and comment 2).
@Jofen

Contributor

Jofen commented May 21, 2014

Hello,

Agree with #2 and #3. For #1, what exactly is the purpose of a lower alarm threshold?

My reason for adding this feature is as follows:

  1. A verification that dynamic-dynamodb is doing a great job, so that I never exceed the alarm threshold.
  2. dynamic-dynamodb uses CloudWatch to monitor tables, and DynamoDB only sends data to CloudWatch every 5 minutes. Also, a change to table provisioning can take up to a couple of minutes to apply. If there is a sudden traffic spike, dynamic-dynamodb may not be able to scale up fast enough to avoid throttled requests; in that case the user can get alerted and take action accordingly.

Jiaofen

@sebdah

Owner

sebdah commented May 21, 2014

Thanks for the additional commit, it looks good.

My reasoning behind 1) is about being coherent with your reason 1) for the feature. Dynamic DynamoDB has a lower and a higher threshold and should scale if the current utilization is outside those boundaries, so having an alarm for the higher case but not for the lower seems confusing. Although I do agree that the use case for the high alarm is more valuable.
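A minimal sketch of that symmetry, using the option names proposed above (the wiring and the values are illustrative, not the project's actual code): the scaling thresholds define the band dynamic-dynamodb tries to keep utilization inside, and the alarm thresholds define a wider band that should only be crossed when scaling cannot keep up:

```python
def check_reads(utilization, conf):
    """Illustrative only: the scaling band sits inside the alarm band."""
    if utilization > conf['reads-upper-threshold']:
        print('scale reads up')
    elif utilization < conf['reads-lower-threshold']:
        print('scale reads down')

    # 0 means the alarm is disabled (the default discussed above).
    if conf['reads-upper-alarm-threshold'] and utilization > conf['reads-upper-alarm-threshold']:
        print('alert: read consumption above upper alarm threshold')
    if conf['reads-lower-alarm-threshold'] and utilization < conf['reads-lower-alarm-threshold']:
        print('alert: read consumption below lower alarm threshold')


check_reads(95.0, {
    'reads-lower-threshold': 30,
    'reads-upper-threshold': 70,
    'reads-lower-alarm-threshold': 0,    # disabled
    'reads-upper-alarm-threshold': 90,
})
```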

@Jofen

Contributor

Jofen commented May 22, 2014

Makes sense, I will change it to include both upper-alarm-threshold and lower-alarm-threshold. :)

@Jofen

Contributor

Jofen commented Jun 6, 2014

Hello,

Is this going to be merged and released soon?

Thanks,

Jiaofen

@sebdah

Owner

sebdah commented Jun 6, 2014

Sorry, I had missed that you had completed the last things from the code review. I will take a look within the next few days and merge. I just got my second child, so my time is a bit limited at the moment.

@Jofen

Contributor

Jofen commented Jun 6, 2014

👍 No kidding! Congratulations!! :)

@sebdah sebdah modified the milestones: 1.16.x, Feature request pool Jun 8, 2014

sebdah added a commit that referenced this pull request Jun 8, 2014

sebdah added a commit that referenced this pull request Jun 8, 2014

@sebdah

Owner

sebdah commented Jun 8, 2014

This is now merged to the develop branch. I will do some more testing before making the release.

@sebdah sebdah merged commit a5ebcb8 into sebdah:master Jun 11, 2014

@sebdah

Owner

sebdah commented Jun 11, 2014

This has now been released in 1.16.0. Thank you very much for the pull request @Jofen!
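For anyone picking this up in 1.16.0, the per-table defaults discussed in this review should end up looking roughly like the sketch below: the alarm thresholds default to 0 (disabled) and the scaling thresholds keep their previous values for backwards compatibility. The exact option names and values are a reconstruction from the discussion above, so check the released configuration docs before relying on them.

```python
# Reconstruction of the per-table defaults after the review changes above
# (not copied from the released source; verify against the 1.16.0 docs).
TABLE_DEFAULTS = {
    'reads-lower-alarm-threshold': 0,    # 0 = alarm disabled
    'reads-upper-alarm-threshold': 0,
    'writes-lower-alarm-threshold': 0,
    'writes-upper-alarm-threshold': 0,
    'reads_lower_threshold': 30,         # unchanged scaling defaults
    'reads_upper_threshold': 90,
    'writes_lower_threshold': 30,
    'writes_upper_threshold': 90,
}
```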
