Description:
When trying to create a group of sentry_issue_alert resources, we're hitting an error where Slack rate limits the number of calls we can make to its API. As a result, any apply job that creates more than a handful of alerts fails, and the failure also leaves the Terraform state corrupted.
Example
We have ~281 alerts to be created, most of which look very similar to the following definition:
"Sentry to Slack - channel - event seen - every 30 minutes" = {
frequency =30
environment =null
actions = [{
id ="sentry.integrations.slack.notify_action.SlackNotifyServiceAction",
channel ="#sentry-my-team",
workspace = local.my_team_workspace_id,
tags ="environment,level,user"
}]
conditions = [{
id ="sentry.rules.conditions.first_seen_event.FirstSeenEventCondition"
}]
filters = [local.filter_not_devops]
},
})
It would be good if the Terraform provider had some way of understanding this rate limit and backing off when these errors start to occur. As it stands, applying the above Terraform produces the following error:
│ Error: PUT https://app.getsentry.com/api/0/projects/my-project/my-app/rules/1234567/: 400 map[actions:[Slack: Requests to Slack exceeded the rate limit. Please try again later.]]
│
│ with module.my_app.sentry_issue_alert.alert_rule["Sentry to Slack - channel - event seen - every 30 minutes"],
│ on ../module/main.tf line 29, in resource "sentry_issue_alert" "alert_rule":
│ 29: resource "sentry_issue_alert" "alert_rule" {
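One possible shape for that backoff, purely as a hedged sketch (this is not the provider's actual code; retryOnSlackRateLimit and maxAttempts are made-up names, and the check simply matches the error text Sentry returns when Slack throttles it):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"strings"
	"time"
)

// retryOnSlackRateLimit retries fn with exponential backoff plus jitter
// whenever the returned error looks like Sentry's proxied Slack rate-limit
// response. Any other error is returned immediately.
func retryOnSlackRateLimit(maxAttempts int, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if !strings.Contains(err.Error(), "exceeded the rate limit") {
			return err // not a rate-limit failure; don't retry
		}
		// Exponential backoff: 1s, 2s, 4s, ... capped at 30s, plus jitter.
		delay := time.Duration(1<<uint(attempt)) * time.Second
		if delay > 30*time.Second {
			delay = 30 * time.Second
		}
		delay += time.Duration(rand.Intn(1000)) * time.Millisecond
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}

func main() {
	calls := 0
	err := retryOnSlackRateLimit(5, func() error {
		calls++
		if calls < 3 {
			// Simulate the 400 Sentry returns while Slack is throttling it.
			return errors.New("400 map[actions:[Slack: Requests to Slack exceeded the rate limit.]]")
		}
		return nil
	})
	fmt.Println("result:", err, "after", calls, "calls")
}

As a stopgap on our side, lowering Terraform's concurrency (e.g. terraform apply -parallelism=1) should at least slow down how quickly the provider hits the limit, but a retry/backoff inside the provider would be the proper fix.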