
Getting rate limit from Split API #106

Closed
bulju opened this issue Oct 17, 2022 · 5 comments · Fixed by #111
bulju commented Oct 17, 2022

Hi there,

Terraform Version

Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade because your issue may have already been fixed.

Split Provider Version

Run terraform -v to show core and any provider versions. A sample output could be:

Terraform v1.3.2
+ provider.split v0.6.2

Affected Resource(s)

Please list the resources as a list, for example:

  • split_environment
  • split_user
  • we've tested these two kinds of resources, but we suspect the same issue affects all resources

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

resource "split_environment" "this" {
  for_each     = var.environments
  name         = each.value.name
  production   = each.value.production
  workspace_id = each.value.workspace
}
environments = {
  1 = {
    name       = "Production"
    production = "false"
    workspace  = "REDACTED"
  },

  2 = {
    name       = "QA"
    production = "false"
    workspace  = "REDACTED"
  },

  3 = {
    name       = "Staging"
    production = "false"
    workspace  = "REDACTED"
  },

  4 = {
    name       = "Test"
    production = "false"
    workspace  = "REDACTED"
  },

  5 = {
    name       = "staging-mock"
    production = "false"
    workspace  = "REDACTED"
  },

  6 = {
    name       = "staging"
    production = "false"
    workspace  = "REDACTED"
  },

  7 = {
    name       = "Prod"
    production = "true"
    workspace  = "REDACTED"
  },

  8 = {
    name       = "Staging"
    production = "false"
    workspace  = "REDACTED"
  },

  9 = {
    name       = "Builds"
    production = "true"
    workspace  = "REDACTED"
  },

  10 = {
    name       = "Builds"
    production = "false"
    workspace  = "REDACTED"
  },

  11 = {
    name       = "Prod- Auto"
    production = "true"
    workspace  = "REDACTED"
  },

  12 = {
    name       = "Stg- Auto"
    production = "false"
    workspace  = "REDACTED"
  },

  13 = {
    name       = "Prod"
    production = "true"
    workspace  = "REDACTED"
  },

  14 = {
    name       = "Staging"
    production = "false"
    workspace  = "REDACTED"
  },

  15 = {
    name       = "Prod"
    production = "true"
    workspace  = "REDACTED"
  },

  16 = {
    name       = "Stg"
    production = "false"
    workspace  = "REDACTED"
  },

  17 = {
    name       = "Prod"
    production = "true"
    workspace  = "REDACTED"
  },

  18 = {
    name       = "Stg"
    production = "false"
    workspace  = "REDACTED"
  },

  19 = {
    name       = "appstore"
    production = "true"
    workspace  = "REDACTED"
  },

  20 = {
    name       = "dev"
    production = "false"
    workspace  = "REDACTED"
  },

  21 = {
    name       = "staging"
    production = "false"
    workspace  = "REDACTED"
  },
}


Debug Output

Please provide a link to a GitHub Gist containing the complete debug output: https://www.terraform.io/docs/internals/debugging.html. Please do NOT paste the debug output in the issue; just paste a link to the Gist, and MAKE SURE to mask any sensitive values.

TBD

Panic Output

If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.

Expected Behavior

What should have happened?

Split resources should be created

Actual Behavior

What actually happened?

We hit the rate limit and received a 429 from Split.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

Run terraform apply with more than 20 resources (this number is not precise yet, as the provider doesn't surface the actual limits)

bulju commented Oct 18, 2022

I'll work on this and raise a PR @davidji99

davidji99 commented Oct 19, 2022

Hmm, this is interesting. 20 resources at the default parallelization of 10 for terraform seems quite low for a rate limit.

I'm curious, what happens if you set the -parallelism flag to, say, 5 (terraform apply -parallelism=5)?

In any event, let me raise this issue to my Split contacts.

bulju commented Oct 19, 2022

I did try adding -parallelism=1 and the issue persists.

Based on their documentation, when you receive an HTTP 429 (the rate-limit response), the response headers indicate how long you should wait before the quota resets. We could use that in the client to implement retry logic.

Here there is another provider which had the same issue: okta/terraform-provider-okta#186

Let me know what your contacts say about this issue.

I'm planning to work on this by the end of this week to have a PR, and then we can decide whether to merge it :)

bulju commented Oct 26, 2022

Hi @davidji99, I just created a PR -> #111
Let me know if you like the philosophy that I took and if you believe that something is missing, I can rewrite it if it's needed!
Since I don't have a sandbox environment, I wasn't able to test all of the functionality against the API; I just ran a terraform plan for environments and it's working now.
However, this only covers the GET use case, not POST, PATCH, etc.

jramosf commented Oct 26, 2022

The HTTP GET use case should be good enough, since the rate limiting happens when the state file is refreshed during the plan phase.

Only when updating or recreating a very large Split environment with the provider might we hit the limit, if the requests happen in parallel.
