
google_pubsub_subscription TF changes at every plan #18016

Open
apenen opened this issue May 3, 2024 · 10 comments

Comments

@apenen

apenen commented May 3, 2024

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to a user, that user is claiming responsibility for the issue.
  • Customers working with a Google Technical Account Manager or Customer Engineer can ask them to reach out internally to expedite investigation and resolution of this issue.

Terraform Version & Provider Version(s)

Terraform v1.5.7

Affected Resource(s)

google_pubsub_subscription

Terraform Configuration

resource "google_pubsub_subscription" "pull_subscriptions" {
  for_each = var.create_subscriptions ? { for i in var.pull_subscriptions : i.name => i } : {}

  name    = each.value.name
  topic   = var.create_topic ? google_pubsub_topic.topic[0].name : var.topic
  project = var.project_id
  labels  = var.subscription_labels
  enable_exactly_once_delivery = lookup(
    each.value,
    "enable_exactly_once_delivery",
    null,
  )
  ack_deadline_seconds = lookup(
    each.value,
    "ack_deadline_seconds",
    local.default_ack_deadline_seconds,
  )
  message_retention_duration = lookup(
    each.value,
    "message_retention_duration",
    null,
  )
  retain_acked_messages = lookup(
    each.value,
    "retain_acked_messages",
    null,
  )
  filter = lookup(
    each.value,
    "filter",
    null,
  )
  enable_message_ordering = lookup(
    each.value,
    "enable_message_ordering",
    null,
  )
  dynamic "expiration_policy" {
    // check if the 'expiration_policy' key exists, if yes, return a list containing it.
    for_each = contains(keys(each.value), "expiration_policy") ? [each.value.expiration_policy] : []
    content {
      ttl = expiration_policy.value
    }
  }

  dynamic "dead_letter_policy" {
    for_each = (lookup(each.value, "dead_letter_topic", "") != "") ? [each.value.dead_letter_topic] : []
    content {
      dead_letter_topic     = lookup(each.value, "dead_letter_topic", "")
      max_delivery_attempts = lookup(each.value, "max_delivery_attempts", "5")
    }
  }

  dynamic "retry_policy" {
    for_each = (lookup(each.value, "maximum_backoff", "") != "") ? [each.value.maximum_backoff] : []
    content {
      maximum_backoff = lookup(each.value, "maximum_backoff", "")
      minimum_backoff = lookup(each.value, "minimum_backoff", "")
    }
  }

  depends_on = [
    google_pubsub_topic.topic,
  ]
}

Debug Output

No response

Expected Behavior

The google_pubsub_subscription resource should be fully idempotent.
Running terraform plan after the first apply should report: No changes. Your infrastructure matches the configuration.

Actual Behavior

When the optional retry_policy block is not specified, Terraform wants to change it on every plan:

    # (10 unchanged attributes hidden)

  + retry_policy {}

    # (1 unchanged block hidden)

Steps to reproduce

Create a google_pubsub_subscription without specifying the retry_policy block, then:

  1. terraform apply once
  2. terraform plan as much as you want after that

Important Factoids

This diff also triggers changes in the IAM bindings, since the subscription is referenced in replace_triggered_by.

References

No response

b/341370938

@apenen apenen added the bug label May 3, 2024
@github-actions github-actions bot added forward/review In review; remove label to forward service/pubsub labels May 3, 2024
@ggtisc
Collaborator

ggtisc commented May 10, 2024

Hi @apenen, this scenario was replicated a considerable number of times, always with the same message:

No changes. Your infrastructure matches the configuration.

Also, as the documentation describes:

If you don't set retry_policy, it is applied behind the scenes, and as the message you are getting shows, it is the retry_policy that is being provisioned. So according to the official documentation this is normal behavior.

@ggtisc ggtisc self-assigned this May 10, 2024
@apenen
Author

apenen commented May 10, 2024

The problem appears when we set maximum_backoff to null. We will default the value to an empty string in our vars. Thanks.
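
For anyone using the same pattern, a minimal sketch of that workaround, assuming the module takes pull_subscriptions as a list of objects (the variable shape below is illustrative, not the module's exact definition): defaulting maximum_backoff to "" instead of null keeps the module's dynamic retry_policy block from being rendered at all.

variable "pull_subscriptions" {
  type = list(object({
    name            = string
    # Defaulting to "" (not null) means the module's condition
    # lookup(each.value, "maximum_backoff", "") != "" stays false,
    # so no empty retry_policy block is generated.
    maximum_backoff = optional(string, "")
    minimum_backoff = optional(string, "")
  }))
  default = []
}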

@melinath
Collaborator

b/339860576

@melinath melinath removed the forward/review In review; remove label to forward label May 10, 2024
@melinath
Collaborator

@ggtisc this looks like a classic permadiff, just want to double-check that you weren't able to reproduce it?

@melinath melinath added the forward/review In review; remove label to forward label May 10, 2024
@ggtisc
Collaborator

ggtisc commented May 13, 2024

@ggtisc this looks like a classic permadiff, just want to double-check that you weren't able to reproduce it?

To be more precise, the provided code and configuration were applied successfully and the message in the console was: No changes. Your infrastructure matches the configuration.

@apenen
Author

apenen commented May 14, 2024

We are using the code from this module: https://github.com/terraform-google-modules/terraform-google-pubsub

When you set maximum_backoff to null, Terraform always applies empty changes, as I mentioned earlier.

@ggtisc
Collaborator

ggtisc commented May 16, 2024

Reviewing this in depth, these kinds of scenarios are detected as a permadiff, as @melinath commented. However, if you are managing the value of maximum_backoff with a variable, it is normal to get updates each time the variable changes its value.

Just to confirm again: the scenario was replicated with all the configurations, code, and versions provided, even waiting a long time before running terraform plan and terraform apply. The value of maximum_backoff was also set to null.

@melinath
Collaborator

@trodge was able to reproduce this permadiff by using an empty retry_policy block. For example,

resource "google_pubsub_topic" "example" {
  name = "example-topic"
}

resource "google_pubsub_subscription" "example" {
  name  = "example-subscription"
  topic = google_pubsub_topic.example.id

  labels = {
    foo = "bar"
  }

  # 20 minutes
  message_retention_duration = "1200s"
  retain_acked_messages      = true

  ack_deadline_seconds = 20

  expiration_policy {
    ttl = "300000.5s"
  }
  retry_policy {
  }

  enable_message_ordering    = false
} 

The dynamic block is likely producing an empty entry for some reason.
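
A plausible mechanism, sketched from the module code in the original report (not a confirmed diagnosis): lookup() only falls back to its default when the key is absent, so a maximum_backoff that is present but null makes the condition null != "" evaluate to true, and the dynamic block renders a retry_policy whose attributes are all null, effectively the empty block above.

dynamic "retry_policy" {
  # With maximum_backoff = null, lookup returns the stored null (not ""),
  # null != "" is true, and the block is generated with null values.
  for_each = (lookup(each.value, "maximum_backoff", "") != "") ? [each.value.maximum_backoff] : []
  content {
    maximum_backoff = lookup(each.value, "maximum_backoff", "") # null here
    minimum_backoff = lookup(each.value, "minimum_backoff", "") # null or unset
  }
}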

@melinath melinath removed the forward/review In review; remove label to forward label May 17, 2024
@ggtisc
Collaborator

ggtisc commented May 17, 2024

What do you think @melinath? I tried to reproduce this issue with this initial config for retry_policy:

retry_policy {
    maximum_backoff = "15s"
    minimum_backoff = "10s"
}

Then executed a terraform plan and terraform apply with this change:

retry_policy {
    maximum_backoff = null
    minimum_backoff = "10s"
}

Finally as you suggested I executed new tests with this config:

retry_policy {}

But I always got the same result as in the previous replications... Did you get a different result that we could forward?

@melinath
Collaborator

melinath commented May 17, 2024

@ggtisc maximum_backoff and minimum_backoff are both default_from_api (aka Optional + Computed), which means that if they are unset in the config, the provider will continue to send the last value returned from the API. So since you set them the first time, they continued to be "set" for the subsequent tests. If you delete the retry_policy block, apply, and then add it back, you will be able to see the permadiff behavior.
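
A minimal reproduction sketch based on that description (resource names are illustrative): apply the subscription once without retry_policy, then add an empty block and re-plan.

resource "google_pubsub_topic" "repro" {
  name = "repro-topic"
}

resource "google_pubsub_subscription" "repro" {
  name  = "repro-subscription"
  topic = google_pubsub_topic.repro.id

  ack_deadline_seconds = 20

  # Step 1: apply with no retry_policy block.
  # Step 2: uncomment the empty block below and run terraform plan;
  #         the plan is expected to keep showing "+ retry_policy {}"
  #         because maximum_backoff/minimum_backoff are Optional + Computed.
  # retry_policy {}
}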
