docdb.ElasticCluster resource is not behaving as expected #4273

Open
notjosse opened this issue Jul 19, 2024 · 4 comments
Labels
awaiting/bridge: The issue cannot be resolved without action in pulumi-terraform-bridge.
kind/bug: Some behavior is incorrect or out of spec

Comments


notjosse commented Jul 19, 2024

Describe what happened

The aws.docdb.ElasticCluster resource is presenting some unexpected/unwanted behavior. The issues are the following:

  1. The aws.docdb.ElasticCluster resource takes an unusually long time to create (around 20 minutes); in contrast, the aws.docdb.Cluster resource takes only about 80 seconds.
  2. During any update, Pulumi tries to replace the existing aws.docdb.ElasticCluster resource, even when none of the input fields have changed. The documentation does not mention any input field that would force replacement. This happens on every pulumi up.
  3. If you change the name of the existing aws.docdb.ElasticCluster resource in the code, Pulumi doesn't replace the resource or change its name; instead, it creates an entirely new resource as if the original didn't already exist.

Sample program

"""An AWS Python Pulumi program"""

import pulumi
import pulumi_aws as aws

default_config = pulumi.Config()
aws_config = pulumi.Config("aws")

elastic_cluster = aws.docdb.ElasticCluster(
    "elastic-cluster",
    admin_user_name="elasticadmin",
    admin_user_password="password",
    auth_type="PLAIN_TEXT",
    shard_capacity=2,
    shard_count=2,
)

docdb = aws.docdb.Cluster(
    "docdb",
    cluster_identifier="my-docdb-cluster",
    engine="docdb",
    master_username="foo",
    master_password="mustbeeightchars",
    backup_retention_period=5,
    preferred_backup_window="07:00-09:00",
    skip_final_snapshot=True,
)

pulumi.export("elastic_cluster_arn", elastic_cluster.id)
pulumi.export("elastic_cluster_endpoint", elastic_cluster.endpoint)
pulumi.export("elastic_cluster_id", elastic_cluster.id.apply(lambda arn: arn.split("/")[-1]))

Log output

No response

Affected Resource(s)

aws.docdb.ElasticCluster

Output of pulumi about

CLI
Version 3.124.0
Go Version go1.22.5
Go Compiler gc

Plugins
KIND NAME VERSION
resource aws 6.43.0
language python unknown
resource random 4.16.3

Host
OS darwin
Version 14.5
Arch arm64

Dependencies:
NAME VERSION
pip 24.1.1
pulumi_aws 6.43.0
pulumi_random 4.16.3
python-dotenv 1.0.1
setuptools 70.2.0
wheel 0.43.0

Pulumi locates its logs in /var/folders/6_/j5ng6ypd5_96pdf4b849tc6c0000gp/T/ by default

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

notjosse added the kind/bug and needs-triage labels on Jul 19, 2024
corymhall self-assigned this on Jul 22, 2024
corymhall (Contributor) commented:

@notjosse

The aws.docdb.ElasticCluster resource takes an unusually long time to create (around 20 minutes), in contrast, the aws.docdb.Cluster resource takes only about 80 seconds.

It looks like this is just how long it takes to provision this resource. I also tried provisioning one in the AWS console manually and it took just as long.

During any update, Pulumi tries to replace the existing aws.docdb.ElasticCluster resource, even when none of the input fields have changed. The documentation does not mention any input field that would force replacement. This happens on every pulumi up.

This is the diff that I see (curious whether you see anything different).

 [urn=urn:pulumi:dev::pulumi-typescript-app::pulumi:pulumi:Stack::pulumi-typescript-app-dev]
    +-aws:docdb/elasticCluster:ElasticCluster: (replace)
        [id=arn:aws:docdb-elastic:us-east-2:12345678910:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373]
        [urn=urn:pulumi:dev::pulumi-typescript-app::aws:docdb/elasticCluster:ElasticCluster::chall-cluster]
        [provider=urn:pulumi:dev::pulumi-typescript-app::pulumi:providers:aws::default_6_44_0::e8549a8a-5758-4aeb-baaf-a12fd2e2604d]
        adminUserName             : "chall"
        adminUserPassword         : [secret]
      ~ arn                       : "arn:aws:docdb-elastic:us-east-2:12345678910:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373" => output<string>
        authType                  : "PLAIN_TEXT"
      ~ endpoint                  : "chall-cluster-8ed8f01-12345678910.us-east-2.docdb-elastic.amazonaws.com" => output<string>
      ~ id                        : "arn:aws:docdb-elastic:us-east-2:616138583583:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373" => output<string>
      ~ kmsKeyId                  : "AWS_OWNED_KMS_KEY" => output<string>
      ~ name                      : "chall-cluster-8ed8f01" => "chall-cluster-f8c21fb"
      ~ preferredMaintenanceWindow: "Sun:04:05-Sun:04:35" => output<string>
        shardCapacity             : 2
        shardCount                : 2
        subnetIds                 : [
            [0]: "subnet-09f1542f52e34258d"
            [1]: "subnet-0f1175830383f6edb"
        ]
      - tagsAll                   : {}
      - vpcSecurityGroupIds       : [
      -     [0]: "sg-0e7b99ad3860e94db"
        ]
      + vpcSecurityGroupIds       : output<string>
Resources:
    +-1 to replace
    4 unchanged

Looking at the gRPC diff logs, it looks like the replacement is due to kmsKeyId:

{
  "method": "/pulumirpc.ResourceProvider/Diff",
  "request": {
    "id": "arn:aws:docdb-elastic:us-east-2:123456789123:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373",
    "urn": "urn:pulumi:dev::pulumi-typescript-app::aws:docdb/elasticCluster:ElasticCluster::chall-cluster",
    "olds": {
      "adminUserName": "chall",
      "adminUserPassword": "password",
      "arn": "arn:aws:docdb-elastic:us-east-2:123456789123:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373",
      "authType": "PLAIN_TEXT",
      "endpoint": "chall-cluster-8ed8f01-123456789123.us-east-2.docdb-elastic.amazonaws.com",
      "id": "arn:aws:docdb-elastic:us-east-2:123456789123:cluster/1a9e5cef-24a8-4cb6-81cd-58bf7a4af373",
      "kmsKeyId": "AWS_OWNED_KMS_KEY",
      "name": "chall-cluster-8ed8f01",
      "preferredMaintenanceWindow": "Sun:04:05-Sun:04:35",
      "shardCapacity": 2,
      "shardCount": 2,
      "subnetIds": [
        "subnet-09f1542f52e34258d",
        "subnet-0f1175830383f6edb"
      ],
      "tagsAll": {},
      "vpcSecurityGroupIds": [
        "sg-0e7b99ad3860e94db"
      ]
    },
    "news": {
      "adminUserName": "chall",
      "adminUserPassword": "password",
      "authType": "PLAIN_TEXT",
      "name": "chall-cluster-8ed8f01",
      "shardCapacity": 2,
      "shardCount": 2,
      "subnetIds": [
        "subnet-09f1542f52e34258d",
        "subnet-0f1175830383f6edb"
      ]
    },
    "oldInputs": {
      "adminUserName": "chall",
      "adminUserPassword": "password",
      "authType": "PLAIN_TEXT",
      "name": "chall-cluster-8ed8f01",
      "shardCapacity": 2,
      "shardCount": 2,
      "subnetIds": [
        "subnet-09f1542f52e34258d",
        "subnet-0f1175830383f6edb"
      ]
    }
  },
  "response": {
    "replaces": [
      "kmsKeyId"
    ],
    "changes": "DIFF_SOME",
    "diffs": [
      "tagsAll"
    ]
  },
  "metadata": {
    "kind": "resource",
    "mode": "client",
    "name": "aws"
  }
}

My hunch is that this one is caused by pulumi/pulumi-terraform-bridge#2171, because kmsKeyId has a PlanModifier (specifically UseStateForUnknown):
https://github.com/hashicorp/terraform-provider-aws/blob/ecb8ef62a96af9a86c8a24fa53c37ac4b4d623b1/internal/service/docdbelastic/cluster.go#L91-L98

If you change the name of the existing aws.docdb.ElasticCluster resource in the code, Pulumi doesn't replace the resource or change its name; instead, it creates an entirely new resource as if the original didn't already exist.

I think this is expected behavior. The name (first argument) is how Pulumi identifies the cluster. If you change it, Pulumi thinks the old resource disappeared and a new one appeared, and it doesn't know the two are related. You can use the alias resource option to tell Pulumi that the renamed resource should map to the old one, as sketched below.
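
Here is a minimal sketch of the alias approach in Python, assuming the cluster was originally created under the logical name "elastic-cluster" from the sample program and is being renamed; the new name "renamed-elastic-cluster" is purely illustrative:

import pulumi
import pulumi_aws as aws

# Illustrative names: "elastic-cluster" is the original logical name from the
# sample program; "renamed-elastic-cluster" is the new name being introduced.
elastic_cluster = aws.docdb.ElasticCluster(
    "renamed-elastic-cluster",
    admin_user_name="elasticadmin",
    admin_user_password="password",
    auth_type="PLAIN_TEXT",
    shard_capacity=2,
    shard_count=2,
    # The alias tells Pulumi this is the same resource previously tracked under
    # the old logical name, so it updates in place instead of create/delete.
    opts=pulumi.ResourceOptions(aliases=[pulumi.Alias(name="elastic-cluster")]),
)

Once an update has run with the alias in place, the state records the new name and the alias can usually be removed.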

corymhall removed their assignment on Jul 22, 2024
corymhall added the awaiting/bridge label and removed the needs-triage label on Jul 22, 2024

nevace commented Jul 23, 2024

I have the same issue and wondered if there's a workaround for now. I was going to use aws.docdb.ElasticCluster.get(clusterName, clusterID) to check whether the resource already exists and, if so, skip creating it again (not sure if this would work). But it seems like there could be a bug in this method: the second argument is the ID according to the docs, but it looks like AWS expects the ARN, as I get the error "Invalid arn provided".

I'm assuming it's using this API, which requires an ARN: https://docs.aws.amazon.com/documentdb/latest/developerguide/API_elastic_GetCluster.html
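
For reference, a minimal Python sketch of that workaround, assuming ElasticCluster.get does in fact need the cluster ARN as its second argument (the ARN below is a placeholder):

import pulumi
import pulumi_aws as aws

# Placeholder ARN; per the comment above, the second argument appears to need
# the full cluster ARN rather than the short cluster ID.
existing = aws.docdb.ElasticCluster.get(
    "existing-elastic-cluster",
    "arn:aws:docdb-elastic:us-east-2:123456789012:cluster/<cluster-id>",
)

pulumi.export("existing_endpoint", existing.endpoint)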

corymhall (Contributor) commented:

@notjosse I just tried reproducing issue 2 again using the latest pulumi-terraform-bridge@master, and it looks like it has been fixed. That means the next bridge release should fix the issue. We'll comment on this ticket once it has been released.

corymhall self-assigned this on Jul 24, 2024
notjosse (Author) commented:

@corymhall Thanks so much for the support and for tackling this so quickly!
