
import should use configured provider #17139

Closed
gtmtech opened this issue Jan 18, 2018 · 20 comments

Comments

@gtmtech commented Jan 18, 2018

Terraform 0.11.1

The TL;DR of the below is that Terraform does not appear to use providers and regions properly when passing providers through to child and grandchild modules, at least when using terraform import.

Background: I am getting this error on terraform import:

* module.scaffold1.module.bucketa.aws_s3_bucket.ha2 (import id: xxx.testbucket): import module.scaffold1.bucketa.aws_s3_bucket.ha2 (id: xxx.testbucket): Error importing AWS S3 bucket policy: BucketRegionError: incorrect region, the bucket is not in 'eu-west-1' region
    status code: 301, request id: , host id:

Context:

As per documented best practice, I am defining my providers at the top level of my project and cascading them down into modules. Here is what I do in a nutshell:

resources/provider_aws.default.eu-central-1.tf

provider "aws" {
    version = "1.6.0"
    alias = "default_eu-central-1"
    region = "eu-central-1"
    assume_role {
        role_arn = "arn:aws:iam::xxxxxxxxxxxx:role/my_terraform_role"
    }
}

resources/module_scaffold1.tf

module "scaffold1" {
    source = "../modules/scaffold1"
    providers = {
        "aws" = "aws.default_eu-west-1"
        "aws.ha1" = "aws.default_eu-west-1"
        "aws.ha2" = "aws.default_eu-central-1"
    }
}

modules/scaffold1/providers.tf

provider "aws" {
}

provider "aws" {
    alias = "ha1"
}

provider "aws" {
    alias = "ha2"
}

modules/scaffold1/bucketa.tf

module "bucketa" {
    source = "git::ssh://git@bitbucket.org/..../...."
    bucket_ha_region = "eu-central-1"
    
    providers = {
        "aws.mha1" = "aws.ha1"
        "aws.mha2" = "aws.ha2"
    }
}

Then in the git repository I have this:

./providers.tf

provider "aws" {
    alias = "mha1"
}

provider "aws" {
    alias = "mha2"
}

./ha2.aws_s3_bucket.tf

resource "aws_s3_bucket" "ha2" {
    bucket = "xxx.testbucket"
    provider = "aws.mha2"
    region = "${var.bucket_ha_region}"
}

./ha2.aws_s3_bucket_policy.tf

resource "aws_s3_bucket_policy" "ha2" {
    bucket = "${aws_s3_bucket.ha2.id}"
    provider = "aws.mha2"
    policy = "${data.template_file.ha2.rendered}"
}

./variables.tf

variable "bucket_ha_region"      {}

Piecing all that together from end to beginning:

aws_s3_bucket (ha2):
- is in region var.bucket_ha_region
- supplied to the module as eu-central-1
- uses provider aws.mha2
- which is aliased to the module from aws.ha2 in module scaffold1
- which is aliased to the module from aws.default_eu-central-1 in the top-level project
- which is defined in the top-level project with region eu-central-1

aws_s3_bucket_policy (ha2):
- has no explicit region, because the resource does not support it
- uses provider aws.mha2
- which via the same mechanism means it uses a provider with region eu-central-1.

So, as far as I can see, Terraform should not be complaining that it cannot import the policy because the bucket does not exist in eu-west-1. The bucket does exist, and it exists in eu-central-1. Terraform should know that, and should be trying to read the policy using the corresponding provider in eu-central-1.

@gtmtech (Author) commented Jan 18, 2018

On a side note, knowing which provider is being used to manage a resource is very hard to debug in Terraform. In my view the provider being used should be output as part of the plan, so that you can clearly see which provider alias is managing each resource. It seems to be missing, and you are left to wade through huge reams of TF_LOG output to work out which of many aliased providers it's using. When things go wrong (as seemingly above), it takes ages even to put a bug report together.

@jbardin (Member) commented Jan 18, 2018

Hi @gtmtech,

Yes, it can be confusing when there are a number of providers in play, but at least it's now deterministic which ones are used ;) I like the idea of somehow displaying which providers are being used by which resources. For now, once you've applied you can dig into the state file and see the provider field under each resource.

An option for plan is to output the graph and trace the resource down to the provider it depends on.
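For anyone following along, digging the provider out of the state file as suggested above can be scripted; this is a sketch (not a command from the thread) that assumes jq is installed and a 0.12+-format JSON state, where each resource records its provider:

```shell
# Sketch: list the provider recorded for each resource in the current state.
# Assumes jq and a Terraform 0.12+ state format; older state files nest
# resources under "modules" instead.
terraform state pull | jq -r '.resources[] | "\(.type).\(.name): \(.provider)"'
```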

@gtmtech (Author) commented Jan 18, 2018

@jbardin I think it's import that is broken. I see import and plan contacting different AWS region endpoints for exactly the same config.

@jbardin (Member) commented Jan 18, 2018

IIRC, import doesn't actually use the resource configuration at all. The import command has a -provider option so that you can specify which provider to use exactly.

I'll have to refresh my memory about the import internals, but now that providers are less magic we may be able to check the config to see if one is declared in the config and use that without specifying on the command line.

@gtmtech (Author) commented Jan 18, 2018

It is an explicit option, although the help text says it defaults to the normal provider prefix of the resource (in this case it would appear it doesn't do that):

  -provider=provider      Specific provider to use for import. This is used for
                          specifying aliases, such as "aws.eu". Defaults to the
                          normal provider prefix of the resource being imported.
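Applied to the report above, an explicit invocation would look something like this. This is a sketch, not a command anyone in the thread posted verbatim; the resource address and bucket ID are taken from the error message earlier, and the provider alias is the root-level one defined in resources/provider_aws.default.eu-central-1.tf:

```shell
terraform import \
  -provider=aws.default_eu-central-1 \
  module.scaffold1.module.bucketa.aws_s3_bucket.ha2 \
  xxx.testbucket
```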

@jbardin (Member) commented Jan 18, 2018

@gtmtech, that means it's going to look for an unaliased "aws" provider, which in your example has no region configuration (or it would assume one with no configuration anyway). Would the default region for your credentials match what you're seeing here?

@gtmtech (Author) commented Jan 18, 2018

Right, ok, let me try explicitly. Yeah, I thought that import would use the provider in the resource (especially since import now requires the resource to exist in code before allowing an import of it; I thought that was why that functionality was added, so it could automatically detect which provider to use!)

@jbardin (Member) commented Jan 18, 2018

Yes, you're right that it is required now, so that does seem counterintuitive. The requirement was put in place so that we know which provider plugins are required to begin with. I'll see if we can make use of an existing resource configuration to infer the correct provider.

@jbardin changed the title from "Cascading providers down to modules ends up with wrong behaviour" to "import should use configured provider" on Jan 18, 2018
@gtmtech (Author) commented Jan 18, 2018

Thanks, your workaround worked btw :) But yeah, it would make sense to enhance the functionality of import as per this ticket.

@apparentlymart (Contributor) commented Jan 18, 2018

In principle we could have terraform import look for the provider attribute in configuration and use it, since indeed we are currently requiring the resource block to exist before import anyway, both to allow us to detect the need for the provider (as @jbardin mentioned) and as a way to catch typos in the resource address given on the command line.

We'd kept the configuration interaction minimal here because in the long run we intend to extend the import process to generate configuration rather than require the user to hand-write it. Since that will probably come as part of a significant revamp of how the import functionality works (supporting batch import of many resources with dependencies, etc.), I think it's okay to depend on the provider argument for now and expect that we'll find a different way to solve this once we get to the import improvements work.

In that case, I suppose we could remove that -provider command line option altogether, since it'll become redundant. (Though that would require us to wait on this until a major release, so we may wish to just deprecate it first so that this could be addressed sooner, if time allows.)

@arsdehnel (Contributor) commented

I am having a problem where the -provider option doesn't appear to actually be used in my import command.

My provider file:

// provider.tf
provider "aws" {
  region  = "us-west-2"
  alias   = "us-west-2"
  profile = "app-creds"
}

The resource I'm trying to import:

// database.tf
resource "aws_secretsmanager_secret" "relational_db_secret" {
  provider = "aws.us-west-2"
  name     = "db/app-name/environment"
}

Command:

terraform import -provider=aws.us-west-2 aws_secretsmanager_secret.relational_db_secret <secret-arn>

Then Terraform responds in a weird way where it seems to do the import but also asks me for my region at the same time. The output below is what I see, and then it hangs:

provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: aws_secretsmanager_secret.relational_db_secret: Importing from ID "arn:aws:secretsmanager:us-west-2:1234567890:secret:db/app-name/env-471Vwu"...
aws_secretsmanager_secret.relational_db_secret: Import complete!
  Imported aws_secretsmanager_secret (ID: arn:aws:secretsmanager:us-west-2:1234567890:secret:db/app-name/environment-471Vwu)
aws_secretsmanager_secret.relational_db_secret: Refreshing state... (ID: arn:aws:secretsmanager:us-west-2:1213456...cret:db/app-name/environment-471Vwu)

Not sure what to do for a workaround even.

@rikostave1234 commented

We ran into the same problem today. We had an unaliased provider and an aliased provider. The resource we wanted to import had the aliased provider configured. The import command tried to use the unaliased provider as explained by other users above.

We would appreciate it if this behaviour could be changed.

@jurajseffer commented

I find this unexpected behaviour. A message saying "Terraform is using provider X but your resource is configured to use provider Z; use the -provider option to specify a custom provider." would help users realise what's going on.

@phyber commented Jan 11, 2020

This is definitely a frustrating issue: I'm getting failures while trying to import the OrganizationAccountAccessRole for a child account into my state.

Terraform:

  • Ignores the provider in the resource
  • Ignores the -provider argument
  • Ignores both of them being supplied at the same time

Is there actually any way to perform this operation at the moment?

Edit: In my case, I made this work as follows: my provider block was referring to a data.aws_arn.role_name.arn to get the role-assumption ARN. I replaced the reference with the plaintext role ARN arn:aws:iam::<account_id>:role/OrganizationAccountAccessRole and the import sprang to life.

@mrtristan commented

I've been bumping up against this issue repeatedly recently and managed to figure out (not sure if it's mentioned here or in one of the duplicate tickets) that if there are no providers defined in the modules and they're all in the top-most code, the import will work.

@davehowell commented

The docs for terraform import say that the -provider option is deprecated; does that mean this has been fixed as mentioned above?

I have had this problem using aliased providers e.g. Snowflake-Labs/terraform-provider-snowflake#149

The only workaround I found was to change the main provider to have the same properties as the aliased provider, run the import, then change my provider config back.
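Sketched with placeholder values (the region and profile here are illustrative, not from any config in this thread), that temporary edit amounts to:

```hcl
# Temporarily make the default (unaliased) provider match the aliased one,
# run terraform import, then revert this block to its original contents.
provider "aws" {
  region  = "eu-central-1"   # copied from the aliased provider block
  profile = "target-creds"   # copied from the aliased provider block
}
```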

@greghensley commented

As of v0.12.26, Terraform will use the provider configured for a resource for import of that resource only if that provider has no dependencies. Based on personal observations, Terraform appears to fall back to the default/unaliased provider if the configured one includes any references to other resources or data sources. It may, in fact, be the case that terraform import is simply resolving all references inside providers to null or the empty string -- the impact is the same.

In my own use of Terraform to manage multiple AWS accounts, I connect the default aws provider to the AWS root account of my org, which is used to create child accounts. An aliased provider is defined for each child account based on the outputs of that account's resource. Only when importing resources does Terraform throw errors suggesting that it was attempting to use the default, unaliased aws provider, even though the target resource was explicitly defined to use a different provider. Terraform will also complain about other providers (which are not involved in the import) having "invalid" values for required fields -- specifically, only those (required) fields which depend on the outputs of other resources.

There are two workarounds I've found:

  1. As @davehowell observed previously, the default provider can be temporarily reconfigured to match the account of the resource being imported.
  2. Alternatively, you can define all providers using only immediate/literal values rather than outputs of other resources. This allows terraform import to behave as expected for all resources, but requires duplication of information, makes refactoring more burdensome than it should be, and (in my case described above) completely prevents re-use of the Terraform config to replicate the environment with different provider details (account IDs, etc).

@fransflippo commented Dec 8, 2020

Ran into this issue today. My default aws provider config has region eu-west-1, but I have a second one, aliased as "main", for region us-east-1, because WAFv2 ACL global (scope = CLOUDFRONT) resources are only supported in us-east-1. My WAFv2 ACL resource has a provider section indicating it wants the aws.main provider, but terraform import seems to give it the default one, because AWS returns an error that scope CLOUDFRONT does not exist (which is the case for regions other than us-east-1).
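For context, the intended shape of that config is roughly the following. This is a sketch: the resource name, ACL name, and metric settings are illustrative placeholders, not @fransflippo's actual code.

```hcl
provider "aws" {
  alias  = "main"
  region = "us-east-1"  # CLOUDFRONT-scoped WAFv2 resources must live in us-east-1
}

resource "aws_wafv2_web_acl" "cdn" {
  provider = aws.main
  name     = "cdn-acl"
  scope    = "CLOUDFRONT"

  default_action {
    allow {}
  }

  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = "cdn-acl"
    sampled_requests_enabled   = false
  }
}
```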

Why is this flagged as “enhancement”? It seems pretty clearly to be a bug, i.e. Terraform doesn’t behave as it is documented to.

UPDATE: So after applying @davehowell 's workaround, I still got the AWS error "The scope is not valid., field: SCOPE_VALUE, parameter: CLOUDFRONT".
Turns out I had added provider = aws.main to a different resource than the one I was importing. 🤦
Added it to the right one, and even without the workaround, it works.
So seems to be fixed on Terraform v0.14.0.

@jbardin (Member) commented Mar 31, 2021

This issue seems to have slipped through the cracks when we fixed this in the recent releases. Closing as working in the current releases.

@jbardin closed this as completed on Mar 31, 2021
@ghost commented May 1, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked as resolved and limited conversation to collaborators on May 1, 2021