import should use configured provider #17139
On a side note, knowing which provider is being used to manage a resource is very hard to debug in Terraform. In my view the provider being used should be output as part of the plan, so that you can clearly see which provider alias is managing each resource. That information seems to be missing, and you are left to wade through huge reams of TF_LOG output to attempt to work out which of many aliased providers it's using. When things go wrong (as seemingly above), it takes ages even to put a bug report together.
Hi @gtmtech. Yes, it can be confusing when there are a number of providers in play, but at least it's now deterministic which ones are used ;) I like the idea of somehow displaying which providers are being used by which resources. For now, once you've applied, you can dig into the state file and see the provider field under each resource. An option for plan is to output the graph and trace the resource down to the provider it depends on.
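A hedged sketch of the two inspection approaches just mentioned; the alias name and the local state-file path are assumptions:

```
# After apply: each resource entry in the JSON state records which provider
# configuration manages it, e.g. "provider": "provider.aws.ha2"
# (0.11-era state format; assumes local state).
grep '"provider"' terraform.tfstate

# Before apply: render the dependency graph and trace the resource down
# to the provider node it depends on (requires Graphviz).
terraform graph | dot -Tsvg > graph.svg
```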
@jbardin I think it's import that is broken: I see import and plan contacting different AWS region endpoints for exactly the same config.
IIRC, import doesn't actually use the resource configuration at all. The import command has a `-provider` option for selecting the provider configuration to use. I'll have to refresh my memory about the import internals, but now that providers are less magic we may be able to check the config to see if one is declared there and use that without specifying it on the command line.
It is an explicit option, although the docs say it defaults to the normal provider prefix of the resource (in this case it would appear it doesn't do that).
@gtmtech, that means it's going to look for an unaliased "aws" provider, which in your example has no region configuration (or it would assume one with no configuration anyway). Would the default region for your credentials match what you're seeing here?
Right, ok, let me try explicitly. Yeah, I thought that import would use the provider in the resource, especially since import now requires the resource to exist in code before allowing an import of it. I thought this was why that functionality was added, so it could automatically detect which provider to use!
Yes, you're right that it is required now, so that does seem counterintuitive. The requirement was put in place so that we know which provider plugins are required to begin with. I'll see if we can make use of an existing resource configuration to infer the correct provider.
Thanks, your workaround worked btw :) But yeah, it would make sense to enhance the functionality of import as per this ticket.
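For readers following along: the workaround referred to here is the explicit `-provider` flag on `terraform import`, present in Terraform 0.11-era releases. A sketch with a hypothetical resource address and bucket name:

```
# Bypass the default provider lookup by naming the aliased configuration
# explicitly (address and ID below are made-up examples).
terraform import -provider=aws.ha2 aws_s3_bucket.example my-example-bucket
```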
In principle we could have `terraform import` use the provider argument from the resource's configuration. We'd made the configuration interaction minimal here because in the long run we intend to extend the import process to generate configuration rather than require the user to hand-write it, but since that will probably come as part of a significant revamp of how the import functionality works (supporting batch import of many resources with dependencies, etc.) I think it's okay to depend on the existing configuration in the meantime. In that case, I suppose we could remove the `-provider` option eventually.
I am having a problem where the import command does not use my aliased provider.

My provider file:

```hcl
// provider.tf
provider "aws" {
  region  = "us-west-2"
  alias   = "us-west-2"
  profile = "app-creds"
}
```

The resource I'm trying to import:

```hcl
// database.tf
resource "aws_secretsmanager_secret" "relational_db_secret" {
  provider = "aws.us-west-2"
  name     = "db/app-name/environment"
}
```
} Command:
Then Terraform responds in a weird way: it seems to do the import but also asks me for my region at the same time. The output below is what I see, and then it hangs:

Not sure what to do for a workaround, even.
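One possible workaround for the example above, assuming the `-provider` flag described earlier in the thread (the secret ARN is a made-up placeholder):

```
terraform import -provider=aws.us-west-2 \
  aws_secretsmanager_secret.relational_db_secret \
  arn:aws:secretsmanager:us-west-2:123456789012:secret:db/app-name/environment-AbCdEf
```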
We ran into the same problem today. We had an unaliased provider and an aliased provider, and the resource we wanted to import had the aliased provider configured. The import command tried to use the unaliased provider, as explained by other users above. We would appreciate it if this behaviour could be changed.
I find this to be unexpected behaviour. A message saying "Terraform is using X provider but your resource is configured to use Z provider; use the -provider option to specify a custom provider" would help users realise what's going on.
This is definitely a frustrating issue; I'm getting failures while trying to import the resource. Terraform:
Is there actually any way to perform this operation at the moment?

Edit: In my case, I made this work by doing the following:
I've been bumping up against this issue repeatedly recently, and managed to figure out (not sure if it's mentioned here or in one of the duplicate tickets) that if there are no providers defined in the modules and they're all in the top-most code, the import will work.
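A minimal sketch of that layout, assuming 0.12+ syntax and hypothetical names: the provider is configured only at the root, and each module receives it explicitly rather than declaring its own.

```hcl
provider "aws" {
  alias  = "uswest2"
  region = "us-west-2"
}

module "app" {
  source = "./modules/app"

  # The module declares no provider blocks of its own; it receives this one.
  providers = {
    aws = aws.uswest2
  }
}
```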
The docs for import suggest that the provider configured on the resource will be used, but I have had this problem using aliased providers, e.g. Snowflake-Labs/terraform-provider-snowflake#149. The only workaround I found was to change the main provider to have the same properties as the aliased provider, run the import, then change my provider config back.
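A sketch of that workaround, shown with the aws provider for consistency with the rest of the thread (all values are hypothetical): temporarily copy the aliased provider's settings onto the default provider, import, then revert.

```hcl
# Step 1 (temporary): make the default provider match the aliased one.
provider "aws" {
  region = "us-east-1" # copied from aws.main just for the import
}

provider "aws" {
  alias  = "main"
  region = "us-east-1"
}

# Step 2: run "terraform import ADDR ID" (it will use the default provider).
# Step 3: restore the default provider's original settings.
```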
As of v0.12.26, Terraform will use the provider configured for a resource when importing that resource only if that provider configuration has no dependencies. Based on personal observations, Terraform appears to fall back to the default/unaliased provider if the configured one includes any references to other resources or data sources. It may, in fact, be the case that any such expression in the provider configuration triggers the fallback.

In my own use of Terraform to manage multiple AWS accounts, I connect the default provider to one account and use aliased provider configurations, which reference other resources, for the remaining accounts. There are two workarounds I've found:
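A hedged illustration of that observation (all names hypothetical): import reportedly honors the first configuration below, but falls back to the default provider for the second, whose configuration references a data source.

```hcl
provider "aws" {
  alias  = "static"
  region = "us-east-1" # fully static: import reportedly honors this one
}

data "aws_ssm_parameter" "role_arn" {
  name = "/example/import-role-arn"
}

provider "aws" {
  alias  = "dynamic"
  region = "us-east-1"

  assume_role {
    # Depends on a data source, which reportedly triggers the fallback.
    role_arn = data.aws_ssm_parameter.role_arn.value
  }
}
```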
Ran into this issue today. My default aws provider config has region eu-west-1, but I have a second one, aliased as “main”, for region us-east-1, because WAFv2 ACL global (scope CLOUDFRONT) resources are only supported in us-east-1. My WAFv2 ACL resource has a provider argument indicating it wants the aws.main provider, but terraform import seems to give it the default one, because AWS returns an error that scope CLOUDFRONT does not exist (which is the case for regions other than us-east-1).

Why is this flagged as “enhancement”? It seems pretty clearly to be a bug, i.e. Terraform doesn't behave as it is documented to.

UPDATE: After applying @davehowell's workaround, I still got the AWS error.
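A sketch of the setup being described, trimmed to the required arguments (resource names are hypothetical):

```hcl
provider "aws" {
  region = "eu-west-1" # default provider
}

provider "aws" {
  alias  = "main"
  region = "us-east-1" # CLOUDFRONT-scoped WAFv2 resources must live here
}

resource "aws_wafv2_web_acl" "acl" {
  provider = aws.main
  name     = "example-acl"
  scope    = "CLOUDFRONT"

  default_action {
    allow {}
  }

  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = "example-acl"
    sampled_requests_enabled   = false
  }
}
```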
This issue seems to have slipped through the cracks when we fixed this in the recent releases. Closing as working in the current releases. |
Terraform 0.11.1
The TL;DR of the below is that Terraform does not appear to use providers and regions properly when passing providers through to child and grandchild modules, at least when using terraform import.
Background: I am getting this error on terraform import:
Context:
As per documented best practice, I am defining my providers at the top level of my project and cascading them down into modules. Here is what I do in a nutshell:
Then in the git repository I have this:
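The snippets themselves are summarised below; a minimal sketch of the cascade, using 0.11-era module `providers` syntax and the alias names from that summary (module sources are hypothetical):

```hcl
# Top level: region-specific provider, passed into the child module.
provider "aws" {
  alias  = "default_eu-central-1"
  region = "eu-central-1"
}

module "scaffold1" {
  source = "./scaffold1"

  providers = {
    "aws.ha2" = "aws.default_eu-central-1"
  }
}

# Inside module scaffold1 (sketch): the provider is passed on to a
# grandchild module under another alias, and the resources there
# select it explicitly:
#
#   module "buckets" {
#     source    = "./buckets"
#     providers = {
#       "aws.mha2" = "aws.ha2"
#     }
#   }
#
#   resource "aws_s3_bucket" "ha2" {
#     provider = "aws.mha2"
#     # ...
#   }
```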
Piecing all that together, from end to beginning:
aws_s3_bucket (ha2):
- is in region var.bucket_ha_region
- which is supplied to the module as eu-central-1

aws_s3_bucket (ha2):
- uses provider aws.mha2
- which is aliased to the module from aws.ha2 in module scaffold1
- which is aliased to the module from aws.default_eu-central-1 in the top-level project
- which is defined in the top-level project with region eu-central-1

aws_s3_bucket_policy (ha2):
- has no explicit region, because the resource does not support it
- uses provider aws.mha2
- which, via the same mechanism, means it uses a provider with region eu-central-1
So, as far as I can see, Terraform should not be complaining that it cannot import the policy because the bucket does not exist in eu-west-1. The bucket does exist, and it exists in eu-central-1. Terraform should know that, and should be trying to read the policy using the corresponding provider in eu-central-1.