Remote state not created when using modules. #15848
Hi @HebertCL! Sorry this isn't working properly. Thanks for the detailed reproduction steps. Agreed that this is some odd behavior. Hopefully we can duplicate your result here and figure out what's going awry.
@HebertCL Reading over your (very detailed) description, it is not clear to me if you ran terraform apply on the network configuration by itself before applying the others. Just to be clear, the terraform remote state data provider is a read-only mechanism, and does not cause Terraform to apply the remote configuration. I've seen this as a common misunderstanding in the community. It's also worth noting that the remote state data provider reads only outputs of the root module in the remote resources. So if you have a module and you want to access one of its outputs using the data provider, you'll need to add another output block to the root module that references the output of the submodule. See root outputs only from the provider docs.
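For example, a minimal sketch of re-exporting a submodule's output at the root level (module and output names here are illustrative):

# Root module: the submodule's output is not visible to remote state
# readers until it is re-exported here as a root-level output.
module "network" {
  source = "./network"
}

output "vpc_id" {
  value = "${module.network.vpc_id}"
}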
@nbering I did try applying only the network module prior to applying the other two. It applied correctly, but it didn't generate a terraform.tfstate file under my remote_state directory. The plan output I pasted above was just to show the errors I get when I try to plan or apply even if the network module was previously created. Thanks for the root outputs reference, this is something new for me that I need to take a look at.
I'm running into the same thing with the latest Terraform. My code looks like this:

data "terraform_remote_state" "core" {
backend = "s3"
config {
bucket = "m-devops-terraform"
key = "core-common/terraform.tfstate"
region = "us-east-1"
}
}
module "metadata" {
source = "../modules/metadata/"
tag_environment = "${data.terraform_remote_state.core.tag_environment}"
tag_maintainer = "${data.terraform_remote_state.core.tag_maintainer}"
tag_application = "${data.terraform_remote_state.core.tag_application}"
tag_run_always = "true"
tag_department = "${data.terraform_remote_state.core.tag_department}"
tag_project_name = "${data.terraform_remote_state.core.tag_project_name}"
}
module "bastion" {
source = "../modules/bastion"
region = "${data.terraform_remote_state.core.region}"
security_groups = "${data.terraform_remote_state.core.external_ssh},${data.terraform_remote_state.core.internal_ssh}"
vpc_id = "${data.terraform_remote_state.core.vpc_id}"
key_name = "${data.terraform_remote_state.core.key_name}"
subnet_id = "${data.terraform_remote_state.core.external_subnets.0.id}"
tag_environment = "${module.metadata.tag_environment}"
tag_maintainer = "${module.metadata.tag_maintainer}"
tag_application = "${module.metadata.tag_application}"
tag_run_always = "true"
tag_department = "${module.metadata.tag_department}"
tag_project_name = "${module.metadata.tag_project_name}"
name = "${module.metadata.tag_environment}-ssh"
}
When I look at the state for my core in S3, the outputs are all there:

{
"version": 3,
"terraform_version": "0.10.2",
"serial": 1,
"lineage": "******************",
"modules": [
{
"path": [
"root"
],
"outputs": {
"availability_zones": {
"sensitive": false,
"type": "list",
"value": [
"us-east-2a",
"us-east-2b",
"us-east-2c"
]
},
"domain_name": {
"sensitive": false,
"type": "string",
"value": "core-net.local"
},
"environment": {
"sensitive": false,
"type": "string",
"value": "staging"
},
"external_elb": {
"sensitive": false,
"type": "string",
"value": "sg-******************"
},
"external_route_tables": {
"sensitive": false,
"type": "string",
"value": "rtb-******************"
},
"external_ssh": {
"sensitive": false,
"type": "string",
"value": "sg-******************"
},
"external_subnets": {
"sensitive": false,
"type": "list",
"value": [
"subnet-******************",
"subnet-******************",
"subnet-******************"
]
},
"internal_elb": {
"sensitive": false,
"type": "string",
"value": "sg-******************"
},
"internal_route_tables": {
"sensitive": false,
"type": "string",
"value": "rtb-******************,rtb-******************,rtb-******************"
},
"internal_ssh": {
"sensitive": false,
"type": "string",
"value": "sg-******************"
},
"internal_subnets": {
"sensitive": false,
"type": "list",
"value": [
"subnet-******************",
"subnet-******************",
"subnet-******************"
]
},
"key_name": {
"sensitive": false,
"type": "string",
"value": "devo******************"
},
"log_bucket_id": {
"sensitive": false,
"type": "string",
"value": "******************-core-network-test-staging-logs"
},
"region": {
"sensitive": false,
"type": "string",
"value": "us-east-2"
},
"tag_application": {
"sensitive": false,
"type": "string",
"value": "core-network"
},
"tag_department": {
"sensitive": false,
"type": "string",
"value": "operations"
},
"tag_environment": {
"sensitive": false,
"type": "string",
"value": "staging"
},
"tag_maintainer": {
"sensitive": false,
"type": "string",
"value": "devops@******************.com"
},
"tag_project_name": {
"sensitive": false,
"type": "string",
"value": "d******************-core"
},
"vpc_id": {
"sensitive": false,
"type": "string",
"value": "vpc-******************"
},
"vpc_security_group": {
"sensitive": false,
"type": "string",
"value": "sg-******************"
},
"zone_id": {
"sensitive": false,
"type": "string",
"value": "Z3******************"
}
},
"resources": {
"data.aws_caller_identity.current": {
"type": "aws_caller_identity",
"depends_on": [],
"primary": {
"id": "2017-08-18 20:54:45.466849791 +0000 UTC",
"attributes": {
"account_id": "******************",
"arn": "arn:aws:iam::******************3:user/******************.com",
"id": "2017-08-18 20:54:45.466849791 +0000 UTC",
"user_id": "AIDAILJPLKR24UNC7CDIK"
},
"meta": {},
"tainted": false
},
"deposed": [],
"provider": ""
}
},
"depends_on": []
},
{
"path": [
"root",
"defaults"
],
"outputs": {
"domain_name_servers": {
"sensitive": false,
"type": "string",
"value": "10.30.0.2"
}
},
"resources": {},
"depends_on": []
},

For some reason, when calling them from the remote state, it can't find them. My outputs for the "core" remote data look like below. What am I doing incorrectly?

// The region in which the infra lives.
output "region" {
value = "${var.region}"
}
// The internal route53 zone ID.
output "zone_id" {
value = "${module.dns.zone_id}"
}
// Security group for internal ELBs.
output "internal_elb" {
value = "${module.security_groups.internal_elb}"
}
// Security group for external ELBs.
output "external_elb" {
value = "${module.security_groups.external_elb}"
}
// Security group for internal SSH.
output "internal_ssh" {
value = "${module.security_groups.internal_ssh}"
}
// Security group for external SSH.
output "external_ssh" {
value = "${module.security_groups.external_ssh}"
}
// Comma separated list of internal subnet IDs.
output "internal_subnets" {
value = "${module.vpc.internal_subnets}"
}
// Comma separated list of external subnet IDs.
output "external_subnets" {
value = "${module.vpc.external_subnets}"
}
// S3 bucket ID for ELB logs.
output "log_bucket_id" {
value = "${module.s3_logs.id}"
}
// The internal domain name, e.g "stack.local".
output "domain_name" {
value = "${module.dns.name}"
}
// The environment of the stack, e.g "prod".
output "environment" {
value = "${var.environment}"
}
// The VPC availability zones.
output "availability_zones" {
value = "${module.vpc.availability_zones}"
}
// The VPC security group ID.
output "vpc_security_group" {
value = "${module.vpc.security_group}"
}
// The VPC ID.
output "vpc_id" {
value = "${module.vpc.id}"
}
// Comma separated list of internal route table IDs.
output "internal_route_tables" {
value = "${module.vpc.internal_rtb_id}"
}
// The external route table ID.
output "external_route_tables" {
value = "${module.vpc.external_rtb_id}"
}
output "tag_environment" {
value = "${module.metadata.tag_environment}"
}
output "tag_department" {
value = "${module.metadata.tag_department}"
}
output "tag_application" {
value = "${module.metadata.tag_application}"
}
output "tag_maintainer" {
value = "${module.metadata.tag_maintainer}"
}
output "tag_project_name" {
value = "${module.metadata.tag_project_name}"
}
output "key_name" {
value = "${var.key_name}"
}
Would be nice to get a fix for this. Just hit this myself :(
Hi all! So far we're not entirely sure what is going on here. For those of you saying you've hit a similar problem, it would be helpful to see examples of what happens when running Terraform, ideally with the environment variable TF_LOG=TRACE set so we can see the detailed log output.
Hi all, I reproduced the issue with Terraform 0.10.7, 0.10.8 and 0.11.0-rc1. I have declared 3 different remote states stored in Azure blob storage:
With a man-in-the-middle proxy I get:
Terraform does not take the remote state key property into account. It always uses the first remote state's key. You can find a gist containing traces and a sample tf file: https://gist.github.com/romlinch/ff0754f4b2691c627ba53145ab895725. Hope it helps!
It seems to work when we use the azurerm backend instead of the azure backend in data "terraform_remote_state"...
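A sketch of the working form (storage account, container, and key names here are placeholders):

# The point is backend = "azurerm" rather than the older "azure".
data "terraform_remote_state" "network" {
  backend = "azurerm"

  config {
    storage_account_name = "mystorageaccount"
    container_name       = "tfstate"
    key                  = "network/terraform.tfstate"
    access_key           = "${var.storage_access_key}"
  }
}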
Hi, I have a similar problem: I want to split a big state file into several. The first part, "network", is OK, but when I try to apply my "security" section I get this error:
@apparentlymart below is the TF_LOG output: https://gist.github.com/orelhinhas/537b051c0269d0b02c2eb0e45df29300. Thanks!
Thanks for the extra details, everyone. I've still not managed to quite get my head around what's going on here, but we'll continue looking at it and try to figure it out.
Any updates on this, or ideas w/r/t workarounds? I've got all my infrastructure running using the approved AWS VPC TF registry module backed by S3, and it would be great to leverage terraform_remote_state.
I just ran into this yesterday as well on version 0.11.2.
Is anyone else using workspaces? I had this and it didn't work:
And changed it to this and now it works:
@apparentlymart can you confirm that this is the expected behavior for workspaces? That wasn't documented anywhere that I could see (but I'm also new to TF and may have missed it). I can open a separate issue for that if need be, as I do not know if others having the remote state issue were using workspaces or not.
@kylegoch have you seen the docs at https://www.terraform.io/docs/providers/terraform/d/remote_state.html? Specifically, the 'environment' option. Workspaces are now what environments were (c.f. https://www.terraform.io/docs/state/workspaces.html for the renaming of environments -> workspaces). Hope this helps.
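For reference, a sketch of using that option (bucket and key names are placeholders):

# 'environment' selects which workspace's state to read, instead of
# encoding the workspace into the key by hand.
data "terraform_remote_state" "network" {
  backend     = "s3"
  environment = "staging"

  config {
    bucket = "my-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}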
@serenitus I just realized I never came back here to update. So it turns out those docs are wrong (remote_state). Not sure if that helps solve the original issue.
@kylegoch
@tehmaspc Correct. I had the same issue that started this thread. However, I was using workspaces. It is very possible that other people in this issue may be experiencing troubles while not using workspaces. I just wanted to note that I did fix my issue with the remote state not being created, and hopefully it will help someone else.
@kylegoch Can you post your config again? I am having the same issue. Is what you have above working for you?
I can confirm the fix is to change this:
to this:
The documentation doesn't mention either option. Thanks @kylegoch!
Finally found that today and am now using:
Locally the state file looks good.
I finally think I got something working, but I don't like the fact you need to hardcode some of it. For reference:
@kylegoch yeah - I'm having issues w/ using remote state built from modules, and I'm not using workspaces. @downspot - are you using workspaces explicitly, or is it the case that you are simply adding the environment option?
@tehmaspc I was using environment on an older version but have since upgraded. In either case, however, unless I add the environment option it doesn't work.
$ terraform -v

main.tf
Can anyone help me fix the issue?
I'm having the same problem as well. It seems this problem has been there since 0.9.2. Any ideas on how to solve it? Thanks.

➜ terraform version

main.tf
result
@gotojeffray the module that writes to the remote state must re-export the sub-module's value as a root-level output.
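A sketch of what that looks like (paths and names here are illustrative):

# live/vpc/outputs.tf - root-level output re-exporting the child
# module's value so terraform_remote_state consumers can read it.
output "vpc_id" {
  value = "${module.vpc.vpc_id}"
}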
This will make the value appear as a root-level output, which is what terraform_remote_state reads.
@mmell thanks for your reply. The problem was solved by adding an output statement in my live/vpc/outputs.tf.
It won't work for a module's outputs, even if there is an output "vpc_id" {...} definition in my modules/vpc/outputs.tf. See https://www.terraform.io/docs/providers/terraform/d/remote_state.html.
I have the same issue if you look above. I’m using the same VPC module from Terraform Registry.
… On Mar 27, 2018, at 13:38, gotojeff ***@***.***> wrote:
@mmell thanks for your reply.
Since I'm using the VPC module here https://github.com/terraform-aws-modules/terraform-aws-vpc/blob/master/outputs.tf, the vpc_id is in the outputs.tf.
outputs.tf
output "vpc_id" {
description = "The ID of the VPC"
value = "${element(concat(aws_vpc.this.*.id, list("")), 0)}"
}
here is my tfstate in S3 bucket.
{
"version": 3,
"terraform_version": "0.11.4",
"serial": 29,
"lineage": "48c08dfd-29a7-fa42-d6ed-13a62ff7a6c5",
"modules": [
{
"path": [
"root"
],
"outputs": {},
"resources": {},
"depends_on": []
},
{
"path": [
"root",
"vpc"
],
"outputs": {
.......
"default_route_table_id": {
"sensitive": false,
"type": "string",
"value": "rtb-503fb32c"
},
"default_security_group_id": {
"sensitive": false,
"type": "string",
"value": "sg-9bcbe3ed"
},
"default_vpc_cidr_block": {
"sensitive": false,
"type": "string",
"value": ""
},
"vpc_id": {
"sensitive": false,
"type": "string",
"value": "vpc-d9c353a2"
},
.......
},
@gotojeffray
Ah. Eureka! Just saw your comment edit. Root level outputs. This will most likely solve the issue for me as well. Thanks!
@gotojeffray, @tehmaspc If the root module is the one that creates the remote state, then the sub-module and the root module must both declare the output.
Related -> Is it possible for each module to possess its own remote backend? In our case, I have three modules, one for each provider: AWS, GCloud, and Azure. Each already has a remote state file in its respective cloud storage provider, and each module is already configured by its own backend block.
@Jakexx360 That's really only loosely related to this issue... but the answer you're looking for is that you need to aggregate them into one, or operate on outputs from the other states using the Terraform Remote State data source.
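For example, an aggregating configuration can read each cloud's state separately (a sketch; all bucket, account, and key names here are placeholders):

# Read the AWS stack's root outputs from S3.
data "terraform_remote_state" "aws" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"
    key    = "aws/terraform.tfstate"
    region = "us-east-1"
  }
}

# Read the Azure stack's root outputs from blob storage.
data "terraform_remote_state" "azure" {
  backend = "azurerm"

  config {
    storage_account_name = "mystorageaccount"
    container_name       = "tfstate"
    key                  = "azure/terraform.tfstate"
    access_key           = "${var.storage_access_key}"
  }
}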
Same issue here! In securitygroup.tf I have:
And in instance.tf I have:
When I run terraform plan I get this error:
@cuongvu1992 does adding something like this to instance.tf help?
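A sketch of such an addition, assuming the security group state lives in S3 under a hypothetical key and that its ID is exported as a root-level output named sg_id:

# Read the security group's state and reference its root-level output.
data "terraform_remote_state" "securitygroup" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"
    key    = "securitygroup/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "example" {
  ami                    = "${var.ami_id}"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${data.terraform_remote_state.securitygroup.sg_id}"]
}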
Can someone please advise how to get this to work with the "local" backend? I've been trying desperately, but can't seem to get it to work. In the 1st (parent) template:
I have a VPC defined and deployed, and an outputs file declaring the values I need.
I then copied the state file. In the 2nd (child) template:
Error messages:
I've been working at this for some time.
@HebertCL I'm declaring the outputs and trying to consume them via the 2nd template. I haven't included everything here, because it's really a lot to include, but I can try. It doesn't seem to matter how I change this line; it doesn't seem to find the details of my remote configuration. It's also really unclear how the relative nature of this reference works. Where is the root of the relative path? Can anyone provide a working example, as this is sorely missing everywhere I look? I posted in that thread some time ago also.
@KptnKMan checking your code snippets again, I think I may have found your problem. Looking again at your backend configuration, you seem to be pointing it to the wrong path.
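With the "local" backend, the data source takes a path to the producing configuration's state file, resolved relative to where you run terraform. A sketch, assuming a hypothetical layout where the parent template lives in ../vpc:

# Read the parent template's state from disk; the path is relative to
# the working directory of the consuming configuration.
data "terraform_remote_state" "vpc" {
  backend = "local"

  config {
    path = "../vpc/terraform.tfstate"
  }
}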
Give it a shot. Hope it solves your problem.
Looking at the original description here with fresh eyes, I think I see a potential problem. It sounds like the vpc configuration is being used in two ways at once: applied directly as a root module (where its backend configuration takes effect) and instantiated as a child module of the main configuration (where the backend configuration is ignored, since only the root module's backend settings are honored). That would explain why no separate remote state appears when everything is applied together.

If you wish for the network configuration to have its own state that the other configurations read via terraform_remote_state, it must be applied separately, as its own root module. To get there from what @HebertCL reported in the original issue, you'd remove the module "network" block from the main configuration and instead run terraform init and terraform apply from within the vpc directory itself.
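A minimal sketch of that standalone arrangement (bucket and key names are hypothetical):

# vpc/main.tf - applied on its own as a root module, so this backend
# block takes effect and the state is written to the configured key.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}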
After this, other modules can then access the state file using the terraform_remote_state data source.

If, on the other hand, you do want to apply all of these together using a single terraform apply of the main configuration, the terraform_remote_state data sources aren't needed at all: the network module's outputs can be passed directly into the other modules as input variables:

provider "aws" {
region = "${var.aws_region}"
}
module "network" {
source = "./vpc/"
cidr = "${var.cidr}"
subnet_cidr = "${var.subnet_cidr}"
}
module "instance" {
source = "./instance/"
vpc_id = "${module.network.vpc_id}"
subnet_ids = "${module.network.subnet_ids}"
ami_id = "${var.ami_id}"
size = "${var.size}"
logging_key = "${var.logging_key}"
}
module "autoscaling" {
source = "./autoscaling"
vpc_id = "${module.network.vpc_id}"
subnet_ids = "${module.network.subnet_ids}"
image_id = "${var.image_id}"
inst_size = "${var.inst_size}"
ssh_key = "${var.ssh_key}"
ssh_port = "${var.ssh_port}"
http_port = "${var.http_port}"
min_size = "${var.min_size}"
max_size = "${var.max_size}"
}

This explicit passing of output values from one module into input variables of another is a crucial part of writing a modular Terraform configuration, because it is from these references that Terraform knows that it needs to complete the creation of the VPC and subnets before it attempts to create the instance and autoscaling group: each of the references you make creates a dependency edge. From reading over the rest of this thread it seems like others here may have different problems with similar symptoms, so we may need to split this up into multiple issues, but first I'd like to see what you all make of the above and whether choosing one of the two approaches addresses the problem.
@apparentlymart thanks for that - it makes sense. For what it's worth, that's now how we do things at Trint. We have a bunch of "core" or "base" Terraform modules that define things like VPCs, ECS clusters, R53 zones, etc. - what I refer to as the non- (or infrequently) moving parts. From there we colocate service-specific (think microservice) Terraform configs with the source code for that service. That Terraform config layers on top of the core config by accessing it via remote state - as you suggest. We then manage our environmental separation with workspaces. This all works really, really well, but it took us a long while to get there, growing with Terraform as it too figured things out :) I suspect it's very typical to have to blow away a few false starts with Terraform before getting to something that's scalable for one's use case.
@apparentlymart I just checked your inputs from my end. I absolutely agree with your statements regarding my own configuration. As you said, I can conclude the following:
I think this issue can be marked as resolved.
Original issue description

I have recently jumped back to modules with the latest Terraform version. I am creating a 2-tier infrastructure using AWS instances, autoscaling groups, and VPC resources. This is the layout:
Each folder contains a main.tf and vars.tf, with the exception of vpc, which also contains an outputs.tf file with the resources I need for the autoscaling and instance templates. My remote config in the vpc script looks like this:
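(A sketch of such a backend block; the bucket and key names are placeholders:)

terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "remote_state/terraform.tfstate"
    region = "us-east-1"
  }
}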
And the remote state statement in the instance and autoscaling modules is as follows:
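(A sketch, with placeholder names matching the backend above:)

data "terraform_remote_state" "vpc" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"
    key    = "remote_state/terraform.tfstate"
    region = "us-east-1"
  }
}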
I have tested this script alone, and it plans and applies correctly and creates the desired remote state file I need for my other modules. However, when I load it as a module and plan with the rest of the resources, I get errors like the following:
This is my main script that contains all three modules:
I have tested creating only my network module too. In this case, no remote state gets created, and I still get the above errors when I use the other 2 modules. I also tested commenting out the parameters that use the remote state data and confirmed I can plan and apply with no errors. I appreciate any help or guidance.
Terraform Version
v0.10.1
v0.10.2
Affected resources
remote state file
Expected behavior
Plan and apply the module script.
Actual behavior
Terraform does not create a remote state and throws errors when planning and applying the module script.