Remote state not created when using modules. #15848

Closed
HebertCL opened this issue Aug 17, 2017 · 43 comments

@HebertCL

HebertCL commented Aug 17, 2017

I have recently jumped back to modules with the latest Terraform version. I am creating a 2-tier infrastructure using AWS instances, autoscaling groups, and VPC resources. This is the layout:

├───autoscaling
├───instance
├───remote_state
│   └───vpc
└───vpc

Each folder contains a main.tf and vars.tf, with the exception of vpc, which also contains an outputs.tf file with the outputs I need for the autoscaling and instance templates. My remote config in the vpc script looks like this:

terraform {
  backend "local" {
    path = "../remote_state/vpc/terraform.tfstate"
  }
}
...

And the remote state data source in the instance and autoscaling modules is as follows:

data "terraform_remote_state" "vpc" {
  backend = "local"
  config {
      path = "../remote_state/vpc/terraform.tfstate" 
  }
}

I have tested this script alone and it plans and applies correctly, and creates the desired remote state file I need for my other modules. However, when I load it as a module and plan with the rest of the resources, I get errors like the following:

>terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.terraform_remote_state.network: Refreshing state...
data.terraform_remote_state.vpc: Refreshing state...
Error running plan: 2 error(s) occurred:

* module.autoscaling.aws_security_group.web: 1 error(s) occurred:

* module.autoscaling.aws_security_group.web: Resource 'data.terraform_remote_state.vpc' does not have attribute 'vpc_id' for variable 'data.terraform_remote_state.vpc.vpc_id'
* module.instance.aws_instance.module_instance: 1 error(s) occurred:

* module.instance.aws_instance.module_instance: Resource 'data.terraform_remote_state.network' does not have attribute 'subnet_id' for variable 'data.terraform_remote_state.network.subnet_id'

This is my main script that contains all three modules:

provider "aws" {
  region = "${var.aws_region}"
}

module "network" {
  source = "./vpc/"

  cidr = "${var.cidr}"
  subnet_cidr = "${var.subnet_cidr}"
}

module "instance" {
  source = "./instance/"

  ami_id = "${var.ami_id}"
  size = "${var.size}"
  logging_key = "${var.logging_key}"
}

module "autoscaling" {
  source = "./autoscaling"

  image_id = "${var.image_id}"
  inst_size = "${var.inst_size}"
  ssh_key = "${var.ssh_key}"
  ssh_port = "${var.ssh_port}"
  http_port = "${var.http_port}"
  min_size = "${var.min_size}"
  max_size = "${var.max_size}"
}

I have tested creating only my network module too. In this case, no remote state gets created, and I still get the above errors when I use the other two modules. I also tested commenting out the parameters that use the remote state data and confirmed I can plan and apply with no errors. I appreciate any help or guidance.

Terraform Version

v0.10.1
v0.10.2

Affected resources

remote state file

Expected behavior

Plan and apply the module script.

Actual behavior

Terraform does not create a remote state and throws errors when planning and applying the module script.

@apparentlymart
Contributor

Hi @HebertCL! Sorry this isn't working properly.

Thanks for the detailed reproduction steps. Agreed that this is some odd behavior. Hopefully we can duplicate your result here and figure out what's going awry.

@nbering

nbering commented Aug 18, 2017

@HebertCL Reading over your (very detailed) description, it is not clear to me if you ran terraform apply on your VPC configuration before running the dependent configuration.

Just to be clear: the terraform_remote_state data source is a read-only mechanism, and does not cause Terraform to apply the remote configuration. I've seen this as a common misunderstanding in the community.

It's also worth noting that the remote state data source reads only the outputs of the root module in the remote state. So if you have a module and you want to access one of its outputs using the data source, you'll need to add another output block to the root module that references the output of the submodule. See Root Outputs Only in the provider docs.
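
For example, a minimal sketch of that passthrough pattern (the module and output names here are illustrative, not taken from the original configuration):

# Root module of the configuration whose state is read remotely.
module "vpc" {
  source = "./vpc"
}

# Thread the child module's output up to a root-level output;
# terraform_remote_state can only read root outputs.
output "vpc_id" {
  value = "${module.vpc.vpc_id}"
}

# Consumers can then reference "${data.terraform_remote_state.NAME.vpc_id}".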

@HebertCL
Author

@nbering I did try applying only the network module prior to applying the other two. It applied correctly, but it didn't generate a terraform.tfstate file under my remote_state directory.

The plan output I pasted above was just to show the errors I get when I try to plan or apply even if the network module was previously created.

Thanks for the root outputs reference; this is something new to me that I need to take a look at.

@moos3

moos3 commented Aug 21, 2017

I'm running into the same thing with the latest Terraform. My code looks like this:

data "terraform_remote_state" "core" {
  backend = "s3"

  config {
    bucket = "m-devops-terraform"
    key    = "core-common/terraform.tfstate"
    region = "us-east-1"
  }
}

module "metadata" {
  source = "../modules/metadata/"

  tag_environment  = "${data.terraform_remote_state.core.tag_environment}"
  tag_maintainer   = "${data.terraform_remote_state.core.tag_maintainer}"
  tag_application  = "${data.terraform_remote_state.core.tag_application}"
  tag_run_always   = "true"
  tag_department   = "${data.terraform_remote_state.core.tag_department}"
  tag_project_name = "${data.terraform_remote_state.core.tag_project_name}"
}

module "bastion" {
  source           = "../modules/bastion"
  region           = "${data.terraform_remote_state.core.region}"
  security_groups  = "${data.terraform_remote_state.core.external_ssh},${data.terraform_remote_state.core.internal_ssh}"
  vpc_id           = "${data.terraform_remote_state.core.vpc_id}"
  key_name         = "${data.terraform_remote_state.core.key_name}"
  subnet_id        = "${data.terraform_remote_state.core.external_subnets.0.id}"
  tag_environment  = "${module.metadata.tag_environment}"
  tag_maintainer   = "${module.metadata.tag_maintainer}"
  tag_application  = "${module.metadata.tag_application}"
  tag_run_always   = "true"
  tag_department   = "${module.metadata.tag_department}"
  tag_project_name = "${module.metadata.tag_project_name}"
  name             = "${module.metadata.tag_environment}-ssh"
}

When I look at the state for my core in S3, the outputs are all there.

{
  "version": 3,
  "terraform_version": "0.10.2",
  "serial": 1,
  "lineage": "******************",
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {
        "availability_zones": {
          "sensitive": false,
          "type": "list",
          "value": [
            "us-east-2a",
            "us-east-2b",
            "us-east-2c"
          ]
        },
        "domain_name": {
          "sensitive": false,
          "type": "string",
          "value": "core-net.local"
        },
        "environment": {
          "sensitive": false,
          "type": "string",
          "value": "staging"
        },
        "external_elb": {
          "sensitive": false,
          "type": "string",
          "value": "sg-******************"
        },
        "external_route_tables": {
          "sensitive": false,
          "type": "string",
          "value": "rtb-******************"
        },
        "external_ssh": {
          "sensitive": false,
          "type": "string",
          "value": "sg-******************"
        },
        "external_subnets": {
          "sensitive": false,
          "type": "list",
          "value": [
            "subnet-******************",
            "subnet-******************",
            "subnet-******************"
          ]
        },
        "internal_elb": {
          "sensitive": false,
          "type": "string",
          "value": "sg-******************"
        },
        "internal_route_tables": {
          "sensitive": false,
          "type": "string",
          "value": "rtb-******************,rtb-******************,rtb-******************"
        },
        "internal_ssh": {
          "sensitive": false,
          "type": "string",
          "value": "sg-******************"
        },
        "internal_subnets": {
          "sensitive": false,
          "type": "list",
          "value": [
            "subnet-******************",
            "subnet-******************",
            "subnet-******************"
          ]
        },
        "key_name": {
          "sensitive": false,
          "type": "string",
          "value": "devo******************"
        },
        "log_bucket_id": {
          "sensitive": false,
          "type": "string",
          "value": "******************-core-network-test-staging-logs"
        },
        "region": {
          "sensitive": false,
          "type": "string",
          "value": "us-east-2"
        },
        "tag_application": {
          "sensitive": false,
          "type": "string",
          "value": "core-network"
        },
        "tag_department": {
          "sensitive": false,
          "type": "string",
          "value": "operations"
        },
        "tag_environment": {
          "sensitive": false,
          "type": "string",
          "value": "staging"
        },
        "tag_maintainer": {
          "sensitive": false,
          "type": "string",
          "value": "devops@******************.com"
        },
        "tag_project_name": {
          "sensitive": false,
          "type": "string",
          "value": "d******************-core"
        },
        "vpc_id": {
          "sensitive": false,
          "type": "string",
          "value": "vpc-******************"
        },
        "vpc_security_group": {
          "sensitive": false,
          "type": "string",
          "value": "sg-******************"
        },
        "zone_id": {
          "sensitive": false,
          "type": "string",
          "value": "Z3******************"
        }
      },
      "resources": {
        "data.aws_caller_identity.current": {
          "type": "aws_caller_identity",
          "depends_on": [],
          "primary": {
            "id": "2017-08-18 20:54:45.466849791 +0000 UTC",
            "attributes": {
              "account_id": "******************",
              "arn": "arn:aws:iam::******************3:user/******************.com",
              "id": "2017-08-18 20:54:45.466849791 +0000 UTC",
              "user_id": "AIDAILJPLKR24UNC7CDIK"
            },
            "meta": {},
            "tainted": false
          },
          "deposed": [],
          "provider": ""
        }
      },
      "depends_on": []
    },
    {
      "path": [
        "root",
        "defaults"
      ],
      "outputs": {
        "domain_name_servers": {
          "sensitive": false,
          "type": "string",
          "value": "10.30.0.2"
        }
      },
      "resources": {},
      "depends_on": []
    },

For some reason, when calling them from the remote state, it can't find them. My outputs for the "core" remote state look like the below. What am I doing incorrectly?

// The region in which the infra lives.
output "region" {
  value = "${var.region}"
}

// The internal route53 zone ID.
output "zone_id" {
  value = "${module.dns.zone_id}"
}

// Security group for internal ELBs.
output "internal_elb" {
  value = "${module.security_groups.internal_elb}"
}

// Security group for external ELBs.
output "external_elb" {
  value = "${module.security_groups.external_elb}"
}

// Security group for internal SSH.
output "internal_ssh" {
  value = "${module.security_groups.internal_ssh}"
}

// Security group for external SSH.
output "external_ssh" {
  value = "${module.security_groups.external_ssh}"
}

// Comma separated list of internal subnet IDs.
output "internal_subnets" {
  value = "${module.vpc.internal_subnets}"
}

// Comma separated list of external subnet IDs.
output "external_subnets" {
  value = "${module.vpc.external_subnets}"
}

// S3 bucket ID for ELB logs.
output "log_bucket_id" {
  value = "${module.s3_logs.id}"
}

// The internal domain name, e.g "stack.local".
output "domain_name" {
  value = "${module.dns.name}"
}

// The environment of the stack, e.g "prod".
output "environment" {
  value = "${var.environment}"
}

// The VPC availability zones.
output "availability_zones" {
  value = "${module.vpc.availability_zones}"
}

// The VPC security group ID.
output "vpc_security_group" {
  value = "${module.vpc.security_group}"
}

// The VPC ID.
output "vpc_id" {
  value = "${module.vpc.id}"
}

// Comma separated list of internal route table IDs.
output "internal_route_tables" {
  value = "${module.vpc.internal_rtb_id}"
}

// The external route table ID.
output "external_route_tables" {
  value = "${module.vpc.external_rtb_id}"
}

output "tag_environment" {
  value = "${module.metadata.tag_environment}"
}

output "tag_department" {
  value = "${module.metadata.tag_department}"
}

output "tag_application" {
  value = "${module.metadata.tag_application}"
}

output "tag_maintainer" {
  value = "${module.metadata.tag_maintainer}"
}

output "tag_project_name" {
  value = "${module.metadata.tag_project_name}"
}

output "key_name" {
  value = "${var.key_name}"
}

@tehmaspc

tehmaspc commented Nov 5, 2017

Would be nice to get a fix for this. Just ran into this myself :(

@apparentlymart
Contributor

Hi all,

So far we're not entirely sure what is going on here. For those of you saying you've hit a similar problem, it would be helpful to see examples of what happens when running Terraform, ideally with the environment variable TF_LOG=trace set to see what's happening internally. (Since the log output is long, please create a gist and share a link to it here.)

@romlinch

romlinch commented Nov 15, 2017

Hi all,

I reproduced the issue with terraform 0.10.7, 0.10.8 and 0.11.0-rc1.

I have declared 3 different remote states stored in Azure Blob Storage:

  1. binaries.tfstates
  2. iron-core-dev.tfstates
  3. iron-zone-demo-dev.tfstates

With a man-in-the-middle proxy I get:

172.18.0.1:58554: GET https://xxxxxxxxxxxxxxxxxxxxxxxxxxxx/states/binaries.tfstate
               << 200 OK 20.53k
172.18.0.1:58546: GET https://xxxxxxxxxxxxxxxxxxxxxxxxxxxx/states/binaries.tfstate
               << 200 OK 20.53k
172.18.0.1:58550: GET https://xxxxxxxxxxxxxxxxxxxxxxxxxxxx/states/binaries.tfstate
               << 200 OK 20.53k

Terraform does not take the remote state key property into account. It always uses the first remote state's key.

You can find a gist containing traces and sample tf file: https://gist.github.com/romlinch/ff0754f4b2691c627ba53145ab895725

Hope it helps!

@romlinch

It seems to work when we use the azurerm backend instead of the azure backend in data "terraform_remote_state"...
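
For reference, a minimal sketch of the azurerm form described above (the storage account and container names are placeholders; the key is taken from the trace in the gist):

data "terraform_remote_state" "binaries" {
  backend = "azurerm"

  config {
    storage_account_name = "examplestorageaccount" # placeholder
    container_name       = "states"                # placeholder, from the trace URL
    key                  = "binaries.tfstate"
    access_key           = "${var.storage_access_key}" # assumed variable
  }
}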

@orelhinhas

orelhinhas commented Nov 22, 2017

Hi, I have a similar problem: I want to split a big state file into several. The first part, "network", is OK, but when I try to apply my "security" section I get this error:

Error: Error running plan: 30 error(s) occurred:

* module.sg_oreoneprd.var.vpc_id: Resource 'data.terraform_remote_state.network' does not have attribute 'vpc_id' for variable 'data.terraform_remote_state.network.vpc_id'
* module.sg_oreqkvprd.var.vpc_id: Resource 'data.terraform_remote_state.network' does not have attribute 'vpc_id' for variable 'data.terraform_remote_state.network.vpc_id'

@apparentlymart below is the TF_LOG
(I changed all the sensitive information)

https://gist.github.com/orelhinhas/537b051c0269d0b02c2eb0e45df29300

thanks!

@apparentlymart
Contributor

Thanks for the extra details, everyone. I've still not managed to quite get my head around what's going on here but we'll continue looking at it and try to figure it out.

@tehmaspc

tehmaspc commented Jan 8, 2018

Any updates on this, or ideas with regard to workarounds? I've got all my infrastructure running using the approved AWS VPC TF registry module backed by S3, and it would be great to leverage remote state from modules properly so that I can glue the rest of my infrastructure together more succinctly using data sources.

@kylegoch

I just ran into this yesterday as well, on version 0.11.2.
My setup is pretty much the same as @moos3's.

@kylegoch

kylegoch commented Jan 17, 2018

Is anyone else using workspaces? It looks like data.terraform_remote_state isn't following the workspace path for the remote state.

I had this and it didn't work:

data "terraform_remote_state" "networking" {
  backend = "s3"
  config {
    bucket         = "bucket"
    key            = "network.json"
    region         = "us-east-2"
  }
}

And changed it to this and now it works:

data "terraform_remote_state" "networking" {
  backend = "s3"
  config {
    bucket         = "bucket"
    key            = "env:/${terraform.workspace}/network.json"
    region         = "us-east-2"
  }
}

@apparentlymart can you confirm that that is the expected behavior for workspaces? It wasn't documented anywhere that I could see (but I'm also new to TF and may have missed it). I would expect data.terraform_remote_state to follow the workspace the same way terraform.backend.s3 does, but that doesn't seem to be the case.

I can open a separate issue for that if need be, as I do not know if others having the remote state issue were using workspaces or not.

@serenitus

@kylegoch have you seen the docs at https://www.terraform.io/docs/providers/terraform/d/remote_state.html?

Specifically, the 'environment' option. Workspaces are now what environments were (cf. https://www.terraform.io/docs/state/workspaces.html for the renaming of environments -> workspaces).

Hope this helps.

@kylegoch

@serenitus I just realized I never came back here to update.

So it turns out those docs are wrong (remote_state), and environment really is now workspace despite not being listed in the docs. I added that key to the config and removed the extra prefix from my S3 key, and I'm good to go.
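
A minimal sketch of that working shape, reusing the bucket and key names from the earlier snippet:

data "terraform_remote_state" "networking" {
  backend   = "s3"
  workspace = "${terraform.workspace}"

  config {
    bucket = "bucket"
    key    = "network.json"
    region = "us-east-2"
  }
}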

Not sure if that helps solve the original issue.

@tehmaspc

tehmaspc commented Jan 31, 2018

@kylegoch
It sounds like you got this working IF you are using workspaces, correct?

@kylegoch

@tehmaspc Correct. I had the same problem that started this issue. However, I was using workspaces and got it to work as noted above.

However, it is very possible that other people in this Issue may be experiencing troubles and not using workspaces at all.

I just wanted to note that I did fix my issue with the remote state not being created, and hopefully it will help someone else.

@downspot

downspot commented Feb 8, 2018

@kylegoch Can you post your config again? I am having the same issue. Is what you have above working for you?

@mmell
Contributor

mmell commented Feb 8, 2018

I can confirm the fix is to change this:

data "terraform_remote_state" "other-module" {
  backend = "s3"
  environment = "${terraform.workspace}" # old format is broken in TF 0.11
}

to this:

data "terraform_remote_state" "other-module" {
  backend = "s3"
  workspace = "${terraform.workspace}" # WORKS & IS REQUIRED
}

The documentation doesn't mention either environment or workspace.

Thanks @kylegoch!

@kylegoch

kylegoch commented Feb 8, 2018

@downspot My config looks the same as what @mmell posted above, the key being workspace, which is not mentioned in the docs.

@downspot

downspot commented Feb 8, 2018

Finally found that today and am now using workspace. My next issue is that the remotely stored state files are just a blank state, like this:

{
    "version": 3,
    "serial": 1,
    "lineage": "9eaeb21e-34c0-492b-aa75-44bab92773cb",
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {},
            "depends_on": []
        }
    ]
}

Locally the state file looks good.

@downspot

downspot commented Feb 9, 2018

I finally think I got something working, but I don't like the fact that you need to hardcode some of it. For reference:

data "terraform_remote_state" "network" {
  backend       = "s3"
  workspace     = "${terraform.workspace}"

  config {
    bucket      = "ursa-terraform"
    key         = "terraform.tfstate"
    region      = "${var.region}"
  }
}

terraform {
  backend "s3" {
    bucket = "ursa-terraform"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

@tehmaspc

tehmaspc commented Feb 9, 2018

@kylegoch yeah - I'm having issues w/ using remote state built from modules and I'm not using workspaces.

@downspot - are you using workspaces explicitly, or are you simply adding the workspace attribute to terraform_remote_state in order to get remote state from modules to work for you?

@downspot

downspot commented Feb 9, 2018

@tehmaspc I was using environment on an older version but have since upgraded. In either case, however, unless I add the terraform block (the second section in the example above), I was only able to get blank state files on the remote. It wasn't until I added the terraform block that it started working properly. I use workspaces locally, and without that block state was just being stored locally; now it's all remote and works well. The only thing that sucks is having to hardcode values inside the terraform block, but that's by design.

@pratik705

I am facing a similar issue in my setup:

$ terraform -v
Terraform v0.11.3

main.tf

[...]
//Define startup script
data "template_file" "startup_script" {
  template = "${file("${path.module}/webServer.sh")}"

  vars {
    server_port = "${var.server_port}"
    db_address  = "${data.terraform_remote_state.db.address}"

    //    db_port     = "${data.terraform_remote_state.db.port}"
  }
}

data "terraform_remote_state" "db" {
  backend   = "s3"
  workspace = "${terraform.workspace}"

  config {
    bucket = "${var.remote_state_s3_bucket}"
    key    = "${var.remote_state_db_key}"
    region = "${var.region}"
  }
}
[...]

+++

$ terraform plan
[...]
* module.webApp.data.template_file.startup_script: 1 error(s) occurred:

* module.webApp.data.template_file.startup_script: Resource 'data.terraform_remote_state.db' does not have attribute 'address' for variable 'data.terraform_remote_state.db.address'

Can anyone help me to fix the issue?

@gotojeffray

I'm having the same problem as well. It seems that this problem has existed since 0.9.2. Any ideas on how to solve it? Thanks.

➜ terraform version
Terraform v0.11.5

main.tf

data "terraform_remote_state" "vpc" {
  backend   = "s3"
  workspace = "${terraform.workspace}"

  config {
    bucket  = "terraform-s3-tfstate"
    key     = "vpc/terraform.tfstate"
    region  = "ap-southeast-1"
  }
}

module "security-group" {
  source      = "../../../../terraform-aws-modules/infra/security_group/"
  name        = "ipv4-ipv6-example"
  description = "IPv4 and IPv6 example"
  vpc_id      = "${data.terraform_remote_state.vpc.vpc_id}"
...

}

result

Error: Error running plan: 2 error(s) occurred:

* data.aws_security_group.default: 1 error(s) occurred:

* data.aws_security_group.default: Resource 'data.terraform_remote_state.vpc' does not have attribute 'vpc_id' for variable 'data.terraform_remote_state.vpc.vpc_id'
* module.security-group.var.vpc_id: Resource 'data.terraform_remote_state.vpc' does not have attribute 'vpc_id' for variable 'data.terraform_remote_state.vpc.vpc_id'

@mmell
Contributor

mmell commented Mar 27, 2018

@gotojeffray the module that writes to s3://terraform-s3-tfstate/vpc/terraform.tfstate must contain

output "vpc_id" {
  value = "${var.vpc_id}"
}

This will make var.vpc_id available in the remote state. It's mentioned in passing here.

@gotojeffray

gotojeffray commented Mar 27, 2018

@mmell thanks for your reply.

The problem was solved by adding an output statement in my live/vpc/outputs.tf.
live/vpc/outputs.tf

output "vpc_id" {
  description = "The ID of the VPC"
  value       = "${module.vpc.vpc_id}"
}

It won't work for a module's outputs, even though there is an output "vpc_id" {...} definition in my modules/vpc/outputs.tf:

https://www.terraform.io/docs/providers/terraform/d/remote_state.html

Root Outputs Only
Only the root level outputs from the remote state are accessible. Outputs from modules within the state cannot be accessed. If you want a module output to be accessible via a remote state, you must thread the output through to a root output.


@mmell
Contributor

mmell commented Mar 27, 2018

@gotojeffray, @tehmaspc If the root module is the one that creates the remote state, then the sub-module and the root module must both output "vpc_id"....

@Jtaylorapps

Related -> Is it possible for each module to have its own remote backend?

In our case, I have three modules, one for each provider: AWS, GCloud, and Azure. Each already has a remote state file in its respective cloud storage provider, and each module is already configured with its own terraform {} block. I was hoping to preserve these states, or must I aggregate all the states into a single place?

@nbering

nbering commented Apr 26, 2018

@Jakexx360 That's really only loosely related to this issue... but the answer you're looking for is that you need to aggregate into one, or operate on outputs from the other states using the Terraform Remote State data source.

@cuongvu1992

cuongvu1992 commented May 16, 2018

Same issue here! In securitygroup.tf I have:

terraform {
  backend "s3" {
    bucket = "terraform-state-de78ilkf89"
    key    = "terraform/demo"
    region = "us-east-1"
  }
}

output "this_id" {
  value = "${aws_security_group.from_usa.id}"
}

And in instance.tf I have:

data "terraform_remote_state" "sg" {
backend = "s3"
config {
bucket = "terraform-state-de78ilkf89"
key = "terraform/demo"
region = "us-east-1"
}
}
output "idInstance" {
value = "${data.terraform_remote_state.sg.this_id}"
}

When I run terraform plan I get this error:

output.idInstance: Resource 'data.terraform_remote_state.sg' does not have attribute 'this_id' for variable 'data.terraform_remote_state.sg.this_id'

I wonder how I can pass the output of one tf file to another with remote state?
Thank you.

@mmell
Contributor

mmell commented May 16, 2018

@cuongvu1992 does adding workspace to the remote state help?

instance.tf

data "terraform_remote_state" "sg" {
  backend = "s3"
  workspace = "default" # <<-- Add this
  config {
    bucket = "terraform-state-de78ilkf89"
    key = "terraform/demo"
    region = "us-east-1"
  }
}

output "idInstance" {
  value = "${data.terraform_remote_state.sg.this_id}"
}

Docs: https://www.terraform.io/docs/state/workspaces.html

@KptnKMan

KptnKMan commented Aug 3, 2018

Can someone please advise how to get this to work with the "local" backend?

I've been trying desperately, but can't seem to get it to work.

In 1st (parent) template:

terraform {
  backend "local" {
    path = "config/cluster.state.remote"
  }
}

I have a VPC defined and deployed, and an output element configured to output the vpc_id:

module "my_vpc" {
  source               = "github.com/terraform-aws-modules/terraform-aws-vpc"
  name                 = "${var.cluster_name}-deploy-vpc"
  azs                  = "${var.aws_availability_zones}"
  cidr                 = "${var.deploy_cidr}"
  private_subnets      = "${var.private_cidr}"
  public_subnets       = "${var.public_cidr}"
  map_public_ip_on_launch = true

  enable_dns_hostnames = true
  enable_dns_support   = true

  enable_nat_gateway   = true
  single_nat_gateway   = true

  enable_vpn_gateway   = false

  enable_s3_endpoint   = false
  enable_dynamodb_endpoint = false
}

output "vpc_id" {
  value = "${module.my_vpc.vpc_id}"
}

I then copied the cluster.state.remote file to the same config dir of the second template:

In 2nd (child) template:

data "terraform_remote_state" "vpc" {
  backend = "local"

  config {
    path = "config/cluster.state.remote"
  }
}

Error messages:

Error: Error running plan: 2 error(s) occurred:

* aws_security_group.aws_sg: 1 error(s) occurred:
* aws_security_group.aws_sg: Resource 'data.terraform_remote_state.vpc' does not have attribute 'aws_security_group.common_sg.id' for variable 'data.terraform_remote_state.vpc.aws_security_group.common_sg.id'

* aws_security_group.elb_sg: 1 error(s) occurred:
* aws_security_group.elb_sg: Resource 'data.terraform_remote_state.vpc' does not have attribute 'output.vpc_id' for variable 'data.terraform_remote_state.vpc.output.vpc_id'

I've been working at this for some time.
Is there a workaround please?

@HebertCL
Author

HebertCL commented Aug 3, 2018

@KptnKMan Looking at the errors, either you are missing output declarations for those resources or they are simply not being recognized. A gist with your script might help to figure out which is the issue. It sounds to me like it is related to issue #17615.

@KptnKMan

KptnKMan commented Aug 3, 2018

@HebertCL I'm declaring the outputs and trying to consume them via the 2nd template.

I haven't included everything here, because it's really a lot to include, but I can try.
I've followed the documentation here, but the issue seems to be with the line:
path = "${path.module}/../../terraform.tfstate"

It doesn't seem to matter if I change this line; it doesn't find the details of my remote configuration. It's also really unclear how the relative nature of this reference works. Where is the root of ${path.module}?

Can anyone provide a working example, as this is sorely missing everywhere I look?

I posted in that thread some time ago also.

@HebertCL
Author

HebertCL commented Aug 3, 2018

@KptnKMan checking your code snippets again, I think I may have found your problem. Looking again at your backend configuration, you seem to be pointing it to config/cluster.state.remote. I am assuming this config folder is in fact an additional folder inside the directory where the script that creates your VPC and security group resides. So far, everything is OK.
Next, you are declaring a remote state data source which points to config/cluster.state.remote. There are some things worth mentioning here:

  • The documentation states that the script using your remote state must point to the place where the tfstate file resides, using a relative path. Following that logic, your child template should use a path similar to this:
data "terraform_remote_state" "vpc" {
  backend = "local"

  config {
    path = "../path-to-vpc-provision-script/config/cluster.state.remote"
  }
}
  • Although you can always use absolute paths in remote state configurations, it is recommended to use relative paths, since they make your script reusable.

Give it a shot. Hope it solves your problem.

@apparentlymart
Contributor

Looking at the original description here with fresh eyes, I think I see a potential problem.

It sounds like the backend "local" block described is inside the ./vpc module. Backend is a concept that applies to the entire configuration, not to a single module. Therefore it must be defined in the root module in order to take effect.

If you wish for the vpc module to have its own state, then it's necessary to treat that one as a root module itself, rather than accessing it as a child module using a module block.

To get there from what @HebertCL reported in the original issue, you'd remove the module "network" block from the root configuration altogether, and instead run terraform apply in the ./vpc directory:

cd vpc
terraform init
terraform apply

After this, other modules can then access the state file using data "terraform_remote_state".
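
For example, in the instance configuration run as its own root (a sketch, assuming the vpc configuration exposes a root output named vpc_id):

data "terraform_remote_state" "vpc" {
  backend = "local"

  config {
    # Path to the state file written by the vpc configuration,
    # relative to where this configuration is run.
    path = "../remote_state/vpc/terraform.tfstate"
  }
}

# Then reference the vpc configuration's root outputs, e.g.:
#   vpc_id = "${data.terraform_remote_state.vpc.vpc_id}"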

If, on the other hand, you do want to apply all of these together using a single terraform apply command in the root, you don't need remote state at all: you can (and should) just pass the settings directly between the modules.

provider "aws" {
  region = "${var.aws_region}"
}

module "network" {
  source = "./vpc/"

  cidr        = "${var.cidr}"
  subnet_cidr = "${var.subnet_cidr}"
}

module "instance" {
  source = "./instance/"

  vpc_id      = "${module.network.vpc_id}"
  subnet_ids  = "${module.network.subnet_ids}"
  ami_id      = "${var.ami_id}"
  size        = "${var.size}"
  logging_key = "${var.logging_key}"
}

module "autoscaling" {
  source = "./autoscaling"

  vpc_id     = "${module.network.vpc_id}"
  subnet_ids = "${module.network.subnet_ids}"
  image_id   = "${var.image_id}"
  inst_size  = "${var.inst_size}"
  ssh_key    = "${var.ssh_key}"
  ssh_port   = "${var.ssh_port}"
  http_port  = "${var.http_port}"
  min_size   = "${var.min_size}"
  max_size   = "${var.max_size}"
}

This explicit passing of output values from one module into input variables of another is a crucial part of writing a modular Terraform configuration, because it is from these references that Terraform knows that it needs to complete the creation of the VPC and subnets before it attempts to create the instance and autoscaling group: each of the references you make creates a dependency edge.
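
For those references to resolve, the vpc module must expose the values as outputs of its own. A sketch of what ./vpc/outputs.tf might contain, assuming the module creates resources named aws_vpc.main and aws_subnet.public (those names are assumptions):

output "vpc_id" {
  value = "${aws_vpc.main.id}"
}

output "subnet_ids" {
  value = ["${aws_subnet.public.*.id}"]
}

The instance and autoscaling modules would then declare matching variable "vpc_id" and variable "subnet_ids" blocks so they can accept those values.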

From reading over the rest of this thread it seems like others here may have different problems with similar symptoms, so we may need to split this up into multiple issues. But first I'd like to see what you all make of the above and whether choosing one of the two approaches addresses the problem.

@serenitus

@apparentlymart thanks for that - it makes sense. For what it's worth, that's now how we do things at Trint. We have a bunch of "core" or "base" Terraform modules that define things like VPCs, ECS clusters, R53 zones, etc. - what I refer to as the non- (or infrequently) moving parts.

From there we colocate service-specific (think microservice) Terraform configs with the source code for that service. That Terraform config layers on top of the core config by accessing it via remote state - as you suggest.

We then manage our environmental separation with workspaces.

This all works really, really well, but it took us a long while to get there, growing with Terraform as it too figured things out :) I suspect it's very typical to have to blow away a few false starts with Terraform before getting to something that's scalable for one's use case.

@HebertCL
Author

@apparentlymart I just checked your inputs from my end. I absolutely agree with your statements regarding my own configuration. As you said, I can conclude the following:

  • A child module cannot create a remote state, nor does it need to in order to interact with other modules.
  • If a configuration uses a backend, it will always be a root module. This means each such module requires its own terraform init and terraform apply commands.
  • If using remote state, the order in which root modules are applied matters.

I think this issue can be marked as resolved.

@ghost

ghost commented Apr 2, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 2, 2020