
core: No interpolation for cross-provider dependencies #12393

Closed · radeksimko opened this issue Mar 2, 2017 · 5 comments
radeksimko (Member) commented Mar 2, 2017

There are a number of reasonable use cases like the one below (e.g. GKE & K8S), and I believe they are all affected by the same issue.

Terraform Version

Terraform v0.9.0-dev (a2d78b62aa97aab6d5f71091754dc983bc68d169)

Terraform Configuration Files

provider "aws" {
  region = "us-east-1"
}

data "aws_availability_zones" "available" {}

resource "random_id" "token" {
  byte_length = 8
}

resource "aws_vpc" "example" {
  cidr_block = "10.8.0.0/23"
  enable_dns_support = true
  enable_dns_hostnames = true
}

resource "aws_internet_gateway" "example" {
  vpc_id = "${aws_vpc.example.id}"
}

resource "aws_route_table" "example" {
  vpc_id = "${aws_vpc.example.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.example.id}"
  }
}

resource "aws_route_table_association" "example" {
  count = 2
  subnet_id      = "${aws_subnet.example.*.id[count.index]}"
  route_table_id = "${aws_route_table.example.id}"
}

resource "aws_subnet" "example" {
  count                   = 2
  availability_zone       = "${data.aws_availability_zones.available.names[count.index]}"
  vpc_id                  = "${aws_vpc.example.id}"
  cidr_block              = "10.8.${count.index}.0/24"
  map_public_ip_on_launch = true
}

resource "aws_security_group" "example" {
  vpc_id = "${aws_vpc.example.id}"

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_db_subnet_group" "default" {
  name = "mydb-${random_id.token.hex}"
  subnet_ids = ["${aws_subnet.example.*.id}"]
}

resource "aws_db_instance" "default" {
  allocated_storage    = 10
  engine               = "mysql"
  engine_version       = "5.7.11"
  instance_class       = "db.t2.micro"
  name                 = "mydb_${random_id.token.hex}"
  username             = "FooFooFoo"
  password             = "barBarBar"
  db_subnet_group_name = "${aws_db_subnet_group.default.name}"
  vpc_security_group_ids = ["${aws_security_group.example.id}"]
  skip_final_snapshot  = true
  parameter_group_name = "default.mysql5.7"
  publicly_accessible  = true
}

provider "mysql" {
  endpoint = "${aws_db_instance.default.endpoint}"
  username = "${aws_db_instance.default.username}"
  password = "${aws_db_instance.default.password}"
}

resource "mysql_database" "app" {
  name = "my_awesome_app"
}

Expected Behavior

Create the RDS db & then let MySQL provider connect to it and create the database.

Actual Behavior

terraform apply
Error running plan: 1 error(s) occurred:

* provider.mysql: dial tcp: missing address
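(Not from this thread, but for context: a workaround commonly used for this class of problem at the time was a two-pass targeted apply, so that the RDS instance's attributes exist in state before the mysql provider is configured. The resource address below matches the configuration above; treat this as a sketch, not an endorsed fix.)

```shell
# First pass: create only the RDS instance (and its dependencies),
# so aws_db_instance.default.endpoint is populated in state.
terraform apply -target=aws_db_instance.default

# Second pass: full apply; the mysql provider can now interpolate
# the endpoint/username/password from state and create the database.
terraform apply
```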

Related PRs

apparentlymart (Member) commented:

Hmm, I think this must be a 0.9 regression, because I have plenty of configs using this sort of setup and have only ever seen it fail as you showed in your first result, never get stuck as you show in your second.

The error in your first output is the class of problem that Partial Apply is intended to address, but it depends on things working when the resources are already in state.

@radeksimko radeksimko changed the title core: Cross-provider dependency causes dag/walk to get in a loop core: No interpolation for cross-provider dependencies Mar 3, 2017
radeksimko (Member, Author) commented Mar 3, 2017

@apparentlymart Actually you're right. I somehow missed a popup from my local firewall yesterday asking me to allow egress to port 3306 - that's what caused the hanging in dag/walk. I cannot reproduce it anymore today.

Modified the original issue.

mitchellh (Contributor) commented Mar 9, 2017

The graph looks good; you're exactly right that this is really just #8521.

It's a bit late in the game, but I'm going to take a look at this and try to judge the difficulty of getting a solution in, since this would be very helpful for K8S.

Graph for plan (looks great): [graph image]

apparentlymart (Member) commented:

Let's close this to consolidate to #4149, since I'm pretty sure this issue is just one of the motivators for that proposal.

ghost commented Apr 14, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 14, 2020