
Hybrid neg with ip_address obtained at runtime in net-ilb-l7 module #1055

Closed
apichick opened this issue Dec 18, 2022 · 9 comments · Fixed by #1061

@apichick
Collaborator

In the net-ilb-l7 module I could previously create a hybrid NEG with an ip_address obtained at runtime, but now I cannot. I get this error:

Error: Invalid for_each argument

  on ../../../../../../modules/net-ilb-l7/main.tf line 154, in resource "google_compute_network_endpoint" "default":
 154:   for_each = local.neg_endpoints
    ├────────────────
    │ local.neg_endpoints will be known only after apply

The "for_each" map includes keys derived from resource attributes that cannot
be determined until apply, and so Terraform cannot determine the full set of
keys that will identify the instances of this resource.

When working with unknown values in for_each, it's better to define the map
keys statically in your configuration and place apply-time results only in
the map values.

Alternatively, you could use the -target planning option to first apply only
the resources that the for_each value depends on, and then apply a second
time to fully converge.
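The pattern Terraform is complaining about, and the static-key workaround the message suggests, can be sketched like this (a hypothetical minimal example, not the module's actual code; variable names are made up):

```hcl
# Fails when e.ip_address comes from another resource and is
# therefore unknown at plan time: the for_each keys themselves
# cannot be computed.
resource "google_compute_network_endpoint" "broken" {
  for_each = {
    for e in var.endpoint_list : "${e.ip_address}-${e.port}" => e
  }
  ip_address = each.value.ip_address
  port       = each.value.port
  # network_endpoint_group, instance, etc. omitted
}

# Works: keys are static strings defined in the configuration,
# and the apply-time value appears only in the map values.
variable "endpoint_map" {
  type = map(object({
    ip_address = string
    port       = number
  }))
}

resource "google_compute_network_endpoint" "ok" {
  for_each   = var.endpoint_map # keys like "my-endpoint" are static
  ip_address = each.value.ip_address
  port       = each.value.port
  # network_endpoint_group, instance, etc. omitted
}
```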
@ludoo ludoo self-assigned this Dec 18, 2022
@ludoo
Collaborator

ludoo commented Dec 18, 2022

Hey Miren, thanks for flagging this. If we want to be able to pass dynamic values to endpoints, their variable type needs to change from list to map. It's a bit unfortunate, as the keys would be there just to avoid this error, but I understand the use case is pretty common.

@ludoo
Collaborator

ludoo commented Dec 19, 2022

Miren, I just tried this and it runs without issues:

resource "google_compute_address" "test" {
  name         = "neg-test"
  subnetwork   = var.subnet.self_link
  address_type = "INTERNAL"
  address      = "10.0.0.10"
  region       = "europe-west1"
}

module "ilb-l7" {
  source     = "./fabric/modules/net-ilb-l7"
  name       = "ilb-test"
  project_id = var.project_id
  region     = "europe-west1"
  backend_service_configs = {
    default = {
      backends = [{
        balancing_mode = "RATE"
        group          = "my-neg"
        max_rate       = { per_endpoint = 1 }
      }]
    }
  }
  neg_configs = {
    my-neg = {
      gce = {
        zone = "europe-west1-b"
        endpoints = [{
          instance   = "test-1"
          ip_address = google_compute_address.test.address
          # ip_address = "10.0.0.10"
          port = 80
        }]
      }
    }
  }
  vpc_config = {
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }
}

Can you share the code that is failing for you?

@apichick
Collaborator Author

apichick commented Dec 19, 2022

module "apigee_ilb_l7" {
  source     = "../../../../modules/net-ilb-l7"
  name       = "apigee-ilb"
  project_id = module.apigee_project.project_id
  region     = var.region
  backend_service_configs = {
    default = {
      backends = [{
        balancing_mode = "RATE"
        group          = "my-neg"
        max_rate       = { per_endpoint = 1 }
      }]
    }
  }
  neg_configs = {
    my-neg = {
      hybrid = {
        zone = var.zone
        endpoints = [{
          ip_address = module.onprem_ilb_l7.address
          port       = 80
        }]
      }
    }
  }
  health_check_configs = {
    default = {
      http = {
        port = 80
      }
    }
  }
  vpc_config = {
    network    = module.apigee_vpc.self_link
    subnetwork = module.apigee_vpc.subnet_self_links["${var.region}/subnet"]
  }
  depends_on = [
    module.apigee_vpc.subnets_proxy_only
  ]
}

module "onprem_ilb_l7" {
  source     = "../../../../modules/net-ilb-l7"
  name       = "ilb"
  project_id = module.onprem_project.project_id
  region     = var.region
  backend_service_configs = {
    default = {
      port_name = "http"
      backends = [{
        group     = module.mig.group_manager.instance_group
      }]
    }
  }
  health_check_configs = {
    default = {
      check_interval_sec = 1
      enable_logging = true
      healthy_threshold = 1
      http = {
        port_name = "http"
        port_specification = "USE_NAMED_PORT"
        request_path = "/"
      }
      timeout_sec = 1
      unhealthy_threshold = 1
    }
  }
  vpc_config = {
    network    = module.onprem_vpc.self_link
    subnetwork = module.onprem_vpc.subnet_self_links["${var.region}/subnet"]
  }
  depends_on = [
    module.onprem_vpc.subnets_proxy_only
  ]
}

Here I was emulating on-prem by pointing my hybrid NEG to an ILB in another project. In the previous version of the module this was not failing.

@ludoo
Collaborator

ludoo commented Dec 19, 2022

So your problem is consuming the forwarding rule address as an endpoint address in a different module? If so, I would reserve the address outside and pass it in.

@apichick
Collaborator Author

Thanks, I'll try that.

@ludoo ludoo closed this as completed Dec 20, 2022
@apichick apichick reopened this Dec 21, 2022
@ludoo
Collaborator

ludoo commented Dec 21, 2022

Miren, the previous version of the module allowed you to create an address for the forwarding rule. The new version does not do that, since we have an "address" module for that. I think the difference in behaviour comes from that. Try reserving the address outside of the module and passing it in.

@apichick
Collaborator Author

apichick commented Dec 21, 2022

I got the same error with

module "apigee_ilb_l7" {
  source     = "../../../../modules/net-ilb-l7"
  name       = "apigee-ilb"
  project_id = module.apigee_project.project_id
  region     = var.region
  backend_service_configs = {
    default = {
      backends = [{
        balancing_mode = "RATE"
        group          = "my-neg"
        max_rate       = { per_endpoint = 1 }
      }]
    }
  }
  neg_configs = {
    my-neg = {
      hybrid = {
        zone = var.zone
        endpoints = [{
          ip_address = google_compute_address.onprem_ilb_l7_ip_address.address
          port       = 80
        }]
      }
    }
  }
  health_check_configs = {
    default = {
      http = {
        port = 80
      }
    }
  }
  vpc_config = {
    network    = module.apigee_vpc.self_link
    subnetwork = module.apigee_vpc.subnet_self_links["${var.region}/subnet"]
  }
  depends_on = [
    module.apigee_vpc.subnets_proxy_only
  ]
}

resource "google_compute_address" "onprem_ilb_l7_ip_address" {
  name         = "onprem-ilb-l7-ip-address"
  subnetwork   = module.onprem_vpc.subnet_self_links["${var.region}/subnet"]
  address_type = "INTERNAL"
  region       = var.region
}

module "onprem_ilb_l7" {
  source     = "../../../../modules/net-ilb-l7"
  name       = "ilb"
  project_id = module.onprem_project.project_id
  region     = var.region
  address    = google_compute_address.onprem_ilb_l7_ip_address.address
  backend_service_configs = {
    default = {
      port_name = "http"
      backends = [{
        group = module.mig.group_manager.instance_group
      }]
    }
  }
  health_check_configs = {
    default = {
      check_interval_sec = 1
      enable_logging     = true
      healthy_threshold  = 1
      http = {
        port_name          = "http"
        port_specification = "USE_NAMED_PORT"
        request_path       = "/"
      }
      timeout_sec         = 1
      unhealthy_threshold = 1
    }
  }
  vpc_config = {
    network    = module.onprem_vpc.self_link
    subnetwork = module.onprem_vpc.subnet_self_links["${var.region}/subnet"]
  }
  depends_on = [
    module.onprem_vpc.subnets_proxy_only
  ]
}

@ludoo
Collaborator

ludoo commented Dec 21, 2022

Strange, as I was using the same thing in the example above, just with a single LB. Let me try and reproduce, it will take me a bit as I have a pretty full day.

@ludoo
Collaborator

ludoo commented Dec 21, 2022

Ok, I know what the problem is. My example above used a VM NEG; this is a hybrid NEG, and for those we use the IP in the keys of the for_each loop. We need to switch that type to a map. It's a bit of a pity, as the keys will not be used for the actual resources, but it's the only way of doing it.
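A rough sketch of what that map-based interface could look like on the calling side (hypothetical key name and shape, not necessarily the actual change merged in #1061): the user-supplied static keys drive the for_each, while the runtime IP lives only in the values.

```hcl
# Before (list): for_each keys were derived from ip_address, which
# may be unknown at plan time for a hybrid NEG.
# endpoints = [{ ip_address = module.onprem_ilb_l7.address, port = 80 }]

# After (map): static keys chosen by the user, so Terraform can
# determine the full set of resource instances at plan time.
neg_configs = {
  my-neg = {
    hybrid = {
      zone = var.zone
      endpoints = {
        e-0 = {
          ip_address = module.onprem_ilb_l7.address
          port       = 80
        }
      }
    }
  }
}
```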

@ludoo ludoo closed this as completed in 7c95b7c Dec 21, 2022
ludoo added a commit that referenced this issue Dec 21, 2022