
Invalid for_each argument error when using both ec2_ssh_key and source_security_group_ids #37

Closed
xeon0320 opened this issue Oct 14, 2020 · 8 comments · Fixed by #84
Labels: bug 🐛 An issue with the system


@xeon0320 commented Oct 14, 2020

Describe the Bug

I receive an error 'Error: Invalid for_each argument' when using both ec2_ssh_key and source_security_group_ids in my module declaration.

My module declaration:

module "eks_node_group" {
  source                    = "git::https://github.com/cloudposse/terraform-aws-eks-node-group.git?ref=tags/0.13.0"
  ec2_ssh_key                = "mykeyname"
  source_security_group_ids   = [aws_security_group.management-nodes-sg.id]
  ... 
  }
}

Error message:

Error: Invalid for_each argument

  on .terraform/modules/eks_node_group/security-group.tf line 28, in resource "aws_security_group_rule" "remote_access_source_sgs_ssh":
  28:   for_each    = local.need_remote_access_sg ? toset(var.source_security_group_ids) : []

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
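
For reference, a rough sketch of the two-step -target approach the error message suggests, using the security group resource from the module declaration above (adapt the resource address to your configuration):

# First apply only the dependency, then run a normal apply.
terraform apply -target=aws_security_group.management-nodes-sg
terraform apply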

Expected Behavior

terraform plan and apply should complete and use the source_security_group_ids for remote access.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Run terraform plan; it produces the error message above.

Environment (please complete the following information):

Terraform v0.13.4 MacOS
terraform-aws-eks-node-group version 0.13.0

@xeon0320 added the bug 🐛 An issue with the system label Oct 14, 2020
@Nuru commented Oct 15, 2020

@osterman I am not sure we are going to fix this. This is a limitation of Terraform, and fixing it would probably require removing functionality.

@xeon0320 The security group IDs need to be available during the plan phase. The workaround is to create the security groups in a separate Terraform project and pass their IDs into this project via a variable.
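
A minimal sketch of that workaround (the variable name is illustrative; the security group is created in a separate project and its ID is supplied as a literal, e.g. via a .tfvars file, so it is known at plan time):

# Hypothetical variable holding the ID of a security group created elsewhere.
variable "management_nodes_sg_id" {
  type        = string
  description = "ID of the pre-existing management security group, created in a separate project"
}

module "eks_node_group" {
  source                     = "git::https://github.com/cloudposse/terraform-aws-eks-node-group.git?ref=tags/0.13.0"
  ec2_ssh_key                = "mykeyname"
  source_security_group_ids  = [var.management_nodes_sg_id]
  # ...
}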

@osterman pinned this issue Oct 15, 2020
@xeon0320 (Author) commented Oct 15, 2020

Thanks for the updates. I implemented the workaround yesterday and I was able to move forward.

@ib-ak commented Oct 29, 2020

Same behavior with existing_policies_for_eks_workers_role

Error: Invalid for_each argument

  on .terraform\modules\main-aws.eng_m4_large\iam.tf line 82, in resource "aws_iam_role_policy_attachment" "existing_policies_for_eks_workers_role":
  82:   for_each   = local.enabled ? toset(var.existing_workers_role_policy_arns) : []

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.

Is there another solution besides creating a new project altogether?

@kaosmonk commented Dec 1, 2020

Just to add my case:

Error: Invalid count argument

  on modules/eks/node-group/security-group.tf line 8, in resource "aws_security_group" "remote_access":
   8:   count       = local.need_remote_access_sg ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

It seems to be the same issue.

I am getting the above error when running terraform destroy (EKS assembled with https://github.com/cloudposse/terraform-aws-eks-cluster).

@nnsense commented Feb 15, 2021

As far as I can see, both source_security_group_ids and existing_policies_for_eks_workers_role refer to existing objects that are not part of the same deployment. As the error states, Terraform isn't aware of how many elements the list has before applying, and I can't see a solution without changing the list into a static, single entry.

Anyway, at least for the additional policies, the module outputs the role name, so I just attached my additional policy to the nodes' role:

resource "aws_iam_role_policy_attachment" "test-attach" {
  role            = module.eks_node_group.eks_node_group_role_name
  policy_arn = aws_iam_policy.s3_access_policy.arn
}

@marcelloromani commented

"terraform isn't aware of the number before applying"

I don't understand this: we're passing a list of ARNs (of existing policies) to the parameter. What cannot be checked?

@Nuru commented Jun 30, 2021

Following up, I think @kaosmonk's error was due to a Terraform bug. Terraform 0.13 had a lot of bugs around destroying resources. This should work now with current Terraform.

In general, this module expects most of its inputs to be available during the plan phase of Terraform. Exactly what that means varies a bit with each Terraform version, but the value certainly cannot be derived from a resource or module that creates it during the apply phase. Whether the outputs of data sources are available during the plan phase depends, I think, on the Terraform version, the kind of data source, and the inputs to the data source.
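
As an illustration of the difference (the managed-policy ARN below is just an example; the aws_iam_policy reference mirrors the snippet earlier in this thread):

module "eks_node_group" {
  # ...
  # Known at plan time: a hard-coded list (or a variable set from a literal) works.
  existing_workers_role_policy_arns = ["arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"]

  # Not known until apply: an ARN computed from a resource created in the same
  # configuration triggers "Invalid for_each argument" during plan.
  # existing_workers_role_policy_arns = [aws_iam_policy.s3_access_policy.arn]
}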

@marcelloromani How are you constructing the list of ARNs you are passing in?

@marcelloromani commented Jul 6, 2021

@Nuru

for_each   = local.enabled ? toset(var.existing_workers_role_policy_arns) : []

As far as I can tell, we're passing the module a list of hard-coded policy ARNs.

@Nuru mentioned this issue Aug 29, 2021
@Nuru closed this as completed in #84 Aug 30, 2021
@nitrocode unpinned this issue Nov 11, 2021