
fix: Create cluster_private_access security group rules when it should #981

Merged
merged 1 commit into from
Sep 2, 2020

Conversation

anthonydahanne
Contributor

@anthonydahanne anthonydahanne commented Aug 18, 2020

PR o'clock

Description

Add cluster_create_security_group condition to create the security group rule named cluster_private_access

Resolves #980
Resolves #942

Checklist

  • CI tests are passing

@anthonydahanne anthonydahanne changed the title Fix #980: Add cluster_create_security_group condition fix: Add cluster_create_security_group condition Aug 18, 2020
@anthonydahanne
Contributor Author

As an FYI, I tried it out in my environment and it worked fine!

@barryib barryib self-assigned this Aug 18, 2020
@barryib
Member

barryib commented Aug 18, 2020

Great. Can you please also address #942 in your PR? We should be able to create an EKS cluster with both private and public access.

@anthonydahanne
Contributor Author

@barryib @alexppg : I've added this commit 66a661a , let me know if that solves #942

@alexppg

alexppg commented Aug 18, 2020

Nice, it does seem to fix it. I may be able to try it tomorrow. I'll let you know.

@alexppg

alexppg commented Aug 19, 2020

It does seem to create it in the plan. I can't apply it to test it right now, but once it's merged to master I'll eventually do so and let you know if there's a problem. Thanks!

Member

@barryib barryib left a comment


LGTM.

Let's wait for @dpiddockcmp feedback.

@barryib barryib changed the title fix: Add cluster_create_security_group condition fix: Create cluster_private_access Security Group when it should Aug 19, 2020
@barryib barryib changed the title fix: Create cluster_private_access Security Group when it should fix: Create cluster_private_access Security Group when it should (only when cluster_create_security_group and cluster_endpoint_private_access are both true) Aug 19, 2020
@barryib barryib changed the title fix: Create cluster_private_access Security Group when it should (only when cluster_create_security_group and cluster_endpoint_private_access are both true) fix: Create cluster_private_access Security Group when it should Aug 19, 2020
@dpiddockcmp
Contributor

I hate this damn rule 😆 It keeps generating issues. Plus it breaks the principle of least access by default. We sometimes create a rule that allows access from all IPs that have internal access to the VPC.

I don't see how this relates to var.cluster_create_security_group, as it does not interact with that security group at all. This is a rule for the security group created by the EKS platform. Eventually we should rework the cluster security group so that its creation is totally optional, now that EKS creates one for us. That change would conflict with this rule.

Personally I would prefer a unique flag that explicitly enables this rule with a note that var.cluster_endpoint_private_access_cidrs probably needs modifying. Maybe we should add them to one of the examples?

@barryib
Member

barryib commented Aug 26, 2020

I hate this damn rule 😆 It keeps generating issues. Plus it breaks the principle of least access by default. We sometimes create a rule that allows access from all IPs that have internal access to the VPC.

Agreed ^^. The current behavior is totally messed up and doesn't make any sense.

I don't see how this relates to var.cluster_create_security_group as it does not interact with that security group at all. This is a rule for the security group created by the EKS platform.

In my mind, var.cluster_create_security_group was a flag to allow the creation of all security groups and their rules related to the EKS cluster (not only a single SG), as mentioned in the variable description: "Whether to create a security group for the cluster or attach the cluster to cluster_security_group_id". That's why I proposed that flag.

Eventually we should rework the cluster security group so that its creation is totally optional, now that EKS creates one for us. That change would conflict with this rule.

Agreed. I think it's time to rework the security group creation. This was partially discussed in #799 and #792.

Personally I would prefer a unique flag that explicitly enables this rule with a note that var.cluster_endpoint_private_access_cidrs probably needs modifying. Maybe we should add them to one of the examples?

Deciding this with a different flag sounds good to me. But instead of adding a new variable, we can:

  • change the default value of cluster_endpoint_private_access_cidrs to null
  • add the SG rule only when someone explicitly defines a list of private access CIDRs.
variable "cluster_endpoint_private_access_cidrs" {
  description = "List of CIDR blocks which can access the Amazon EKS private API server endpoint, when public access is disabled"
  type        = list(string)
  default     = null
}

resource "aws_security_group_rule" "cluster_private_access" {
  count       = var.create_eks && var.cluster_endpoint_private_access_cidrs != null && var.cluster_endpoint_private_access ? 1 : 0
  type        = "ingress"
  from_port   = 443
  to_port     = 443
  protocol    = "tcp"
  cidr_blocks = var.cluster_endpoint_private_access_cidrs

  security_group_id = aws_eks_cluster.this[0].vpc_config[0].cluster_security_group_id
}

This will force everyone who wants private access to explicitly set their allowed subnets.
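Under this proposal, opting in would just mean passing an explicit list to the module (a minimal caller sketch; the elided arguments and the CIDR value are illustrative):

```hcl
# Hypothetical caller: with the default changed to null, the
# cluster_private_access rule is only created because an explicit
# list of CIDRs is passed.
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  # ...other required arguments (cluster_name, subnets, vpc_id, etc.)...

  cluster_endpoint_private_access       = true
  cluster_endpoint_private_access_cidrs = ["10.0.0.0/16"]
}
```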

@dpiddockcmp
Contributor

OK, yes, that's a better solution than adding another flag. Although it does raise the risk of the good old ""count" value depends on resource attributes that cannot be determined until apply" error.
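For illustration, this is the failure mode being referred to (a standalone sketch with hypothetical resource names, not module code): when the count expression depends on an attribute that AWS only assigns at apply time, terraform plan aborts.

```hcl
resource "aws_vpc" "this" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_security_group_rule" "private_access" {
  # aws_vpc.this.default_security_group_id is only known after apply,
  # so Terraform cannot evaluate this count at plan time and fails with:
  #   "count" value depends on resource attributes that cannot be
  #   determined until apply
  count = aws_vpc.this.default_security_group_id != "" ? 1 : 0

  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = [aws_vpc.this.cidr_block]
  security_group_id = aws_vpc.this.default_security_group_id
}
```

Passing a plain variable (known at plan time) into the count expression, as in the snippets above, avoids this; the risk appears when callers feed the variable from computed resource attributes.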

@barryib
Member

barryib commented Aug 28, 2020

@anthonydahanne if the proposed fix works for you, can you please update your PR accordingly? Thanks again for your time.


@otodorov otodorov left a comment


I have the same issue, and this fix looks good to me.

@anthonydahanne
Contributor Author

Hello there!
Thanks @barryib and @dpiddockcmp for your inputs!
I'm considering the latest suggestion #981 (comment), and I'm wondering: why the null check on cluster_endpoint_private_access_cidrs rather than a boolean check on cluster_endpoint_private_access?
It's definitely not a big issue for me; even though I'm currently using the default cluster_endpoint_private_access_cidrs (not setting it), I could set it from now on (it would probably make my deployment more precise).

@barryib
Member

barryib commented Aug 31, 2020

@anthonydahanne

I'm considering the latest suggestion #981 (comment) and I'm wondering why the null check on cluster_endpoint_private_access_cidrs rather than a boolean check on cluster_endpoint_private_access ?

cluster_endpoint_private_access is a variable that enables the endpoint private access, but it doesn't mean you want to create an SG or an SG rule. Some people want to manage those themselves.

I think it's better to add another switch (var.cluster_create_endpoint_private_access_sg_rule) for this rule, as @dpiddockcmp suggested in #981 (comment), and change the default value of var.cluster_endpoint_private_access_cidrs to null.

variable "cluster_create_endpoint_private_access_sg_rule" {
  description = "Whether to create security group rules for the access to the Amazon EKS private API server endpoint."
  type        = bool
  default     = false
}

variable "cluster_endpoint_private_access_cidrs" {
  description = "List of CIDR blocks which can access the Amazon EKS private API server endpoint."
  type        = list(string)
  default     = null
}

resource "aws_security_group_rule" "cluster_private_access" {
  count       = var.create_eks && var.cluster_create_endpoint_private_access_sg_rule && var.cluster_endpoint_private_access ? 1 : 0
  type        = "ingress"
  from_port   = 443
  to_port     = 443
  protocol    = "tcp"
  cidr_blocks = var.cluster_endpoint_private_access_cidrs

  security_group_id = aws_eks_cluster.this[0].vpc_config[0].cluster_security_group_id
}
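With this flag-based design, a module consumer opting in would look roughly like the following (a sketch; the cluster name, version, and VPC module outputs are illustrative):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "private-example"
  cluster_version = "1.17"
  vpc_id          = module.vpc.vpc_id
  subnets         = module.vpc.private_subnets

  cluster_endpoint_private_access = true

  # Opt in to the rule explicitly, and restrict access to known CIDRs
  # instead of the old 0.0.0.0/0 default.
  cluster_create_endpoint_private_access_sg_rule = true
  cluster_endpoint_private_access_cidrs          = ["10.0.0.0/16"]
}
```

Leaving the flag at its default of false preserves the behavior for people who manage the rules themselves.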

@anthonydahanne
Contributor Author

@barryib - to me, either way is OK.
I believe it could be a breaking change for users that the new default for cluster_endpoint_private_access_cidrs is null instead of ["0.0.0.0/0"], but the PR is already tagged as breaking, so this may be the time to change that default.
Updating the PR - thanks!

@anthonydahanne
Contributor Author

anthonydahanne commented Sep 1, 2020

@barryib @dpiddockcmp @otodorov - ready for review! Well, the docs check failed because I introduced changes? What's the issue with this check?

@barryib barryib removed the on hold label Sep 1, 2020
Member

@barryib barryib left a comment


Thanks.

Checks are failing because you didn't rebuild the docs. Please follow https://github.com/terraform-aws-modules/terraform-aws-eks/#doc-generation to update the docs with your new variables.

@otodorov

otodorov commented Sep 1, 2020

I believe it could be a breaking change for users that the new default for cluster_endpoint_private_access_cidrs is null instead of ["0.0.0.0/0"], but the PR is already tagged as breaking, so this may be the time to change that default.

Actually, every time someone wants to upgrade something (no matter what), it is mandatory to check for breaking changes in the new version. Also, this will be mentioned in the changelog.

…cess_sg_rule condition

Fix terraform-aws-modules#942: Let public access clusters have private SG
Changing default cluster_endpoint_private_access_cidrs to null

Signed-off-by: Anthony Dahanne <anthony.dahanne@gmail.com>
@anthonydahanne
Contributor Author

anthonydahanne commented Sep 1, 2020

Now it should be good! Happy reviewing / merging! @otodorov @barryib @dpiddockcmp

@barryib barryib changed the title fix: Create cluster_private_access Security Group when it should fix: Create cluster_private_access security group rules when it should Sep 1, 2020
@anthonydahanne
Contributor Author

@barryib Hello! The PR seems to indicate there are changes requested from you; do you want me to update anything? Thanks!

@barryib barryib merged commit 1adbe82 into terraform-aws-modules:master Sep 2, 2020
@barryib
Member

barryib commented Sep 2, 2020

Just merged.

Thank you @anthonydahanne for your contribution.

@barryib barryib added this to the v13.0.0 milestone Sep 2, 2020
barryib pushed a commit to Polyconseil/terraform-aws-eks that referenced this pull request Oct 25, 2020
…uld (terraform-aws-modules#981)

BREAKING CHANGES: Default for `cluster_endpoint_private_access_cidrs` is now `null` instead of `["0.0.0.0/0"]`. This makes the variable required when `cluster_create_endpoint_private_access_sg_rule` is set to `true`, forcing everyone who wants private access to explicitly set their allowed subnets, for the sake of the principle of least access by default.
@github-actions

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 17, 2022