Conversation

@bmonkman (Contributor)

When an EKS cluster is created, the user who creates it is granted special access so they can connect to the cluster and perform the initial setup.
This can cause issues with Terraform: if another user tries to run the Terraform, they may not have access to the cluster, since they are not the initial user.
We were able to work around this in the Kubernetes Terraform by adding an `exec` block that runs a local command (`aws eks get-token`) to fetch a token for accessing the cluster.
This was not ideal either, because it depends much more heavily on the running user's local k8s setup.
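
For reference, the workaround looked roughly like this (a minimal sketch, not the exact code from this repo; the data source and variable names are assumptions):

```hcl
# Hypothetical sketch of the exec-based workaround. The kubernetes provider
# shells out to the AWS CLI on every run to fetch a fresh token, which is why
# it depends so heavily on the running user's local CLI/profile setup.
data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}
```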

The fix:
We determined that the user-binding-on-cluster-create behaviour also applies to roles.
This commit adds a role with access to create an EKS cluster, then uses an aliased AWS provider to assume that role only while running the EKS module.
Unfortunately, we had to move the creation of the new role into the bootstrap because of an order-of-operations issue: a provider cannot assume a role that is created in the same Terraform run.
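
Roughly, the fix looks like the sketch below (again illustrative; the role name, variable names, and module source are assumptions, not the exact code in this commit):

```hcl
# Hypothetical sketch of the fix. The role is created during bootstrap, so it
# already exists before this run tries to assume it (creating the role and
# assuming it in the same run is the order-of-operations problem noted above).
provider "aws" {
  alias  = "eks_creator"
  region = var.region

  assume_role {
    role_arn = "arn:aws:iam::${var.account_id}:role/eks-cluster-creator"
  }
}

module "eks" {
  source = "./eks"

  # Only the EKS module uses the assumed role, so the cluster's initial
  # admin binding attaches to the role rather than to whoever ran Terraform.
  providers = {
    aws = aws.eks_creator
  }
}
```

Any user who can assume the role gets the same cluster access, so it no longer matters which individual user ran the original apply.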

(closes #95)

@bmonkman added the bug label on Sep 17, 2020
@bmonkman merged commit cdd74f3 into main on Sep 17, 2020
@bmonkman deleted the fix-k8s-provider-auth branch on Sep 17, 2020 at 22:30
bmonkman added a commit that referenced this pull request on Oct 10, 2020
* Referred to the wrong var for cluster name

* Added `&&` to chain deletion in the Makefile; hopefully we can change this soon so it's not necessary

* Fixed reference to allowed_account_ids on k8s side, ran tf fmt

* fixed a typo...

* remove unnecessary file

* remove json decode for vpn key

* Fixed missing region in pre-k8s make target

* Changed vpn namespace references to ensure dependencies

* Make sure there is an aws provider at the root of each environment

Co-authored-by: Steven Shi <sshi100@hotmail.com>

Closes: Fix issues with k8s auth when multiple users run the terraform (#95)