Add new provider configuration to fix k8s auth issues #98
Merged
Conversation
davidcheung approved these changes on Sep 17, 2020
sshi100 approved these changes on Sep 17, 2020
bmonkman added a commit that referenced this pull request on Oct 10, 2020
* When creating an EKS cluster, the user who does the creation is assigned special access to be able to connect to the cluster to do the initial setup. This can cause issues with terraform where if another user tries to run the terraform they may not have access to the cluster since they are not the initial user. We were able to work around this in the kubernetes terraform by adding an `exec` block which defined a local command to run to get a token to access the cluster (`aws eks get-token`). This was also not ideal because it depends a lot more on the running user's local k8s setup. The fix: We determined that the user-binding-on-cluster-create behaviour also applies to Roles. This commit has code which adds a role with access to create an EKS cluster, and then uses an AWS provider with an alias to assume that role only while running the EKS module. Unfortunately we had to move the creation of the new role into the bootstrap because of an order-of-operations issue with trying to assume a role in a provider that was created in the same tf run.
* Referred to the wrong var for cluster name
* Added && to chain deletion in makefile, hopefully we can change this soon so it's not necessary
* Fixed reference to allowed_account_ids on k8s side, ran tf fmt
* fixed a typo...
* remove unnecessary file
* remove json decode for vpn key
* Fixed missing region in pre-k8s make target
* Changed vpn namespace references to ensure dependencies
* Make sure there is an aws provider at the root of each environment

Co-authored-by: Steven Shi <sshi100@hotmail.com>
When creating an EKS cluster, the user who creates it is granted special access so that they can connect to the cluster and do the initial setup.
This can cause issues with Terraform: if another user later tries to run the Terraform, they may not have access to the cluster, since they are not the initial user.
We were able to work around this in the Kubernetes Terraform by adding an `exec` block, which defines a local command to run to get a token for the cluster (`aws eks get-token`). This was also not ideal, because it depends heavily on the running user's local k8s setup.
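For reference, a minimal sketch of that workaround, assuming a `kubernetes` provider wired to an EKS data source. The data source and variable names here are illustrative, not the exact ones in this repo, and the `exec` `api_version` varies with provider and CLI versions:

```hcl
# Sketch of the earlier exec-based workaround (illustrative names).
# The kubernetes provider shells out to the AWS CLI for a token on every
# run, which ties authentication to the running user's local AWS/k8s setup.
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  exec {
    # api_version depends on the provider and kubectl/CLI versions in use
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}
```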
The fix:
We determined that the user-binding-on-cluster-create behaviour also applies to Roles.
This commit adds a role with access to create an EKS cluster, and then uses an aliased AWS provider to assume that role only while running the EKS module (see the sketch below).
Unfortunately, we had to move the creation of the new role into the bootstrap because of an order-of-operations issue: a provider cannot assume a role that is created in the same Terraform run.
(closes #95)
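A hedged sketch of that approach; the role, variable, and module names below are illustrative rather than the exact ones added in this PR. The role lives in the bootstrap (created in an earlier Terraform run), and only the EKS module uses the aliased provider that assumes it:

```hcl
# --- bootstrap (separate, earlier terraform run) ---
# Shared role that will create (and therefore "own") the EKS cluster.
# Policies granting the actual EKS/VPC permissions would be attached separately.
resource "aws_iam_role" "eks_creator" {
  name = "eks-cluster-creator"

  # Let principals in this account assume the role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::${var.account_id}:root" }
    }]
  })
}

# --- environment (the k8s terraform) ---
# Default provider used by everything else in the environment.
provider "aws" {
  region = var.region
}

# Aliased provider that assumes the shared creator role, so the cluster is
# always created and managed as that role instead of whichever user happens
# to run terraform. The role must already exist (hence the bootstrap),
# because a provider cannot assume a role created in the same run.
provider "aws" {
  alias  = "eks_creator"
  region = var.region

  assume_role {
    role_arn = var.eks_creator_role_arn
  }
}

# Only the EKS module is pointed at the aliased provider.
module "eks" {
  source = "./modules/eks"

  providers = {
    aws = aws.eks_creator
  }

  cluster_name = var.cluster_name
}
```

With this layout, any operator or CI job that can assume the shared role inherits the initial cluster access, rather than that access being tied to the credentials of whichever user first created the cluster.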