
(eks): need to override kubectl handler lambda IAM role #12107

Closed
1 of 2 tasks
aka-toxa opened this issue Dec 16, 2020 · 20 comments · Fixed by #14689
Labels
@aws-cdk/aws-eks (Related to Amazon Elastic Kubernetes Service) · effort/medium (Medium work item – several days of effort) · feature/enhancement (A new API to make things easier or more intuitive; a catch-all for general feature requests) · feature-request (A feature should be added or improved) · p1

Comments

aka-toxa (Contributor) commented Dec 16, 2020

We've noticed that when we want to create a service account through CDK with an IAM role attached, CDK requires us to specify an IAM role that has access to the Kubernetes API:

[screenshot]

We also noticed that the lambda that runs kubectl apply assumes that role in order to execute the kubectl apply command, which is fine.

The one thing we did notice is that every time this kubectl handler lambda is created, it gets a brand-new role with an inline assume-role policy.

[screenshots]

That could be fine as well, but we find it not quite secure, because in this case we need to add the account root to the trusted entities of kubectlRole so that all of those lambdas can assume it.

For dev/QA environments that is fine, but in a production account we really don't want a Kubernetes admin role that can be assumed by anyone who has an assume-role policy; we want to tighten that kubectl role via its trusted entities somehow.

Use Case

We use IRSA for imported clusters on a service-per-service basis. Each service has its own CDK stack with a Cluster.addServiceAccount() call, so each service creates its own kubectl handler lambda with its own randomly named lambda role.

Our kubectl role should be assumable by only one entity, but in an environment with a huge number of lambdas (one per service) it is not possible to list all of those lambda roles in the trusted entities of the kubectl role.

Proposed Solution

I see two possible solutions here:

  1. Add an optional parameter for the kubectl handler lambda, so each CDK stack can load that lambda and reuse the same lambda across the different stacks that create IRSA. That way we create the lambda once with its role and add that role to the trusted entities of kubectlRole.
  2. Add an optional parameter to .addServiceAccount() to pass an existing role ARN for the kubectl handler lambda, so we can create that role once, put it in the trusted entities of the kubectl role, and have all kubectl handlers run under that role (sketched below).
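
For illustration, a minimal TypeScript sketch of what option 2 could look like from inside a stack. The kubectlHandlerRole option, the construct IDs and the role ARN are all hypothetical (this option does not exist in the current API), and `cluster` is assumed to be an imported eks.ICluster defined elsewhere:

import * as iam from '@aws-cdk/aws-iam';

// The kubectl-handler execution role, created once and added to the trusted
// entities of the kubectl (masters) role. The ARN below is an example only.
const handlerRole = iam.Role.fromRoleArn(this, 'SharedKubectlHandlerRole',
  'arn:aws:iam::123456789012:role/shared-kubectl-handler-role');

cluster.addServiceAccount('MyServiceAccount', {
  name: 'my-service',
  namespace: 'default',
  // Hypothetical option from item 2 above; not part of the existing API.
  kubectlHandlerRole: handlerRole,
});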

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

@aka-toxa aka-toxa added feature-request A feature should be added or improved. needs-triage This issue or PR still needs to be triaged. labels Dec 16, 2020
@github-actions github-actions bot added the @aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service label Dec 16, 2020
iliapolo (Contributor) commented:

@aka-toxa You mention:

For dev/QA environments that is fine, but in a production account we really don't want a Kubernetes admin role that can be assumed by anyone who has an assume-role policy; we want to tighten that kubectl role via its trusted entities somehow.

I'm wondering what the specific concern is here. Is there a scenario where undesirable roles would have assume-role permissions on the kubectl role you pass to the cluster?

@iliapolo iliapolo added the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Dec 17, 2020
aka-toxa (Contributor, Author) commented:

We just don't want any chance that someone, accidentally or not, adds an assume-role policy to something that shouldn't have it, and we want to use the "trusted entities" field of our kubectl role properly.

If we only put the account root there, that would mean that everyone with permission to add inline policies to some role/user would have access to the k8s masters role. With proper trusted entities (for example, restricting the kubectl role so it can be assumed only by a special CDK role) we can ensure that no one gets access to k8s, even accidentally, just by adding an assume-role policy.
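
To sketch the difference being described here in CDK terms (construct IDs and the ARN are assumed, not taken from this thread): trusting the account root delegates assume permissions to identity policies, while trusting a single role pins them down.

import * as iam from '@aws-cdk/aws-iam';

// Loose: trusting the account root means any principal in the account that is
// granted sts:AssumeRole on this role can become cluster admin.
const looseKubectlRole = new iam.Role(this, 'LooseKubectlRole', {
  assumedBy: new iam.AccountRootPrincipal(),
});

// Tight: trust only one specific entity, e.g. a dedicated CDK deployment role
// (example ARN), so an accidental assume-role policy elsewhere is not enough.
const tightKubectlRole = new iam.Role(this, 'TightKubectlRole', {
  assumedBy: new iam.ArnPrincipal('arn:aws:iam::123456789012:role/cdk-deploy-role'),
});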

iliapolo (Contributor) commented:

@aka-toxa Thanks, I understand the concern now.

Could you elaborate on proposal number 2 you mentioned?

I was thinking about doing something similar to proposal 1, but instead of exposing the entire lambda handler, just exposing an optional role that would be used as the lambda execution role. You would then create that role once, and use its name as the trusted entity of the kubectl role.

So something like:

const cluster = eks.Cluster.fromClusterAttributes(this, 'Cluster', {
  ...,
  kubectlRole: <role-that-is-mapped-to-k8s-masters>,
  kubectlHandlerRole: <role-that-can-assume-kubectlRole>
});

Would that work for you? This also makes me think that we could possibly just use kubectlRole as the lambda execution role, so the lambda won't even have to assume another role to operate the cluster.
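
A minimal sketch of the "create that role once" part described above (construct IDs are assumed). Both roles would then be passed to fromClusterAttributes as in the snippet above, keeping in mind that kubectlHandlerRole is only a proposed option at this point:

import * as iam from '@aws-cdk/aws-iam';

// The handler execution role is created once and must be assumable by Lambda.
const kubectlHandlerRole = new iam.Role(this, 'KubectlHandlerRole', {
  assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
});

// The kubectl (masters) role then trusts only that handler role, instead of
// the whole account.
const kubectlRole = new iam.Role(this, 'KubectlRole', {
  assumedBy: kubectlHandlerRole,
});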

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Dec 18, 2020
aka-toxa (Contributor, Author) commented:

Yeah, I also think we can just use kubectlRole and not spam the account with many roles.
Sure, I will proceed with that a bit later.
Also, I think this change might depend on PR #12131.

iliapolo (Contributor) commented:

@aka-toxa Let me know when you start working on this so we can finalize the approach, because I'm still not sure about it.

@iliapolo iliapolo added effort/medium Medium work item – several days of effort p1 and removed needs-triage This issue or PR still needs to be triaged. labels Dec 18, 2020
rameshmimit commented Mar 2, 2021

I have faced the exact same challenge that @aka-toxa mentioned. If we add root to the trust policy, then that role can be assumed by anyone in that account, which is not ideal, especially for a prod environment. As a workaround I created a restricted IAM role and overrode the lambda execution role using raw overrides (https://docs.aws.amazon.com/cdk/latest/guide/cfn_layer.html), since the handler lambda function is not exposed:

// Override the handler lambda's role with a custom role, instead of using the
// dynamically generated role, to have more control over this role's IAM permissions.
overrideLambdaExecutionRole() {
  // Locate the kubectl handler function in the construct tree and override its Role property.
  const handlerFunction = this.node.findAll().filter(
    d => d.node.id.includes('Handler')
  )[0].node.defaultChild as lambda.CfnFunction;
  handlerFunction.addPropertyOverride(
    'Role',
    this.myCustomRoleArn,
  );
}
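
For context, the restricted role referenced by this.myCustomRoleArn could be defined roughly as follows. This is an assumed setup, not code from the comment above, the masters-role ARN is an example, and the real handler may need further permissions (e.g. eks:DescribeCluster, VPC access):

import * as iam from '@aws-cdk/aws-iam';

// A Lambda execution role with basic logging permissions that is only allowed
// to assume the kubectl (masters) role.
const customHandlerRole = new iam.Role(this, 'CustomKubectlHandlerRole', {
  assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
  managedPolicies: [
    iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AWSLambdaBasicExecutionRole'),
  ],
});
customHandlerRole.addToPolicy(new iam.PolicyStatement({
  actions: ['sts:AssumeRole'],
  resources: ['arn:aws:iam::123456789012:role/kubectl-masters-role'], // example ARN
}));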

But I think there are multiple options to solve this, depending on the user's requirements:

  1. Use the same role for the handler lambda as the kubectl role, as @iliapolo suggested.
  2. Construct the lambda name from a predictable pattern so it can be added to the trust policy. However, this has flaws: if you have many lambdas in your account, the pattern can match multiple lambdas.
  3. Expose the handler lambda construct so that its properties, like the role, can be overridden.

aka-toxa (Contributor, Author) commented Mar 2, 2021

Hi @iliapolo, are you still happy with the second approach?
I'm going to open a PR in a couple of days.

iliapolo (Contributor) commented Mar 3, 2021

@aka-toxa Just to make sure we are aligned: the plan is basically to use kubectlRole as the execution role of the KubectlProvider handler, which effectively means getting rid of the --role-arn argument:

cmd = [ 'aws', 'eks', 'update-kubeconfig',
  '--role-arn', role_arn,
  '--name', cluster_name,
  '--kubeconfig', kubeconfig
]

And making sure everything works the same, right?

My only concern is breaking changes: would this change require any changes to the trusted entities of the kubectlRole, now that it has to be assumable by Lambda?

aka-toxa (Contributor, Author) commented Mar 3, 2021

Yes, I'm going down this path.

I believe kubectlRole had to be assumable by Lambda initially, so no changes are needed.

iliapolo (Contributor) commented Mar 3, 2021

I believe kubectlRole had to be assumable by Lambda initially, so no changes are needed.

How come? We never used that role as a lambda execution role before. My guess is that requiring the account root, like we do now, would also cover the lambda's ability to assume it, but this needs to be verified.

aka-toxa (Contributor, Author) commented Mar 3, 2021

Let me double-check my AWS account config.

aka-toxa (Contributor, Author) commented Mar 3, 2021

Right, I didn't understand your question, sorry.
Currently, kubectlRole has to be assumable by the account root to work with IRSA.
So with these changes nothing will be broken, but the customer will want to tighten the trusted entities to the Lambda service (this is not required, only if they care about their security :) ).
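
A sketch of what that trust policy could look like in CDK after the change (illustrative only, with assumed construct IDs; a stricter setup could drop the account root entirely):

import * as iam from '@aws-cdk/aws-iam';

// Today the kubectl role typically trusts the account root; once it also becomes
// the handler lambda's execution role, it must additionally be assumable by the
// Lambda service.
const kubectlRole = new iam.Role(this, 'KubectlRole', {
  assumedBy: new iam.CompositePrincipal(
    new iam.AccountRootPrincipal(),
    new iam.ServicePrincipal('lambda.amazonaws.com'),
  ),
});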

iliapolo (Contributor) commented Mar 3, 2021

Sounds good. Let's try 👍

aka-toxa (Contributor, Author) commented:
Hi @iliapolo, after an internal round of review of my PR and some internal discussion, we found that this solution is not great after all.
Ideally, since this is an IAM role with admin permissions on the cluster, we want to make sure that only one particular lambda can assume it.
With this approach we would be forced to add lambda.amazonaws.com to the trusted entities and thereby allow any lambda to get admin access to the cluster.

I think we should revisit this and think about how we can have a single lambda shared by all the stacks that use the kubectl handler.

aka-toxa (Contributor, Author) commented:

We have multiple separate stacks that use the kubectl handler, and each of them creates a new lambda.

aka-toxa (Contributor, Author) commented:

I'm thinking of a separate function to deploy the kubectl handler lambda, then getting that lambda by its ARN and passing it to the Cluster configuration.

So we could create the lambda once, then create the kubectl role with that lambda in its trusted entities, and then use that lambda and role wherever we want (see the sketch below).

What do you think?
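
A rough sketch of that flow, with assumed names and ARNs; the exact mechanism for passing an existing handler to an imported cluster is what eventually lands in #14689, so no concrete cluster property is shown here:

import * as iam from '@aws-cdk/aws-iam';
import * as lambda from '@aws-cdk/aws-lambda';

// 1. The kubectl handler lambda is deployed once, in its own stack, and looked
//    up by ARN from the stacks that need it (example ARN).
const sharedHandler = lambda.Function.fromFunctionArn(this, 'SharedKubectlHandler',
  'arn:aws:lambda:us-east-1:123456789012:function:shared-kubectl-handler');

// 2. The kubectl (masters) role trusts only that handler's execution role
//    (example ARN), instead of the account root.
const kubectlRole = new iam.Role(this, 'KubectlRole', {
  assumedBy: new iam.ArnPrincipal('arn:aws:iam::123456789012:role/shared-kubectl-handler-role'),
});

// 3. Both are then passed to every stack that imports the cluster, so no
//    per-stack handler lambdas (or roles) get created.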

iliapolo (Contributor) commented:

@aka-toxa

we would be forced to add lambda.amazonaws.com to the trusted entities and thereby allow any lambda to get admin access to the cluster

Agreed, that's not good.

I'm thinking of a separate function to deploy the kubectl handler lambda, then getting that lambda by its ARN and passing it to the Cluster configuration.

Can you elaborate on exactly what you had in mind? Maybe sketch out the theoretical API in a README?

aka-toxa (Contributor, Author) commented:

Yes, I'm preparing the draft changes for that; it is a bit tough.
I will show it soon, I'm almost done.

@ericzbeard ericzbeard added the feature/enhancement A new API to make things easier or more intuitive. A catch-all for general feature requests. label Apr 2, 2021
aka-toxa (Contributor, Author) commented:

Hi @iliapolo, I've created another PR with a different approach to resolve this:

#14689

I've added the ability to provide an existing kubectl provider lambda to the imported cluster, so we can tie the kubectl admin role to a single lambda and make it more secure.

Can you please check this approach? If it is OK, I will clean it up and add tests and documentation.

@iliapolo iliapolo removed their assignment Jun 27, 2021
@mergify mergify bot closed this as completed in #14689 Dec 19, 2021
mergify bot pushed a commit that referenced this issue Dec 19, 2021
This resolves [issue #12107](#12107).
It brings the ability to provide an existing kubectl provider to the imported cluster, so we can create the k8s kubectl role, tie its trusted entity to a single lambda, and pass that single lambda to all CDK stacks that work with imported clusters.

@iliapolo can you please take a look and see if this approach is fine? If it is, I will add documentation and tests.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*

TikiTDO pushed a commit to TikiTDO/aws-cdk that referenced this issue Feb 21, 2022