
Feature: Add custom IAM policies to worker node pools #340

Closed
cknowles opened this issue Feb 22, 2017 · 14 comments

Comments

@cknowles
Contributor

I'm using some Route 53 tooling which requires additions to the worker IAM role. I'd like to be able to specify custom policies without altering the stack templates so much.

We've added some similar items like in #297 for common use cases, but each one feels tool specific and I'm not sure we should actively support them all. Since it's not possible to attach multiple IAM roles to a single instance, there's no way to "glue" them together like we do with SGs. As a consequence, the only neat way to do this without customising the stack templates every time is for kube-aws to provide some sort of policy pass-through. Could we investigate a generic way for users to add custom parts to the IAM roles, specifically for workers?

@cknowles cknowles changed the title Feature: Add additional policies to worker policy document Feature: Add custom policies to worker policy document Feb 22, 2017
@redbaron
Contributor

here we go, our cluster.yaml continues its journey to become cloudformation.yaml :)

On a serious note, with the node pool mega PR merged, we now have facilities to embed nested CF stacks. If we have a user-stack.json which is managed by the user and never touched by kube-aws apart from including it as a nested stack, will that work for you?
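
For illustration, a minimal sketch (in CloudFormation YAML) of what including such a user-managed template as a nested stack could look like; the logical ID, bucket URL and parameter value are hypothetical, not what kube-aws actually emits:

Resources:
  UserStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      # Hypothetical S3 location for the user-managed template
      TemplateURL: "https://s3.amazonaws.com/my-bucket/user-stack.json"
      Parameters:
        # The control-plane stack name, so the user stack can Fn::ImportValue its exports
        ControlPlaneStackName: "mycluster-Controlplane"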

@redbaron
Contributor

@c-knowles , out of curiosity, did you evaluate https://github.com/kubernetes/kops/tree/master/dns-controller before choosing https://github.com/fairfaxmedia/area53 ?

@cknowles
Contributor Author

cknowles commented Feb 22, 2017

@redbaron I agree with you on the cluster.yaml aspect. I'd prefer a way to link custom pieces into the stack, rather than kube-aws directly managing them or needing to edit the kube-aws driven stack templates each time. Editing that way is usually complex and error prone since the templates change frequently, are hard to read/parse, and it's difficult to keep track of every incoming change kube-aws provides.

For nested CF stacks, I'm not sure exactly how that could work as only a single role can be attached per instance. However, multiple policies can be included in a single role, so if there's a way to link in there that'd work.

Regarding area53, the short answer is no - kops was less established when I first evaluated it, and I hadn't seen that DNS controller part before. The longer answer is that I continue to evaluate kops in terms of tooling. I took a quick look at the DNS controller now but it seems a bit different to area53. I'm using area53 to automatically add Route 53 entries for some publicly available Services/Ingresses running in a k8s cluster. We also have GitHub PRs deploying automatically, and some of those need a host/DNS entry each.

@mumoshu
Contributor

mumoshu commented Feb 22, 2017

@c-knowles

cc @redbaron

Would it work for you if you could define in the imaginary user-stack.json something like:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "kube-aws user stack for the control plane stack of {{.ClusterName}}",
  "Parameters": {
    "ControlPlaneStackName": {
      "Type": "String",
      "Description": "The name of a control-plane stack used to import values into this stack"
    }
  },
  "Resources": {
    "MyPolicy": {
      "Type": "AWS::IAM::Policy",
      "Properties": {
        "PolicyName": "myPolicy",
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [ {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
          } ]
        },
        "Roles": [
          { "Fn::ImportValue": { "Fn::Sub": "${ControlPlaneStackName}-IAMRoleWorker" } }
        ]
      }
    }
  }
}
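
Note that the Fn::ImportValue above only resolves if the control-plane stack actually exports the worker role under that name. A minimal sketch, in CloudFormation YAML, of the kind of output it would need; whether kube-aws already exports the role this way, and under the logical ID IAMRoleWorker, is an assumption here:

Outputs:
  IAMRoleWorker:
    Description: "Worker IAM role name, exported so user-managed stacks can attach extra policies"
    # Assumes the control-plane template defines an AWS::IAM::Role with logical ID IAMRoleWorker
    Value: !Ref IAMRoleWorker
    Export:
      Name: !Sub "${AWS::StackName}-IAMRoleWorker"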

@mumoshu
Contributor

mumoshu commented Feb 22, 2017

Ah, how to do this consistently among the control plane stack and node pool stacks is another difficulty 😉

@cknowles
Contributor Author

@mumoshu I assume the intent there is simply to link another policy to the same worker role? I believe that would work well. That sort of hook is exactly what I'd hoped for. It would allow quite a lot of customisation; we could even remove baked-in support for some of the other settings such as pulling from ECR and kube2iam. In fact, anything not directly needed by kube-aws could shift to guidance in the docs only.

@mumoshu
Contributor

mumoshu commented Mar 2, 2017

@c-knowles

I assume the intent there is simply to link another policy to the same worker role?

Yes.

I'm wondering if #53 could also support your use-case?
The idea is that you can pass an existing IAM role configured as you like by specifying worker.nodePools[].iamRole.Arn:

worker:
  nodePools:
  - name: myManagedPool
    iamRole:
      arn: <your existing IAM role>
    # Other settings follow...

@mumoshu
Contributor

mumoshu commented Mar 2, 2017

Updated my last comment a bit

@cknowles
Contributor Author

cknowles commented Mar 6, 2017

@mumoshu I'm not sure. I think you mean this as I don't see a way to link roles:

  1. Create separate self-managed role whichever way I choose
  2. Replicate all the existing permissions that kube-aws adds now into that role
  3. Set the role as iamRole as above

If I've assumed correctly then yes, this would support the use case. The kube-aws permissions don't change very often, so it would be easy enough to keep track of them.
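
For concreteness, steps 1 and 2 would amount to something like this (a sketch in CloudFormation YAML; the logical ID is made up and the single statement is only a placeholder for whatever kube-aws currently grants):

SelfManagedWorkerRole:
  Type: "AWS::IAM::Role"
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
      - Effect: Allow
        Principal:
          Service: "ec2.amazonaws.com"
        Action: "sts:AssumeRole"
    Policies:
    - PolicyName: "replicated-kube-aws-worker-permissions"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
        # Placeholder: copy in the statements kube-aws currently generates for workers,
        # plus any custom additions such as the Route 53 permissions needed here
        - Effect: Allow
          Action: "ec2:Describe*"
          Resource: "*"

The role's ARN would then be what goes into iamRole.arn for step 3.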

From a user perspective, a slightly simpler integration would be to allow ManagedPolicyArns for workers to be specified. That way I can roll out those policies as needed and link them into workers. This is also how I think we should allow kube2iam, ECR pull and anything else not needed to run a k8s cluster by default, while keeping permissions minimal.

@mumoshu
Contributor

mumoshu commented Mar 7, 2017

@c-knowles Thanks for the confirmation! Yes, your assumption is correct.

Btw, adding support for ManagedPolicyArns sounds good. I had somehow been misunderstanding it to only allow us to specify default policies managed by AWS. However, if it works for user-managed policies, it sounds like the best way to support this use-case.

If we go ahead with it, cluster.yaml would look like:

worker:
  nodePools:
  - name: myManagedPool
    iamRole:
      # arn and managedPolicyArns are mutually exclusive
      #
      # If you'd like to maintain the whole iam role assigned to worker nodes, specify `arn` here
      arn: <your existing IAM role>
      # If you'd like to maintain only the additional policies attached to the iam role, specify `managedPolicyArns` here
      managedPolicyArns:
      - <your existing managed policy>
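
The referenced managed policy itself would be user-maintained, e.g. for the Route 53 use case that started this issue (a sketch in CloudFormation YAML; the actions listed are illustrative rather than exactly what area53 needs):

Area53WorkerPolicy:
  Type: "AWS::IAM::ManagedPolicy"
  Properties:
    Description: "Extra permissions for workers running Route 53 tooling"
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
      - Effect: Allow
        Action:
        - "route53:ChangeResourceRecordSets"
        - "route53:ListHostedZones"
        - "route53:ListResourceRecordSets"
        Resource: "*"

Its ARN is what would then be listed under managedPolicyArns.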

@cknowles
Contributor Author

cknowles commented Mar 7, 2017

@mumoshu Yeah, that's definitely AWS's problem, not yours! For some reason they named the attribute the same as the name they give to the policies only they can manage. I had the same confusion a while ago, found out I could do this, and subsequently forgot when I filed this ticket...

I think that solution would work well, and it allows users to add policies without us adding direct support for each type of add-on/plugin etc. to kube-aws.

@cknowles cknowles changed the title Feature: Add custom policies to worker policy document Feature: Add custom IAM policies to worker node pools Mar 7, 2017
@mumoshu mumoshu added this to the v0.9.6 milestone Mar 24, 2017
@mumoshu
Contributor

mumoshu commented Mar 24, 2017

Thanks for the confirmation!
Let's sort this out in the next kube-aws release!

@mumoshu
Contributor

mumoshu commented May 9, 2017

@c-knowles FYI this feature is implemented as (worker.nodePools[]|controller).iam.role.managedPolicies[].arn via #607 (thanks to @Fsero) and is available today in the master branch
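
Based on that key path, cluster.yaml should look roughly like the following (the pool name and policy ARN are placeholders):

controller:
  iam:
    role:
      managedPolicies:
      - arn: "arn:aws:iam::123456789012:policy/my-custom-policy"
worker:
  nodePools:
  - name: myManagedPool
    iam:
      role:
        managedPolicies:
        - arn: "arn:aws:iam::123456789012:policy/my-custom-policy"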

@mumoshu
Contributor

mumoshu commented May 12, 2017

Closing this as resolved but please feel free to reopen if necessary

@mumoshu mumoshu closed this as completed May 12, 2017