Feature: Add custom IAM policies to worker node pools #340
Here we go; on a more serious note, with the node pool mega PR merged, we now have facilities to embed nested CF stacks. If we have …
@c-knowles, out of curiosity, did you evaluate https://github.com/kubernetes/kops/tree/master/dns-controller before choosing https://github.com/fairfaxmedia/area53?
@redbaron I agree with you on the cluster.yaml aspect. I'd prefer a way to link custom pieces into the stack, rather than kube-aws directly managing them or users editing the kube-aws driven stack templates each time. Editing that way is usually complex and error prone since the templates change frequently, are hard to read/parse, and it's difficult to keep track of every incoming change kube-aws makes. For nested CF stacks, I'm not sure exactly how that could work, as only a single role can be attached to each instance. However, multiple policies can be included in a single role, so if there's a way to link in there, that'd work. Regarding area53, the short answer is no - kops was less of a thing when I first evaluated, and I'd not seen that DNS controller part before. The longer answer is that I continue to evaluate kops in terms of tooling. I took a quick look at the DNS controller now, but it seems a bit different to area53. I'm using area53 to automatically add Route 53 entries for some publicly available Services/Ingresses running in a k8s cluster. I also have GitHub PRs deploying automatically, and some of those need a host/DNS entry each.
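To make the "multiple policies in a single role" point concrete: in CloudFormation, an `AWS::IAM::Role` accepts a list under its `Policies` property, so extra permissions can be layered onto one role without needing a second role per instance. A minimal sketch, with illustrative resource and policy names (not kube-aws's actual template):

```json
{
  "Resources": {
    "WorkerRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole"
          }]
        },
        "Policies": [
          {
            "PolicyName": "kube-aws-baseline",
            "PolicyDocument": {
              "Version": "2012-10-17",
              "Statement": [{"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"}]
            }
          },
          {
            "PolicyName": "user-route53-additions",
            "PolicyDocument": {
              "Version": "2012-10-17",
              "Statement": [{"Effect": "Allow", "Action": "route53:ChangeResourceRecordSets", "Resource": "*"}]
            }
          }
        ]
      }
    }
  }
}
```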
cc @redbaron Would it work for you if you could define something like the following in an imaginary user stack?

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "kube-aws user stack for the control plane stack of {{.ClusterName}}",
  "Parameters": {
    "ControlPlaneStackName": {
      "Type": "String",
      "Description": "The name of a control-plane stack used to import values into this stack"
    }
  },
  "Resources": {
    "MyPolicy": {
      "Type": "AWS::IAM::Policy",
      "Properties": {
        "PolicyName": "myPolicy",
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
          }]
        },
        "Roles": [
          {"Fn::ImportValue": {"Fn::Sub": "${ControlPlaneStackName}-IAMRoleWorker"}}
        ]
      }
    }
  }
}
```
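For the `Fn::ImportValue` above to resolve, the control plane stack would need to export the worker role under a matching name. A hedged sketch of what that could look like in the control plane stack's `Outputs` section (the export name here is an assumption derived from the import above, not necessarily what kube-aws actually emits):

```json
{
  "Outputs": {
    "IAMRoleWorker": {
      "Description": "Worker IAM role name, exported so user stacks can attach extra policies",
      "Value": {"Ref": "IAMRoleWorker"},
      "Export": {"Name": {"Fn::Sub": "${AWS::StackName}-IAMRoleWorker"}}
    }
  }
}
```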
Ah, how to do this consistently among the control plane stack and node pool stacks is another difficulty 😉
@mumoshu I assume the intent there is simply to link another policy to the same worker role? I believe that would work well; that sort of hook is exactly what I'd hoped for. It would allow quite a lot of customisation - we could even remove baked-in support for some of the other settings such as pulling from ECR, kube2iam... actually, anything which is not directly needed by kube-aws could shift to guidance in the docs only.
Yes. I'm wondering if #53 could also support your use case?

```yaml
worker:
  nodePools:
  - name: myManagedPool
    iamRole:
      arn: <your existing IAM role>
    # Other settings follow...
```
Updated my last comment a bit.
@mumoshu I'm not sure. I think you mean this, as I don't see a way to link roles:
If I assumed correctly then yes, this would support the use case. The kube-aws permissions don't change very often, so it's easy to keep tracking them. From a user perspective, a slightly simpler integration would be to allow additional policies to be attached to the kube-aws managed role.
@c-knowles Thanks for the confirmation! Yes, your assumption is correct. Btw, adding support for `managedPolicyArns` could cover that. If we go ahead with it, the worker configuration would look like:

```yaml
worker:
  nodePools:
  - name: myManagedPool
    iamRole:
      # arn and managedPolicyArns are mutually exclusive.
      #
      # If you'd like to maintain the whole IAM role assigned to worker nodes, specify `arn` here:
      arn: <your existing IAM role>
      # If you'd like to maintain only the additional policies attached to the IAM role, specify `managedPolicyArns` here:
      managedPolicyArns:
      - <your existing managed policy>
```
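For context on what `managedPolicyArns` would map to underneath: CloudFormation's `AWS::IAM::Role` has a `ManagedPolicyArns` property that attaches existing customer- or AWS-managed policies to the role, so kube-aws could simply append user-supplied ARNs there. A sketch under that assumption (the parameter name is illustrative, not from kube-aws's actual template):

```json
{
  "Parameters": {
    "UserManagedPolicyArn": {
      "Type": "String",
      "Description": "ARN of an existing managed policy to attach to the worker role"
    }
  },
  "Resources": {
    "IAMRoleWorker": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole"
          }]
        },
        "ManagedPolicyArns": [{"Ref": "UserManagedPolicyArn"}]
      }
    }
  }
}
```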
@mumoshu Yeah, that's definitely AWS's problem, not yours! For some reason they named the attribute the same as the name they give to the policies only they can manage. I had the same confusion a while ago, found out I could do this, and subsequently forgot when I filed this ticket... That solution would work well I think, and it allows users to add policies without us adding direct support for each type of add-on/plugin etc. to kube-aws.
Thanks for the confirmation!
@c-knowles FYI this feature is implemented as …
Closing this as resolved, but please feel free to reopen if necessary.
I'm using some Route 53 tooling which requires additions to the worker IAM role. I'd like to be able to specify custom policies without altering the stack templates so much.
We've added some similar items, like in #297, for common use cases, but each one is tool-specific and I'm not sure we should actively support them all. Since it's not possible to attach multiple IAM roles to a single instance, there's no way to "glue" them together like we do with SGs. As a consequence, the only neat way to do this without customising the stack templates every time is for kube-aws to provide some sort of policy pass-through. Could we investigate a generic way for users to add custom parts to the IAM roles, specifically the workers'?
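To make the request concrete: DNS tooling like area53 typically needs permission to list and change record sets. A sketch of the kind of policy document a user might want to pass through to the worker role (the hosted zone ID is a placeholder, and the exact actions depend on the tool):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/<your hosted zone ID>"
    },
    {
      "Effect": "Allow",
      "Action": "route53:ListHostedZones",
      "Resource": "*"
    }
  ]
}
```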