move kube-dns to a separate service account #38816
Conversation
// a role to use for the kube-dns pod
ObjectMeta: api.ObjectMeta{Name: "system:kube-dns"},
Rules: []rbac.PolicyRule{
	rbac.NewRule("list", "watch").Groups(legacyGroup).Resources("endpoints", "services").RuleOrDie(),
Would kube-dns have the permission to fetch ConfigMap in general with this setup? Fetching ConfigMap might not be a right permission for kube-dns but we introduced it (#36775) as a short term fix for the federation issue.
Like I've set it up here, no. I saw that access and assumed it was a bug. This role is a good starting point and we can have a separate pull to add questionable permissions.
If we get the optional config maps in 1.6, we won't need that permission, right?
The configmap permission could be done as a secondary, optional role attached to your service account. A cluster-admin could add it if they needed the feature. We've done that with a few permissions in OpenShift: there's a base role that works 90% of the time, and if you need more power, you grant more power with a second role. Opt-in.
@deads2k Just for my edification, can you explain what RBAC with subdivided permissions means? (And how it relates to this PR.) Or if there is something written up, can you point me to it?
RBAC is an authorizer described here: http://kubernetes.io/docs/admin/authorization/ . It allows you to control permissions based on roles (examples in the doc). By more tightly controlling roles for controllers and powerful components, we can prevent bugs or exploits from causing excessive damage. One of our requirements for beta here: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/edit?ts=570fa5c5#heading=h.9kaxh998i32c is to tighten permissions on our components. I found this one by inspecting which subject was making requests without RBAC permissions. In order to create a role and bind it to a proper subject, the subject running kube-dns needs to be separated from other components. This attempts to start that separation.
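For illustration, binding a role like `system:kube-dns` to a dedicated service account might look like the sketch below. This is not the exact manifest from the PR; the binding name is illustrative, and the API group/version shown is the current stable one (RBAC was still alpha at the time of this discussion).

```yaml
# Hypothetical sketch: bind the system:kube-dns ClusterRole to a dedicated
# service account so the kube-dns pod gets only the permissions in that role.
apiVersion: rbac.authorization.k8s.io/v1   # stable API group today; alpha at PR time
kind: ClusterRoleBinding
metadata:
  name: system:kube-dns          # binding name is illustrative
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-dns          # the role defined in bootstrappolicy/policy.go
subjects:
- kind: ServiceAccount
  name: kube-dns                 # the separate service account this PR introduces
  namespace: kube-system
```

With this in place, any bug or compromise in the DNS pod is limited to listing/watching endpoints and services rather than whatever the shared default service account could do.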
So this can be used to reduce the power that stuff running in the context of the DNS pod has, in terms of operations on the Kubernetes API objects?
Correct.
I don't know how to do that, but I now know who to ask. For the time being, DNS needs to be able to access configmaps.
change looks ok to me modulo the configmap issue
Rather than expanding the role to include permissions that don't make sense, I'd rather see the work done to support optional configmaps and/or addon tolerating existing objects. That is landing in 1.6, right?
The permissions are for 1.6. Knowing now what this is for, I agree that adding the permission is the wrong approach.
In the interim, DNS doesn't work on HEAD?
I think so. We'd create a P1 bug and perhaps provide a manual workaround (adding a new role and binding), but expanding SA rights against the recommended way to use the platform isn't something I want to propagate in our bootstrap authorization rules.
This seems to be the wrong order of commits. I would prefer not to break HEAD due to a forward dependency on two features (optional configmaps + kube-dns support). Can this PR not wait until those features have been committed? I am concerned about confusion involving kube-dns in the interim period. Anyone debugging issues would have to know about the breakage + fix.
+1. This PR can wait until the rest are done, then. We don't (knowingly) break head like this.
I'd rather hold this PR than set up an overly broad role for kube-dns.
Actually, isn't kube-dns broken in HEAD today with authorization enabled, if it is assuming it has access to all configmaps? Seems like the instructions to make kube-dns use a configmap could include granting authorization to read that configmap.
It breaks as soon as you stop granting all permissions to service accounts, which is a permission that should definitely be removed. API security that equates the power to create a pod with root power against the API isn't very effective.
Any chance we could give kube-dns the configmap permission for now, with a P1 issue that explains why, and do it the right way when we can? (I know, I know, long-term it's the wrong thing to do, but anyway?) I think it was suggested already, but that's what I think we should do in order to get this merged.
you can give it the configmap permission with a second namespaced role grant already:
I wouldn't want to add a global configmap permission to this role |
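A namespaced grant like the one mentioned above might look like the following sketch. The role and binding names are illustrative (the original snippet was not preserved in this thread), and the API group/version shown is the current stable one; the subject is the kube-dns service account this PR introduces.

```yaml
# Hypothetical sketch: a namespaced Role granting read access to configmaps in
# kube-system only, bound to the kube-dns service account. This keeps the
# cluster-wide system:kube-dns role narrow and makes the extra power opt-in.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-dns-configmaps      # name is illustrative
  namespace: kube-system
rules:
- apiGroups: [""]                # "" is the core (legacy) API group
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-dns-configmaps      # name is illustrative
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-dns-configmaps
subjects:
- kind: ServiceAccount
  name: kube-dns
  namespace: kube-system
```

Because the Role is namespaced to kube-system, this grants access only to configmaps there, not globally.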
@liggitt Yes, so why don't we do that for now along with an issue to completely remove that dep?
Force-pushed from bfbdcf2 to 3a5b8fc.
These scripts are a nightmare. Somehow this pull is breaking either the authn or authz setup.
cluster/gce/gci/configure-helper.sh (outdated)
@@ -1195,8 +1195,10 @@ function start-kube-addons {
  setup-addon-manifests "addons" "dns"
  local -r dns_controller_file="${dst_dir}/dns/kubedns-controller.yaml"
  local -r dns_svc_file="${dst_dir}/dns/kubedns-svc.yaml"
  local -r dns_sa_file="${dst_dir}/dns/kubedns-sa.yaml"
Alright, I'm betting I borked the add-on manager somehow. @cjcullen @ncdc @mikedanese anything obvious?
I think you don't need the line local -r dns_sa_file="${dst_dir}/dns/kubedns-sa.yaml", because setup-addon-manifests "addons" "dns" already copies all the files.
And it seems like "kube-addon-manager.yaml" is not copied onto the master, which means start_kube_addons() hit an error partway through.
cluster/gce/gci/configure-helper.sh (outdated)
mv "${dst_dir}/dns/kubedns-controller.yaml.in" "${dns_controller_file}"
mv "${dst_dir}/dns/kubedns-svc.yaml.in" "${dns_svc_file}"
mv "${dst_dir}/dns/kubedns-sa.yaml" "${dns_sa_file}"
I think you are moving "${dst_dir}/dns/kubedns-sa.yaml" into "${dst_dir}/dns/kubedns-sa.yaml", which is the same file, so it broke.
Thanks!
@k8s-bot kops aws e2e test this
@deads2k I've seen some of the other addons include the service account in the same yaml file as a separate record... would remove the need for any of the script changes, and make it so anyone else who was using the add-on would get the change automatically.
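"A separate record in the same yaml file" means a multi-document manifest: the ServiceAccount and the controller live in one file, separated by `---`, and are applied together. A minimal sketch under that assumption (the controller spec is abbreviated and the image is illustrative):

```yaml
# Hypothetical sketch: ship the ServiceAccount alongside the controller that
# uses it, as two YAML documents in one addon file separated by "---".
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
---
apiVersion: v1
kind: ReplicationController     # kube-dns ran as an RC at the time of this PR
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      serviceAccountName: kube-dns   # run the pod as the dedicated SA
      containers:
      - name: kubedns
        image: example.invalid/kubedns   # image name is illustrative
```

This avoids touching the GCE startup scripts at all, since the addon manager picks up both objects from the one file.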
/approve
@deads2k please squash though
Force-pushed from 4de64df to 36b586d.
Squashed. Didn't add the SA to the RC file. Is that a thing we want to do? Seems a little weird.
I don't feel strongly either way.
tagging per #38816 (review)
Automatic merge from submit-queue |
Automatic merge from submit-queue

Moves dns-horizontal-autoscaler to a separate service account

Similar to #38816. As one of the cluster add-ons, dns-horizontal-autoscaler is now using the default service account in the kube-system namespace, which is introduced by https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/e2e-rbac-bindings/random-addon-grabbag.yaml for the ease of transition. This default service account will be removed in the future. This PR subdivides dns-horizontal-autoscaler to a separate service account and sets up the necessary permissions.

@bowei

**Release note**:

```release-note
NONE
```
Switches the kubedns addon to run as a separate service account so that we can subdivide RBAC permission for it. The RBAC permissions will need a little more refinement which I'm expecting to find in #38626 .
@cjcullen @kubernetes/sig-auth since this is directly related to enabling RBAC with subdivided permissions
@thockin @kubernetes/sig-network since this directly affects how kubedns is added.