
Document how RBAC interacts with kube-system components #29177

Closed
jimmycuadra opened this issue Jul 19, 2016 · 32 comments
Labels
kind/documentation Categorizes issue or PR as related to documentation. sig/auth Categorizes an issue or PR as relevant to SIG Auth. sig/network Categorizes an issue or PR as relevant to SIG Network.

Comments

@jimmycuadra
Contributor

In getting my cluster set up with RBAC, I discovered that Kubernetes system components need to be explicitly allowed to access the API just like any other client. I had to add a ClusterRole and ClusterRoleBinding for the kubelet (i.e., the common name in the k8s nodes' client certificate). Without them, the nodes could not register themselves and begin handling work.
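For reference, a minimal sketch of the kind of pair I mean; the role name and the exact rule list here are illustrative assumptions, not the precise policy a kubelet needs:

```yaml
# Illustrative only: the role name and rules are assumptions,
# not the definitive kubelet policy.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: kubelet-access
rules:
  - apiGroups: [""]
    resources: ["nodes", "pods", "services", "endpoints", "events"]
    verbs: ["get", "list", "watch", "create", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: kubelet-access
subjects:
  - kind: User
    name: kubelet  # the common name from the nodes' client certificate
roleRef:
  kind: ClusterRole
  name: kubelet-access
  apiVersion: rbac.authorization.k8s.io/v1alpha1
```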

It's also not clear how RBAC affects the default service accounts. I'm trying to deploy kube-dns with this manifest:

---
apiVersion: "v1"
kind: "Namespace"
metadata:
  name: "kube-system"

---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "kube-dns"
  namespace: "kube-system"
  labels:
    k8s-app: "kube-dns"
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: "kube-dns"
  clusterIP: "10.3.0.10"
  ports:
    - name: "dns"
      port: 53
      protocol: "UDP"
    - name: "dns-tcp"
      port: 53
      protocol: "TCP"

---
apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
  name: "kube-dns"
  namespace: "kube-system"
  labels:
    k8s-app: "kube-dns"
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: "kube-dns"
  template:
    metadata:
      labels:
        k8s-app: "kube-dns"
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - name: "kubedns"
          image: "gcr.io/google_containers/kubedns-amd64:1.5"
          resources:
            limits:
              cpu: "100m"
              memory: "200Mi"
            requests:
              cpu: "100m"
              memory: "100Mi"
          livenessProbe:
            httpGet:
              path: "/healthz"
              port: 8080
              scheme: "HTTP"
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: "/readiness"
              port: 8081
              scheme: "HTTP"
            initialDelaySeconds: 30
            timeoutSeconds: 5
          args:
            - "--domain=cluster.local."
            - "--dns-port=10053"
          ports:
            - containerPort: 10053
              name: "dns-local"
              protocol: "UDP"
            - containerPort: 10053
              name: "dns-tcp-local"
              protocol: "TCP"
        - name: "dnsmasq"
          image: "gcr.io/google_containers/kube-dnsmasq-amd64:1.3"
          args:
            - "--cache-size=1000"
            - "--no-resolv"
            - "--server=127.0.0.1#10053"
          ports:
            - containerPort: 53
              name: "dns"
              protocol: "UDP"
            - containerPort: 53
              name: "dns-tcp"
              protocol: "TCP"
        - name: "healthz"
          image: "gcr.io/google_containers/exechealthz-amd64:1.0"
          resources:
            limits:
              cpu: "10m"
              memory: "20Mi"
            requests:
              cpu: "10m"
              memory: "20Mi"
          args:
            - "-cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null"
            - "-port=8080"
            - "-quiet"
          ports:
            - containerPort: 8080
              protocol: "TCP"
      dnsPolicy: "Default"

and the pod fails to start, showing this in the kube-dns container's logs:

I0719 08:34:53.090618       1 server.go:91] Using https://10.3.0.1:443 for kubernetes master
I0719 08:34:53.103017       1 server.go:92] Using kubernetes API <nil>
I0719 08:34:53.104573       1 server.go:132] Starting SkyDNS server. Listening on port:10053
I0719 08:34:53.104682       1 server.go:139] skydns: metrics enabled on :/metrics
I0719 08:34:53.104708       1 dns.go:166] Waiting for service: default/kubernetes
I0719 08:34:53.105794       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0719 08:34:53.105830       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
E0719 08:34:53.441778       1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: the server has asked for the client to provide credentials (get services)
E0719 08:34:53.442089       1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server has asked for the client to provide credentials (get endpoints)
I0719 08:34:53.443210       1 dns.go:172] Ignoring error while waiting for service default/kubernetes: the server has asked for the client to provide credentials (get services kubernetes). Sleeping 1s before retrying.
I0719 08:34:53.537722       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0719 08:34:53.537854       1 dns.go:539] records:[], retval:[], path:[local cluster svc default kubernetes]
I0719 08:34:53.540060       1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0719 08:34:53.540077       1 dns.go:539] records:[], retval:[], path:[local cluster svc default kubernetes]
E0719 08:34:54.443840       1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: the server has asked for the client to provide credentials (get services)
I0719 08:34:54.536969       1 dns.go:172] Ignoring error while waiting for service default/kubernetes: the server has asked for the client to provide credentials (get services kubernetes). Sleeping 1s before retrying.
E0719 08:34:54.537063       1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server has asked for the client to provide credentials (get endpoints)

with that error recurring infinitely.

I added this subject to my full access ClusterRoleBinding:

kind: "ServiceAccount"
name: "default"
namespace: "*"

but that didn't fix it.

Documentation for how to bootstrap RBAC so that all of Kubernetes's own components work should be added. If someone can add some details here, I can make a PR to the docs website myself.

@apelisse apelisse added sig/network Categorizes an issue or PR as relevant to SIG Network. kind/documentation Categorizes issue or PR as related to documentation. team/cluster labels Jul 19, 2016
@jimmycuadra
Contributor Author

jimmycuadra commented Jul 20, 2016

Some progress on this:

I discovered that the default service account tokens were invalid because the private key for the master components had changed. I had to manually remove the default service accounts and let the system recreate them based on the new private key. This should really be handled better. (Ref #4672, #24928)

Now that the service account's token is itself valid, the errors in kube-dns's container changed to this, which looks like authentication is now succeeding but authorization is failing (progress!):

dns.go:172] Ignoring error while waiting for service default/kubernetes: the server does not allow access to the requested resource (get services kubernetes). Sleeping 1s before retrying.
reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: the server does not allow access to the requested resource (get services)
reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server does not allow access to the requested resource (get endpoints)

I've tried another variation on the subject in the ClusterRoleBinding:

 - kind: "ServiceAccount"
    name: "default"
    namespace: "default"

And with explicit entries for each namespace currently in use (kube-dns is being deployed to the kube-system namespace):

  - kind: "ServiceAccount"
    name: "default"
    namespace: "default"
  - kind: "ServiceAccount"
    name: "default"
    namespace: "kube-system"

But the errors are the same with each variation.

@jimmycuadra
Contributor Author

cc @ericchiang

@jimmycuadra
Contributor Author

@apelisse This issue probably needs an area/auth label.

@apelisse apelisse added the sig/auth Categorizes an issue or PR as relevant to SIG Auth. label Jul 20, 2016
@ericchiang
Contributor

cc @kubernetes/sig-auth

Hey @jimmycuadra thanks for opening this issue.

Couple of initial points:

  • A service account can be referenced as a subject like this:

  - kind: "User"
    namespace: "kube-system"
    name: "system:serviceaccount:kube-system:default"

  • These issues are general to authorization, not just RBAC. You'd see the same issues if you had a similarly restrictive ABAC file.
  • There could probably be general improvement of the auth documents. "Docs with tutorials" is a requirement for RBAC beta and listed in Role-based access control (RBAC) enhancements#2, so we'll probably be taking a solid attempt at it over this and the next release cycle.

@ericchiang
Contributor

Also note that you can generally find people in the #sig-auth or #kubernetes-users channels in the Kubernetes Slack who might be able to provide more real-time feedback.

@jimmycuadra
Contributor Author

Thanks, Eric! That is very helpful. I was asking on #sig-auth, but I've been working at night the last couple days so there isn't much activity in there when I'm working. Thanks again for the reply.

@nleib

nleib commented Aug 18, 2016

I have looked into the RBAC docs, specifically ClusterRoleBindings. From what I understand from the docs, this binding applies to the whole cluster and not per namespace. For that reason the namespace is omitted from the subjects.

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: read-secrets
subjects:
  - kind: Group # May be "User", "Group" or "ServiceAccount"
    name: manager
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiVersion: rbac.authorization.k8s.io/v1alpha1

When trying to create such a resource (ClusterRoleBinding), it fails with this error:

The ClusterRoleBinding "read-secrets" is invalid.
subjects[0].namespace: Required value

I tried this with Kubernetes version 1.3.4 (server and client).
Did I miss anything in the docs? Or is this something worth opening a new issue for?

thanks

@nleib

nleib commented Aug 18, 2016

I have double-checked this, and it only happens when using kind: ServiceAccount. User seems to work without indicating a namespace.

@deads2k
Contributor

deads2k commented Aug 18, 2016

I have double-checked this, and it only happens when using kind: ServiceAccount. User seems to work without indicating a namespace.

ServiceAccounts are namespace scoped subjects, so when you refer to them, you have to specify the namespace of the service account you want to bind.

@dhawal55
Contributor

I can't get it to work with any kind: User, Group, or ServiceAccount. Did anyone get it to work? What am I doing wrong?

apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: RoleBinding
metadata:
  name: admin-access
  namespace: kube-system
roleRef:
  apiVersion: rbac.authorization.k8s.io/v1alpha1
  kind: ClusterRole
  name: admin-access
subjects:
- kind: User
  name: system:serviceaccount:kube-system:default
  namespace: kube-system
- kind: Group
  name: system:serviceaccount
  namespace: kube-system
- kind: ServiceAccount
  name: default
  namespace: kube-system

@liggitt
Member

liggitt commented Aug 29, 2016

@dhawal55 if you want to refer to a service account, use a subject like this:

- kind: ServiceAccount
  name: default
  namespace: kube-system

if you want to refer to all service accounts in a namespace:

- kind: Group
  name: system:serviceaccounts:kube-system

and if you want to refer to all service accounts everywhere:

- kind: Group
  name: system:serviceaccounts
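Putting one of those subjects into a complete binding looks like this; the binding name here is arbitrary, and the referenced `view` ClusterRole is an assumption (substitute whatever role you actually want to grant):

```yaml
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRoleBinding
metadata:
  name: serviceaccounts-view  # hypothetical name
subjects:
  - kind: Group
    name: system:serviceaccounts:kube-system
roleRef:
  apiVersion: rbac.authorization.k8s.io/v1alpha1
  kind: ClusterRole
  name: view  # assumes a "view" ClusterRole exists in your cluster
```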

@dhawal55
Contributor

I tried it but am still getting a Forbidden error when trying to access the API from pods. I deleted the pods, service account, and secret every time I made changes to the role bindings.


@erictune
Member

Jordan's comment was so helpful I added it to documentation:

kubernetes/website#1125

@erictune
Member

@dhawal55 You should not need to delete pods, the service account, or the secret when you make RBAC changes. The secret contains the token, whose claims contain the username and the group name, but those should not change over time. If there were a problem with the token, you would get a 401, not a 403.

You can verify your token has the right username using a command like this:

$ kubectl get secrets default-token-XXXX -o json | jq .data.token | tr -d '"' | base64 -D  | cut -f 2 -d "."  | base64 -D

You should see "sub": "system:serviceaccount:default:default" among other things.

@liggitt
Member

liggitt commented Aug 30, 2016

@dhawal55 can you provide details about the API request that is failing? Your example is creating a role binding in the kube-system namespace, which will only grant access to that namespace. If you want to grant the permissions in the referenced role cluster-wide, create a ClusterRoleBinding

@dhawal55
Contributor

@erictune thanks for the tip.

@liggitt Yes, that's my issue. I created a role binding which is namespace-specific and was referring to a cluster role that gave access to nonResourceURLs, which are not namespace-specific. I'm splitting my roles so I have a separate ClusterRole and ClusterRoleBinding for nonResourceURLs. Thank you @deads2k for helping me understand this.

@erictune
Member

erictune commented Sep 1, 2016

Ugh. You should not have to have a separate role for nonResourceURLs.

@liggitt
Member

liggitt commented Sep 1, 2016

separate rule in the same role should work fine
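For illustration, a single ClusterRole carrying both a resource rule and a non-resource rule might look like the following; the name and the wildcard choices are just an example, not a recommendation:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-access  # example name
rules:
  # resource rule: namespaced API objects
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
  # non-resource rule: paths such as /healthz and /apis
  - nonResourceURLs: ["*"]
    verbs: ["get"]
```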

@liggitt
Member

liggitt commented Sep 1, 2016

if you want to give namespace-specific permissions AND cluster-wide non-resource permissions, then two roles and separate bindings make more sense. In OpenShift, we have roles like this:

  • a "discovery" clusterrole that allows access to non-resource discovery paths, and is bound to all users at the cluster level
  • an "admin" clusterrole that allows administration of namespaced resources like pods, rcs, etc, and can be bound at the namespace level as desired

@erictune
Member

erictune commented Sep 1, 2016

separate rule in the same role should work fine

Yeah, that was what I was thinking should work.

@erictune
Member

erictune commented Sep 1, 2016

is bound to all users at the cluster level

I don't know how we would do that in Kubernetes since it is aloof about users. I guess we would have a system:all-authenticated-users group that is auto-populated by apiserver?

@deads2k
Contributor

deads2k commented Sep 2, 2016

I don't know how we would do that in Kubernetes since it is aloof about users. I guess we would have a system:all-authenticated-users group that is auto-populated by apiserver?

We have two groups: system:authenticated and system:unauthenticated which are automatically added to the user.Info during the authentication process. I think it provides significant benefit to have these fixed groups when you try to describe permissions you'd like bound to these categories of users.

How about we discuss it during our ad-hoc today? There are intersections with some work from dims involved in shutting down the insecure port: #31491
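As a sketch of what binding to those fixed groups could look like, assuming a hypothetical "discovery" role (not a shipped default at this point):

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: discovery-for-all  # hypothetical name
subjects:
  - kind: Group
    name: system:authenticated
  - kind: Group
    name: system:unauthenticated
roleRef:
  kind: ClusterRole
  name: discovery  # hypothetical role allowing GET on discovery paths
  apiVersion: rbac.authorization.k8s.io/v1alpha1
```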

@erictune
Member

erictune commented Sep 6, 2016

Adding system:authenticated and system:unauthenticated seems fine.

@MaksymBilenko

MaksymBilenko commented Sep 9, 2016

Hello,
I'm having the same issue.
Here are my rules:

kubectl get clusterrole admin-access -o yaml

apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRole
metadata:
  creationTimestamp: 2016-09-09T12:42:00Z
  name: admin-access
  resourceVersion: "7308619"
  selfLink: /apis/rbac.authorization.k8s.io/v1alpha1/clusterroles/admin-access
  uid: cd64a977-768a-11e6-9e6c-b8ca3a9469a9
rules:
- apiGroups:
  - '*'
  attributeRestrictions: null
  nonResourceURLs:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'

kubectl --namespace=kube-system get rolebinding admin-access -o yaml

apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: RoleBinding
metadata:
  creationTimestamp: 2016-09-09T13:15:25Z
  name: admin-access
  namespace: kube-system
  resourceVersion: "7311493"
  selfLink: /apis/rbac.authorization.k8s.io/v1alpha1/namespaces/kube-system/rolebindings/admin-access
  uid: 7877adc8-768f-11e6-9e6c-b8ca3a9469a9
roleRef:
  apiVersion: rbac.authorization.k8s.io/v1alpha1
  kind: ClusterRole
  name: admin-access
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system

And I'm still getting a 403 when accessing the API:
curl -k https://xxx.xxx.xxx.xxx:6443 --header "Authorization: Bearer $TOKEN" Forbidden: "/"

Double-checked the token...
Same issue if I add User + Group + ServiceAccount subjects to the RoleBinding.

Any suggestions?

@MaksymBilenko

Hmmm,
It works if I access paths starting at this level: apis/rbac.authorization.k8s.io/v1alpha1/namespaces/kube-system
But everything else is not working. I thought that, given my rules, the admin-access ClusterRole should grant access to all endpoints. Please correct me if I'm wrong.

@deads2k
Contributor

deads2k commented Sep 9, 2016

@MaksymBilenko when you have a non-resource URL, it needs to be bound as a ClusterRole, not a Role. This is because non-resource URLs are logically unnamespaced.

@MaksymBilenko

Finally sorted this out; I was not attentive enough. Here is a working config for kube-system to make add-ons such as kube2sky work with RBAC:
I needed to use a ClusterRoleBinding instead of a RoleBinding.

ClusterRole:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-access
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
    nonResourceURLs: ["*"]

ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRoleBinding
metadata:
  name: admin-access
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
roleRef:
  apiVersion: rbac.authorization.k8s.io/v1alpha1
  kind: ClusterRole
  name: admin-access

Hope that would be helpful.

@erictune Maybe it makes sense to add this step to the docs as an example; people will need add-ons like SkyDNS anyway and might run into the same issues.

@dhawal55
Contributor

dhawal55 commented Sep 9, 2016

A RoleBinding gives you permission only within the specified namespace. So when you had a RoleBinding, it gave * access to resources and non-resources in the kube-system namespace. Since non-resources do not belong to any namespace, you didn't get any access to them. ClusterRoleBindings are not limited to namespaces, so access to nonResourceURLs should be granted via a ClusterRoleBinding. If you have multiple namespaces, it's better to have a RoleBinding that grants admin access to service accounts and users within each namespace, and a ClusterRoleBinding that grants access to all nonResourceURLs to all service accounts and users.


@kevin-wangzefeng
Member

/cc @kubernetes/huawei

k8s-github-robot pushed a commit that referenced this issue Sep 29, 2016
Automatic merge from submit-queue

Allow anonymous API server access, decorate authenticated users with system:authenticated group

When writing authorization policy, it is often necessary to allow certain actions to any authenticated user. For example, creating a service or configmap, and granting read access to all users

It is also frequently necessary to allow actions to any unauthenticated user. For example, fetching discovery APIs might be part of an authentication process, and therefore need to be able to be read without access to authentication credentials.

This PR:
* Adds an option to allow anonymous requests to the secured API port. If enabled, requests to the secure port that are not rejected by other configured authentication methods are treated as anonymous requests, and given a username of `system:anonymous` and a group of `system:unauthenticated`. Note: this should only be used with an `--authorization-mode` other than `AlwaysAllow`
* Decorates user.Info returned from configured authenticators with the group `system:authenticated`.

This is related to defining a default set of roles and bindings for RBAC (kubernetes/enhancements#2). The bootstrap policy should allow all users (anonymous or authenticated) to request the discovery APIs.

```release-note
kube-apiserver learned the '--anonymous-auth' flag, which defaults to true. When enabled, requests to the secure port that are not rejected by other configured authentication methods are treated as anonymous requests, and given a username of 'system:anonymous' and a group of 'system:unauthenticated'. 

Authenticated users are decorated with a 'system:authenticated' group.

NOTE: anonymous access is enabled by default. If you rely on authentication alone to authorize access, change to use an authorization mode other than AlwaysAllow, or set '--anonymous-auth=false'.
```

c.f. #29177 (comment)
@adam-power

I'm experiencing this same issue, but with ABAC rather than RBAC. Was there ever a definitive solution to this problem? Here is the output of my kubedns logs:

I1203 07:12:01.267095       1 dns.go:172] Ignoring error while waiting for service default/kubernetes: the server does not allow access to the requested resource (get services kubernetes). Sleeping 1s before retrying.
E1203 07:12:01.267138       1 reflector.go:214] pkg/dns/dns.go:155: Failed to list *api.Service: the server does not allow access to the requested resource (get services)
E1203 07:12:01.267194       1 reflector.go:214] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server does not allow access to the requested resource (get endpoints)
E1203 07:12:02.269081       1 reflector.go:214] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server does not allow access to the requested resource (get endpoints)
I1203 07:12:02.269085       1 dns.go:172] Ignoring error while waiting for service default/kubernetes: the server does not allow access to the requested resource (get services kubernetes). Sleeping 1s before retrying.

And here's what my ABAC file looks like:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"*", "nonResourcePath": "*", "readonly": true }}
...
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group":"system:serviceaccounts", "namespace": "*", "resource": "*", "apiGroup":"*" }}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group":"system:serviceaccounts", "namespace": "kube-system", "resource": "*", "apiGroup":"*" }}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:serviceaccount:kube-system:default", "namespace": "*", "resource": "*", "apiGroup":"*" }}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"system:serviceaccount:kube-system:default", "namespace": "kube-system", "resource": "*", "apiGroup":"*" }}
...

As you can see, I've tried a few different combinations in there.

I'm also using a basic-auth file and certificates; both of those methods seem to be working fine. However, I can't figure out how to authorize the default service account for the kube-system namespace. I've tried deleting and recreating the secret, as was mentioned above. Is there something that I need to change in my ABAC file?

@evie404
Contributor

evie404 commented Mar 9, 2017

RBAC in 1.6 will include default roles and bindings for components!

staging doc link: https://kubernetes-io-vnext-staging.netlify.com/docs/admin/authorization/rbac/#default-clusterroles-and-clusterrolebindings

@liggitt
Member

liggitt commented Apr 5, 2017

default roles and bindings are documented at https://kubernetes.io/docs/admin/authorization/rbac/#default-roles-and-role-bindings

deployments can set up additional bindings if desired, but these form the baseline

@liggitt liggitt closed this as completed Apr 5, 2017
perotinus pushed a commit to kubernetes-retired/cluster-registry that referenced this issue Sep 2, 2017