User "system:anonymous" cannot proxy services in the namespace "default". #39722

Closed
foxish opened this Issue Jan 11, 2017 · 9 comments


@foxish
Member
foxish commented Jan 11, 2017

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): User "system:anonymous" cannot proxy services in the namespace "default".

Kubernetes version (use kubectl version):

~ kubectl version --short
Client Version: v1.6.0-alpha.0.2996+add3a08a6d3648
Server Version: v1.6.0-alpha.0.2996+add3a08a6d3648

Environment:

  • Cloud provider or hardware configuration: GCE

What happened:
Unable to access services via the proxy endpoint.

What you expected to happen:
The service to be accessible via /api/v1/proxy/...

How to reproduce it (as minimally and precisely as possible):

  • Check out sources from HEAD
  • make quick-release
  • cluster/kube-up.sh

Cannot access any service via the proxy (kubectl cluster-info lists the relevant service URLs). For example:

~ curl -k https://<server>/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
User "system:anonymous" cannot proxy services in the namespace "kube-system".

Anything else we need to know:
This is a regression from 1.5.1.

/cc @kubernetes/sig-auth-misc @kubernetes/sig-network-misc

@foxish foxish added the sig/auth label Jan 11, 2017
@deads2k
Contributor
deads2k commented Jan 11, 2017

I think kube-up.sh now enables RBAC by default. That means that, by default, your cluster no longer allows unprivileged users full access. There are some doc pulls in progress to ease the transition: kubernetes/kubernetes.github.io#2169 and kubernetes/kubernetes.github.io#1858.

Using the insecure port locally, you should be able to add bindings by following the instructions described here: https://github.com/kubernetes/kubernetes.github.io/pull/2169/files#diff-48e69c0b942ef9dcc93b90046d09f9e6R488
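
For illustration, the kind of permissive binding that doc describes could be created against the locally exposed insecure port roughly like this (the binding name and subjects below are illustrative; see the linked doc for the exact steps):

# Run on the master against the insecure local port (8080 by default).
# "permissive-binding" and the listed subjects are examples, not requirements.
kubectl --server=http://localhost:8080 create clusterrolebinding permissive-binding \
  --clusterrole=cluster-admin \
  --user=admin \
  --user=kubelet \
  --group=system:serviceaccounts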

@cjcullen
Member

I think we need to dig deeper into our 1.5->1.6 permissions story.

I agree that the vast permissions granted by ABAC are terrible and we should do whatever we can to push 1.6 users to set up sane permissions w/ RBAC.

But...
We shouldn't break existing functionality on an upgrade to 1.6 (even if that functionality is something awful like "I need some random service account to be able to exec into system pods"). This could either be a bootstrap one-shot on upgrade to mimic ABAC permissions w/ RBAC bindings (hard, not always possible) or a flag that allows ABAC to stay on (easy, but kinda disappointing).
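
(Just to sketch what "a flag that allows ABAC to stay on" might look like in practice: the apiserver already accepts multiple authorizers, so something along these lines should keep an existing ABAC policy in effect alongside RBAC; the policy file path below is a placeholder.)

# Sketch only: a request allowed by either authorizer is admitted.
# /etc/kubernetes/abac-policy.jsonl is a placeholder for an existing ABAC policy file.
kube-apiserver --authorization-mode=RBAC,ABAC \
  --authorization-policy-file=/etc/kubernetes/abac-policy.jsonl \
  ... (other flags unchanged)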

We also should (maybe) provide an option to create new clusters with the pre-1.6 permission model. Many people have workflows of "Create a cluster at whatever version is released, do a bunch of stuff, tear the cluster down." It'd be nice to give them a clear path forward. That might be "figure out the permissions you actually need and add the bindings in your CI script", or it could just be "set this hacky environment variable to keep ABAC until you figure out the right way."

It looks like https://github.com/kubernetes/kubernetes/pull/39537/files may be a step towards providing a smoother transition, so I'm guessing @deads2k and @liggitt and others have probably put some thought into this already and I'm just catching up.

@liggitt
Member
liggitt commented Jan 11, 2017

We shouldn't break existing functionality on an upgrade to 1.6

For existing clusters moving to 1.6, allowing them to continue to use an existing ABAC policy seems reasonable.

Also, having RBAC policy that matches the legacy ABAC policy, but is not loaded by default in new clusters makes sense to me. If someone wants the old unrestricted behavior, we can make that possible, but we want all our CI/E2E tests exercising controllers and components using the scoped roles to make sure they are working correctly.

All that said, I don't think this particular issue is related to RBAC... ABAC wasn't set up to give anonymous users proxy access by default.

@cjcullen
Member

Why would kubectl be authenticating as system:anonymous here? That shouldn't have worked before, right?

@foxish
Member
foxish commented Jan 11, 2017

It is curl without a bearer token that's authenticating as system:anonymous. As I understand it, this is expected. The difference I guess is that previously, it would challenge and allow basic auth. Now, trying to access that same kubernetes-dashboard URL lands me on a 403 error page.
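
(To make the distinction concrete: the same request made with credentials would not go down the anonymous path. TOKEN and USER:PASSWORD below are placeholders for whatever the cluster is actually configured with.)

# Placeholder credentials; otherwise identical to the failing anonymous request.
curl -k -H "Authorization: Bearer $TOKEN" https://<server>/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
# or, if basic auth is enabled on the apiserver:
curl -k -u "$USER:$PASSWORD" https://<server>/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard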

@liggitt
Member
liggitt commented Jan 11, 2017

As I understand it, this is expected. The difference I guess is that previously, it would challenge and allow basic auth.

Ah. If you want unauthenticated users to fail authentication (and get a basic-auth prompt if you have password auth turned on) instead of proceeding as anonymous users, start your apiserver with --anonymous-auth=false.
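
(In other words, roughly a one-flag change on the apiserver command line; with it set, unauthenticated requests get a 401 challenge rather than being treated as system:anonymous.)

kube-apiserver --anonymous-auth=false ... (other flags unchanged)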

@liggitt liggitt closed this Jan 17, 2017
@gnufied
Member
gnufied commented Jan 18, 2017

The user experience is still weird here, IMO. I brought up a cluster via ./kube-up.sh and it printed the Grafana and Heapster URLs, but there is little or no documentation on how to access them.

Just clicking those URLs presents me with the error: "User "system:anonymous" cannot proxy services in the namespace "default"."

If Kubernetes knows that the Grafana and Heapster URLs it prints at the end of kube-up.sh are not accessible, it should at the very minimum point me to instructions on how to open them.

I think we should keep this bug open.

@foxish
Member
foxish commented Jan 18, 2017

From a UX perspective, I think that instead of printing those URLs, we should recommend running kubectl proxy and accessing those endpoints through the local proxy.
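
For example, something along these lines (8001 is kubectl proxy's default port; the dashboard URL is just one of the service URLs printed by kubectl cluster-info):

# The proxy authenticates to the apiserver with your kubeconfig credentials,
# so requests through it are no longer made as system:anonymous.
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard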
