Mistaken auth header name #27

Closed
Fly-Luck opened this issue Jun 6, 2019 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

Fly-Luck commented Jun 6, 2019

Problem

Even though correct ServiceAccount access was established (ServiceAccount + ClusterRoleBinding + ClusterRole, or ServiceAccount + RoleBinding + Role) and the pod referenced the serviceAccountName, the pod still cannot call the Kubernetes API from inside the cluster (403 Forbidden from the API server), e.g.:

{"level":"error","msg":"Status: 403 Forbidden, Body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"notebooks.kubeflow.org is forbidden: User \\\"system:anonymous\\\" cannot list resource \\\"notebooks\\\" in API group \\\"kubeflow.org\\\" in the namespace \\\"default\\\"\",\"reason\":\"Forbidden\",\"details\":{\"group\":\"kubeflow.org\",\"kind\":\"notebooks\"},\"code\":403}\n","time":"2019-06-06T02:57:08Z"}

Code to call the k8s api:

    cfg, err := config.InClusterConfig()
    if err != nil {
        panic(err.Error())
    }
    k8sClient = client.NewAPIClient(cfg)
    // the code to list a CustomResource object
    // ...
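
To confirm where the token ends up, it helps to dump cfg.DefaultHeader right after loading the in-cluster config. A minimal sketch, assuming the kubernetes-client/go import paths (do not log real tokens outside of debugging):

    package main

    import (
        "fmt"

        "github.com/kubernetes-client/go/kubernetes/config"
    )

    func main() {
        cfg, err := config.InClusterConfig()
        if err != nil {
            panic(err.Error())
        }
        // Print every default header the generated client will attach.
        // With the behavior described above, the bearer token shows up
        // under "Authentication" instead of "Authorization".
        for name, value := range cfg.DefaultHeader {
            fmt.Printf("%s: %s\n", name, value)
        }
    }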

Temporary Fix

Simple: I just replaced the original header name with the correct one (Authentication->Authorization) and the API call succeeded. Because the token was sent under the wrong header, the API server ignored it and treated the request as system:anonymous; re-registering the same token under Authorization resolves the 403:

    MistakenTokenHeader := "Authentication"
    CorrectTokenHeader := "Authorization"
    TokenValuePrefix := "Bearer "
    SATokenPath := "/var/run/secrets/kubernetes.io/serviceaccount/token"

    cfg, err := config.InClusterConfig()
    if err != nil {
        panic(err.Error())
    }
    // Grab the token that InClusterConfig stored under the wrong header name.
    tokenValue := cfg.DefaultHeader[MistakenTokenHeader]
    // This should never happen; just in case, fall back to the token on disk.
    if len(tokenValue) == 0 {
        tv, err := ioutil.ReadFile(SATokenPath)
        if err != nil {
            panic(err.Error())
        }
        tokenValue = TokenValuePrefix + string(tv)
    }
    // Re-register the token under the header the API server actually reads.
    defaultHeader := map[string]string{CorrectTokenHeader: tokenValue}
    cfg.DefaultHeader = defaultHeader
    k8sClient = client.NewAPIClient(cfg)
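
If it helps, the same workaround can be packaged as a small helper that returns an error instead of panicking. A sketch under the same import-path assumptions; the function name newInClusterAPIClient is mine, not part of the library:

    package k8sutil // hypothetical package name

    import (
        "io/ioutil"

        "github.com/kubernetes-client/go/kubernetes/client"
        "github.com/kubernetes-client/go/kubernetes/config"
    )

    // newInClusterAPIClient builds an API client whose ServiceAccount token
    // is sent under the Authorization header, working around the mistaken
    // Authentication header described above.
    func newInClusterAPIClient() (*client.APIClient, error) {
        cfg, err := config.InClusterConfig()
        if err != nil {
            return nil, err
        }
        tokenValue := cfg.DefaultHeader["Authentication"]
        if tokenValue == "" {
            // Fall back to reading the ServiceAccount token from disk.
            tv, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
            if err != nil {
                return nil, err
            }
            tokenValue = "Bearer " + string(tv)
        }
        // Re-register the token under the header the API server reads.
        cfg.DefaultHeader = map[string]string{"Authorization": tokenValue}
        return client.NewAPIClient(cfg), nil
    }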

Question

Is there anything wrong with my client code, or is this an issue in kubernetes-client?

rsreeni commented Aug 12, 2019

I am getting the same error. Unfortunately, even the workaround provided above doesn't seem to work for me.

Error Reading abc-svc-config-map ConfigMap Status: 403 Forbidden, Body: {"kind":"Status","apiVersion":"v1", "metadata":{},"status":"Failure","message":"configmaps \"abc-svc-config-map\" is forbidden: User \"system:serviceaccount:test:api-access-sa\" cannot get resource \"configmaps\" in API group \"\" at the cluster scope","reason":"Forbidden","details":{"name":"abc-svc-config-map","kind":"configmaps"},"code":403}

Am I missing something? Is there a fix in progress?

Thanks,

-SR

rsreeni commented Aug 13, 2019

I found out why the above workaround was not working: the ClusterRoleBinding did not reference the ServiceAccount in the namespace the pod was running in. Now that I have added it, the workaround works and I am able to reach the API server.
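
For anyone hitting the same thing: the ClusterRoleBinding subject has to name the ServiceAccount together with the namespace the pod runs in. A minimal sketch using the ServiceAccount name and namespace from the error message above (the binding name and the ClusterRole name configmap-reader are illustrative, not from an actual manifest):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: api-access-sa-binding   # illustrative name
    subjects:
    - kind: ServiceAccount
      name: api-access-sa           # from the error message
      namespace: test               # the namespace the pod runs in
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: configmap-reader        # illustrative; must allow "get" on configmaps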

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label (denotes an issue or PR has remained open with no activity and has become stale) on Nov 11, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Dec 11, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
