
Unauthorized error when creating a job #163

Closed
OberstHorst opened this issue Dec 29, 2017 · 12 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@OberstHorst

Hey,

I want to create a Job using the Java Kubernetes API. My tool runs fine against a minikube cluster, but fails if I run it against a (testing) Kubernetes cluster that was deployed using Ubuntu conjure-up. That cluster has been modified to deny anonymous access and uses RBAC, Node authorization, and basic-auth authentication. I can use kubectl create -f foo.yaml to create a job, but when I try to create a job through the Java API it returns:

io.kubernetes.client.ApiException: Unauthorized
        at io.kubernetes.client.ApiClient.handleResponse(ApiClient.java:882)
        at io.kubernetes.client.ApiClient.execute(ApiClient.java:798)
        at io.kubernetes.client.apis.BatchV1Api.createNamespacedJobWithHttpInfo(BatchV1Api.java:164)
        at io.kubernetes.client.apis.BatchV1Api.createNamespacedJob(BatchV1Api.java:148)
        at de.jlu.bioinfsys.workflowUI.Runner.CommonKubeRunner.startJob(CommonKubeRunner.java:62)
        at de.jlu.bioinfsys.workflowUI.handler.WorkflowUIHandler.startWorkflow(WorkflowUIHandler.java:64)
        at de.jlu.bioinfsys.workflowUI.workflowUIAPI.WorkflowServerAPI$Processor$startWorkflow.getResult(WorkflowServerAPI.java:546)
        at de.jlu.bioinfsys.workflowUI.workflowUIAPI.WorkflowServerAPI$Processor$startWorkflow.getResult(WorkflowServerAPI.java:531)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

I tried this with a custom user and the default admin user, with the same result. Both were able to create jobs using kubectl. The config is read correctly when I force parsing with KubeConfig.loadDefaultKubeConfig().
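The foo.yaml referenced above isn't shown in the issue; for anyone trying to reproduce, a minimal Job manifest of the kind kubectl create -f accepts might look roughly like this (all names here are hypothetical, not taken from the report):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job        # hypothetical name
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["echo", "hello"]
      restartPolicy: Never
  backoffLimit: 4
```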

This is how I build my client:

        ApiClient client = null;
        try {
            // Reads ~/.kube/config (or falls back to the default settings)
            client = Config.defaultClient();
        } catch (IOException e) {
            e.printStackTrace();
        }
        Configuration.setDefaultApiClient(client);

        // Uses the default ApiClient set above
        this.batchApiInstance = new BatchV1Api();

Cheers and thanks in advance

@brendandburns
Contributor

Can you print getResponseBody() from the ApiException that gets thrown and paste the contents into this issue?

The error body should have more details about what is going wrong.

Thanks!

@OberstHorst
Author

OberstHorst commented Dec 30, 2017

Here is the Response body:

{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}

Cheers

@brendandburns
Contributor

@OberstHorst Thanks, sadly there's no real information in there after all...

I will try to reproduce this on my local machine and debug from there.

@OberstHorst
Author

OberstHorst commented Jan 1, 2018

@brendandburns I forgot to mention that I am using v1.0.0beta1 from the Maven repository. I was able to dig a bit deeper. If you need some debug output, just let me know. I am currently stepping through the call structure; maybe I'll find something. Cheers and happy new year.

@OberstHorst
Author

OberstHorst commented Jan 2, 2018

I was able to identify the problem: the BatchV1Api class only adds BearerToken authentication to the HTTP header. I worked around it by replacing `String[] localVarAuthNames = new String[] { "BearerToken" };` with `String[] localVarAuthNames = new String[] { "BasicAuth" };`. This is of course not a real fix, only a workaround for me. Maybe this array should be generated based on what is found in the config file?
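For context, at the HTTP level the difference between the two auth names is just which Authorization header the client attaches to each request: BearerToken sends `Authorization: Bearer <token>`, while BasicAuth sends `Authorization: Basic base64(user:password)`. A minimal pure-JDK sketch of the two header values (helper names here are mine, not part of the client library):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AuthHeaders {
    // Value of an HTTP Basic Authorization header: "Basic " + base64(user:password)
    static String basic(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    // Value of an HTTP Bearer Authorization header: "Bearer " + token
    static String bearer(String token) {
        return "Bearer " + token;
    }

    public static void main(String[] args) {
        System.out.println(basic("admin", "secret")); // prints "Basic YWRtaW46c2VjcmV0"
        System.out.println(bearer("my-token"));       // prints "Bearer my-token"
    }
}
```

If the generated API class only ever consults the BearerToken auth entry, the BasicAuth credentials parsed from the kubeconfig never make it into the request, which would explain the 401 even though kubectl (which does send them) succeeds.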

@brendandburns
Contributor

Thanks for digging into this.

Ultimately, I think this is because the Kubernetes OpenAPI spec doesn't define an authMethods section, so the auth we get is just the defaults. It's a little odd that basic auth is present in ApiClient.java but not in any of the other generated classes. Not really sure why...

@mbohlool should we have a security section in our open api spec?
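For readers unfamiliar with what such a section looks like: in a Swagger 2.0 spec, the available auth schemes are declared under securityDefinitions, and the code generator emits per-operation auth from them. A hedged sketch of what that fragment could look like (this is illustrative, not the actual Kubernetes spec):

```yaml
securityDefinitions:
  BearerToken:
    type: apiKey
    name: authorization
    in: header
  BasicAuth:
    type: basic
security:
- BearerToken: []
- BasicAuth: []
```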

@mbohlool
Contributor

mbohlool commented Jan 3, 2018

At the time the OpenAPI spec generator was developed, there was not enough information to add a security section. We can revisit that.

@l15k4

l15k4 commented Nov 12, 2018

I have the same problem:

  • I created a Role with permissions container.jobs.*
  • associated that Role with a service account
  • used that service account's token this way:

    Config.fromToken(
      "https://1.2.3.4",
      "...",
      false
    )

but creating a job returns 401 Unauthorized. My cluster version is 1.11, kubernetes-client 3.0.0.

However, doing the same thing via Config.fromConfig("/home/ubuntu/.kube/config") works.
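A side note on this report: container.jobs.* is a GCP IAM-style permission; inside the cluster, access to Jobs is granted via Kubernetes RBAC. Also, an RBAC denial would normally surface as 403 Forbidden, while 401 Unauthorized suggests the token itself was not accepted. For reference, an in-cluster RBAC grant for Jobs looks roughly like this (all names hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator          # hypothetical
  namespace: default
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-creator-binding  # hypothetical
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-service-account   # hypothetical
  namespace: default
roleRef:
  kind: Role
  name: job-creator
  apiGroup: rbac.authorization.k8s.io
```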

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 27, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 27, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
