[Bug]: Oauth Proxy failing on some installations #1486

Closed
1 task done
lucferbux opened this issue Jul 7, 2023 · 9 comments
Labels
infrastructure - Anything non feature/* related that improves general working of the Dashboard
kind/bug - Something isn't working
needs-info - Further information is requested from the reporter or from another source
priority/normal - An issue with the product; fix when possible

Comments

@lucferbux
Contributor

Is there an existing issue for this?

  • I have searched the existing issues

Deploy type

Using an OpenDataHub main version (eg. v1.6.0)

Version

1.7.0

Current Behavior

After installing the dashboard through a default installation, trying to access it through OAuth results in this error:

(Screenshot 2023-07-07 at 15:48:02)

Looking at the OAuth proxy's logs, we see this error message:

2023/07/07 13:47:51 provider.go:587: Performing OAuth discovery against https://172.30.0.1/.well-known/oauth-authorization-server
2023/07/07 13:47:51 provider.go:627: 200 GET https://172.30.0.1/.well-known/oauth-authorization-server {
  "issuer": "https://oauth-openshift.apps.ods-qe-04.rhods.ccitredhat.com",
  "authorization_endpoint": "https://oauth-openshift.apps.ods-qe-04.rhods.ccitredhat.com/oauth/authorize",
  "token_endpoint": "https://oauth-openshift.apps.ods-qe-04.rhods.ccitredhat.com/oauth/token",
  "scopes_supported": [
    "user:check-access",
    "user:full",
    "user:info",
    "user:list-projects",
    "user:list-scoped-projects"
  ],
  "response_types_supported": [
    "code",
    "token"
  ],
  "grant_types_supported": [
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}
2023/07/07 13:47:51 oauthproxy.go:656: error redeeming code (client:10.129.2.1:45526): got 400 from "https://oauth-openshift.apps.ods-qe-04.rhods.ccitredhat.com/oauth/token" {"error":"unauthorized_client","error_description":"The client is not authorized to request a token using this method."}
2023/07/07 13:47:51 oauthproxy.go:445: ErrorPage 500 Internal Error Internal Error
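
For reference, a sketch of how such sidecar logs can be pulled (the opendatahub namespace and the oauth-proxy container name are assumptions, not taken from this report):

    # Tail the oauth-proxy sidecar of the dashboard deployment (names assumed)
    oc -n opendatahub logs deployment/odh-dashboard -c oauth-proxy --tail=50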

Expected Behavior

Successfully log in to the dashboard.

Steps To Reproduce

  1. Install the dashboard through its kfdef
  2. Try to log in

Workaround (if any)

No response

What browsers are you seeing the problem on?

No response

Anything else

No response

@lucferbux added the kind/bug, untriaged, and priority/normal labels Jul 7, 2023
@shalberd
Contributor

shalberd commented Jul 7, 2023

interesting, with which version of ose-oauth-proxy sidecar?
Possible indication, in another project and ticket also using ose-oauth:

openshift/oauth-proxy#95 (comment)

They posited there:

"You should make sure that the SA on the deployment and config are set and in agreement

If you leave the deployment spec serviceAccountName out, by default the pod may not be running with the same SA and you would get this error."

Maybe the oauth sidecar pod serviceAccountName and the oauth argument --openshift-service-account do not have the same value?
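
A quick way to compare the two values, sketched here (the opendatahub namespace is an assumption, and the flag may simply be absent):

    # Service account the dashboard pods actually run as
    oc -n opendatahub get deployment odh-dashboard -o jsonpath='{.spec.template.spec.serviceAccountName}'
    # Look for an explicit --openshift-service-account flag in the proxy args, if any
    oc -n opendatahub get deployment odh-dashboard -o yaml | grep -- '--openshift-service-account'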

Mmh, well, the dashboard pod has the correct odh-dashboard serviceAccountName set.

spec:
  replicas: 2
  selector:
    matchLabels:
      app: odh-dashboard
      app.kubernetes.io/part-of: odh-dashboard
      deployment: odh-dashboard
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: odh-dashboard
        app.kubernetes.io/part-of: odh-dashboard
        deployment: odh-dashboard
    spec:
      restartPolicy: Always
      serviceAccountName: odh-dashboard

I don't see that ose-oauth argument named --openshift-service-account here, though. It may not be necessary, as I don't get that error on my cluster with v4.10.

   args:
            - '--https-address=:8443'
            - '--provider=openshift'
            - '--upstream=http://localhost:8080'
            - '--tls-cert=/etc/tls/private/tls.crt'
            - '--tls-key=/etc/tls/private/tls.key'
            - '--client-id=dashboard-oauth-client'
            - '--client-secret-file=/etc/oauth/client/secret'
            - '--scope=user:full'
            - '--cookie-secret-file=/etc/oauth/config/cookie_secret'
            - '--cookie-expire=23h0m0s'
            - '--pass-access-token'
            - >-
              --openshift-delegate-urls={"/": {"resource": "services", "verb":
              "get", "name": "odh-dashboard", "namespace": "$(NAMESPACE)"}}
            - '--skip-auth-regex=^/metrics'

It is probably more permission-related, not an ose-oauth argument issue.
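
One hedged way to sanity-check that hypothesis (the namespace and service account name below are assumptions based on the manifest above): --openshift-delegate-urls performs a SubjectAccessReview for the logged-in user, and the proxy's service account typically needs to be able to create token and access reviews.

    # Can the logged-in user "get" the odh-dashboard service, as the delegate URL requires?
    oc auth can-i get services/odh-dashboard -n opendatahub
    # Can the proxy's service account create the reviews it relies on? (requires impersonation rights)
    oc auth can-i create tokenreviews.authentication.k8s.io --as=system:serviceaccount:opendatahub:odh-dashboard
    oc auth can-i create subjectaccessreviews.authorization.k8s.io --as=system:serviceaccount:opendatahub:odh-dashboard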

@lucferbux
Contributor Author

interesting, with which version of ose-oauth-proxy sidecar?

It's on v4.8. I've tried to update it to v4.10, but it's failing too; I'm checking whether all the permissions are OK.

@lucferbux
Contributor Author

lucferbux commented Jul 10, 2023

Ok, thanks to @cam-garrison and @bartoszmajsak we know why this is happening:

  1. The secret in dashboard-oauth-client-generated, which is generated from the secret dashboard-oauth-client, doesn't match the one in the OAuthClient dashboard-oauth-client.

  2. You can check this by first getting the secret that the OAuth client expects:

    oc get oauthclient.oauth.openshift.io dashboard-oauth-client -o json | jq .secret

  3. Now, get the generated secret:

    oc get secret dashboard-oauth-client-generated --template={{.data.secret}} | base64 -D

If you are hitting this bug, these two values will not match.

If you change the secret field in the OAuthClient to the decoded value from dashboard-oauth-client-generated, it should work as expected.

We've only found what's happening here, not the root cause; I think there's something wrong with the process that generates the secret.
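
A minimal sketch of that manual fix, under the same assumptions as the commands above (current project, default resource names; base64 uses -d on Linux, -D on macOS):

    GENERATED_SECRET=$(oc get secret dashboard-oauth-client-generated -o jsonpath='{.data.secret}' | base64 -d)
    # Copy the generated value into the OAuthClient so the two secrets match again
    oc patch oauthclient.oauth.openshift.io dashboard-oauth-client --type merge -p "{\"secret\": \"${GENERATED_SECRET}\"}"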

@pnaik1 added the infrastructure and priority/high labels and removed the untriaged and priority/normal labels Jul 10, 2023
@pnaik1 pnaik1 changed the title [Bug]: Oauht Proxy failing on some installations [Bug]: Oauth Proxy failing on some installations Jul 10, 2023
@lucferbux added the needs-info and priority/normal labels and removed the priority/high label Jul 10, 2023
@lucferbux
Contributor Author

lucferbux commented Jul 10, 2023

@VaishnaviHire @zdtsw @etirelli I think this is related to the secret generator. I'm not able to reproduce it every time, but I've seen it happen a few times already; we might want to take a look.

@lucferbux
Contributor Author

cc @andrewballantyne

@lucferbux
Contributor Author

Based on an internal conversation with @VaishnaviHire, it seems this is not related to any issue in our deployment; it might be some misconfiguration on the clusters.
Closing this issue.

@andrewballantyne
Member

@asanzgom

Reopening the issue as the bug is still reproducible on v2.4 RC2:

https://redhat-internal.slack.com/archives/C05NXTEHLGY/p1700135335630479?thread_ts=1700121604.676009&cid=C05NXTEHLGY

@andrewballantyne
Member

@asanzgom This issue should remain closed -- all conversations should be taken to opendatahub-io/opendatahub-operator#712

This issue is more than likely unrelated to our repo -- but if it turns out we need to fix something, we can log a new issue and address that as-is. This issue was closed because it was not an issue in our repo.

Changing this from closed as completed to closed as not planned.

@andrewballantyne closed this as not planned Nov 16, 2023