
[bug] Jwt issuer is not configured #2840

Closed
jaffe-fly opened this issue Jul 22, 2024 · 10 comments

jaffe-fly commented Jul 22, 2024

Environment

k8s version v1.29.5

  • How do you deploy Kubeflow Pipelines (KFP)?
    deployed with the kubeflow/manifests repo, master branch
    git log -1
commit 473b1035304f847063ecaf0a44686182c437db64 (HEAD -> master, origin/v1.9-branch, origin/v1.10-branch, origin/master, origin/HEAD)
Author: Krzysztof Romanowski <krzysztof.romanowski94@gmail.com>
Date:   Thu Jul 11 14:10:21 2024 +0200

    Fix ml pipeline access from kfp step (#2795)

    * fail gh action if pipeline failed in .github/workflows/pipeline_test.yaml

    Signed-off-by: Krzysztof Romanowski <krzysztof.romanowski.kr1@roche.com>

    * allow access to ml-pipeline when using trusted requestPrincipal or doesn't have auth header

    Signed-off-by: Krzysztof Romanowski <krzysztof.romanowski.kr1@roche.com>

    * add more triggers for the workflow

    Signed-off-by: juliusvonkohout <45896133+juliusvonkohout@users.noreply.github.com>

    ---------

    Signed-off-by: Krzysztof Romanowski <krzysztof.romanowski.kr1@roche.com>
    Signed-off-by: juliusvonkohout <45896133+juliusvonkohout@users.noreply.github.com>
    Co-authored-by: Krzysztof Romanowski <krzysztof.romanowski.kr1@roche.com>
    Co-authored-by: juliusvonkohout <45896133+juliusvonkohout@users.noreply.github.com>
  • KFP version:
    2.2.0
  • KFP SDK version:
    2.7.0

Steps to reproduce

I followed this guide: https://www.kubeflow.org/docs/components/pipelines/user-guides/core-functions/connect-api/#full-kubeflow-subfrom-inside-clustersub

In the my-profile namespace of Kubeflow, the notebook was created with the pipeline access token configuration added.
The code is:

from kfp import dsl
from kfp.client import Client

client = Client()
print(client.list_experiments(namespace="my-profile"))

This produces the following error:

kfp_server_api.exceptions.ApiException: (401)
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'www-authenticate': 'Bearer realm="http://ml-pipeline.kubeflow.svc.cluster.local:8888/apis/v2beta1/healthz", error="invalid_token"', 'content-length': '28', 'content-type': 'text/plain', 'date': 'Sat, 13 Jul 2024 15:34:54 GMT', 'server': 'envoy', 'x-envoy-upstream-service-time': '1'})
HTTP response body: Jwt issuer is not configured

My PodDefault is:

apiVersion: kubeflow.org/v1alpha1
kind: PodDefault
metadata:
  name: access-ml-pipeline
  namespace: my-profile
spec:
  desc: Allow access to Kubeflow Pipelines
  selector:
    matchLabels:
      access-ml-pipeline: "true"
  env:
    - ## this environment variable is automatically read by `kfp.Client()`
      ## this is the default value, but we show it here for clarity
      name: KF_PIPELINES_SA_TOKEN_PATH
      value: /var/run/secrets/kubeflow/pipelines/token
  volumes:
    - name: volume-kf-pipeline-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 7200
              ## defined by the `TOKEN_REVIEW_AUDIENCE` environment variable on the `ml-pipeline` deployment
              audience: pipelines.kubeflow.org
  volumeMounts:
    - mountPath: /var/run/secrets/kubeflow/pipelines
      name: volume-kf-pipeline-token
      readOnly: true

My RoleBinding is:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-my-profile-kubeflow-edit
  ## this RoleBinding is in `namespace-1`, because it grants access to `namespace-1`
  namespace: kubeflow
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubeflow-edit
subjects:
  - kind: ServiceAccount
    name: default-editor
    ## the ServiceAccount lives in `namespace-2`
    namespace: my-profile

Expected result

Materials and reference


@jaffe-fly
Author

From my testing, in a notebook in the my-profile namespace, this:

from kfp import dsl
from kfp.client import Client
import kfp

# read the projected ServiceAccount token
token = ""
filename = "/run/secrets/kubeflow/pipelines/token"
with open(filename, 'r') as file:
    token = file.read().rstrip()

# pass the token to the client explicitly
client = Client(host="http://ml-pipeline.kubeflow.svc:8888", existing_token=token)
print(client.get_kfp_healthz())

fails with:

kfp_server_api.exceptions.ApiException: (401)
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'www-authenticate': 'Bearer realm="http://ml-pipeline.kubeflow.svc:8888/apis/v2beta1/healthz", error="invalid_token"', 'content-length': '28', 'content-type': 'text/plain', 'date': 'Sun, 28 Jul 2024 04:26:49 GMT', 'server': 'envoy', 'x-envoy-upstream-service-time': '0'})
HTTP response body: Jwt issuer is not configured

However, this:

client = Client(host="http://ml-pipeline.kubeflow.svc:8888")
print(client.get_kfp_healthz())
# print(client.list_experiments())
our_namespace = client.get_user_namespace()
print(our_namespace)

works fine and prints:

{'multi_user': True}
my-profile

But this:

client = Client(host="http://ml-pipeline.kubeflow.svc:8888")
print(client.get_kfp_healthz())
print(client.list_experiments())

fails with the `Jwt issuer is not configured` error.

And this:

client = Client()
print(client.get_kfp_healthz())

also fails with the `Jwt issuer is not configured` error.

I don't know why.

@kimwnasptd
Member

@jaffe-fly could you provide some more information about:

  1. how you installed Kubeflow
  2. whether you are using any of the oauth2-proxy components from the upstream manifests: https://github.com/kubeflow/manifests/tree/master/common/oidc-client/oauth2-proxy/components

My hunch is that, since in 1.9 Istio must be able to parse the JWT tokens in Authorization: Bearer <> headers, you are getting this error because you don't have a RequestAuthentication object in your cluster telling Istio how to parse JWTs issued by Kubernetes:
https://github.com/kubeflow/manifests/blob/v1.9.0/apps/pipeline/upstream/base/installs/multi-user/istio-authorization-config.yaml#L36-L38
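
For context, a minimal sketch of such a RequestAuthentication (the name is hypothetical and the issuer is an assumption; when `jwksUri`/`jwks` are omitted, Istio resolves the keys from the issuer's OpenID discovery document, which must then be reachable by istiod):

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: m2m-token-issuer-sketch   ## hypothetical name, for illustration only
  namespace: istio-system
spec:
  jwtRules:
    - ## assumed issuer; check yours with: kubectl get --raw /.well-known/openid-configuration
      issuer: https://kubernetes.default.svc.cluster.local
      ## optional, but if set it must match the `audience` of the projected token in the PodDefault
      audiences:
        - pipelines.kubeflow.org
      ## forward the token so ml-pipeline can still identify the calling ServiceAccount
      forwardOriginalToken: true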

@jaffe-fly
Author


I installed Kubeflow from [manifests](https://github.com/kubeflow/manifests) using "Install with a single command":

while ! kustomize build example | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 20; done

That install includes the oauth2-proxy component (the `# oauth2-proxy` section of the manifests).

So how should I configure it?

@BreakMode

I am having the same issue even after updating the AuthorizationPolicy manifest.

@juliusvonkohout
Member

/transfer manifests

@google-oss-prow google-oss-prow bot transferred this issue from kubeflow/pipelines Aug 15, 2024
@juliusvonkohout
Member

Cc @kromanow94

@juliusvonkohout
Member

See also #2832

Please try with Kind first, as detailed in the README, and read our internal oauth2-proxy documentation in kubeflow/manifests/common/oauth2-proxy.

@JamesRyanATX

JamesRyanATX commented Aug 20, 2024

@kimwnasptd was correct in my case (an RKE cluster with a non-compliant OIDC setup).

I was able to resolve this by manually adding the JWKS public key for my cluster to the RequestAuthentication manifest for machine-to-machine authentication. The m2m CronJob should normally do this.
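
For anyone hitting the same thing, a rough sketch of that approach (the name, issuer, and key values are placeholders/assumptions; the JWKS JSON can be fetched with `kubectl get --raw /openid/v1/jwks`):

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: m2m-token-issuer-sketch   ## hypothetical name, for illustration only
  namespace: istio-system
spec:
  jwtRules:
    - ## issuer reported by: kubectl get --raw /.well-known/openid-configuration
      issuer: https://kubernetes.default.svc.cluster.local
      ## inline JWKS instead of `jwksUri`, useful when istiod cannot reach the issuer URL;
      ## paste the output of: kubectl get --raw /openid/v1/jwks
      jwks: |
        {"keys":[{"use":"sig","kty":"RSA","kid":"<kid>","alg":"RS256","n":"<modulus>","e":"AQAB"}]}
      forwardOriginalToken: true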

@thesuperzapper
Member

thesuperzapper commented Aug 20, 2024

Hey everyone, I am not sure why a CronJob was ever used for this purpose.

If you want a reliable workaround which uses the JWKS URI of the ClusterAPI directly, please see:

We will implement it in the next patch release.
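
The workaround referenced above is not quoted here, but the general shape of a jwtRule pointing at the cluster API's JWKS URI directly would be roughly as follows (values are assumptions; istiod must be allowed to read the discovery endpoints, which on some clusters requires binding the `system:service-account-issuer-discovery` ClusterRole to unauthenticated users):

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: m2m-token-issuer-sketch   ## hypothetical name, for illustration only
  namespace: istio-system
spec:
  jwtRules:
    - issuer: https://kubernetes.default.svc.cluster.local   ## assumed; check your cluster's issuer
      ## point istiod directly at the kube-apiserver's JWKS endpoint instead of
      ## refreshing a copied public key with a CronJob
      jwksUri: https://kubernetes.default.svc/openid/v1/jwks
      forwardOriginalToken: true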

@juliusvonkohout
Member

Closed in favor of #2850
