Adding oauth2-proxy as optional alternative to oidc-authservice #2409
/priority p1

@jbottum: In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hey @axel7083, nice work exposing all this context! I'd split this into the following distinct efforts:
Let's focus this effort on the first one for now: how to mimic AuthService's OIDC Client functionality with oauth2-proxy, and continue the discussion about Programmatic Clients and Group Info in distinct issues. So let's remove authservice and just keep oauth2-proxy in this PR. Indeed, we'll also need to handle the logout URL in the CentralDashboard after this change.
@kimwnasptd I removed the dependencies on oidc-authservice.

**Some information**: I tested it on a minikube cluster. I only used a port-forward during the test. To make it work behind a real domain, the following elements should be modified:
Thanks for your work @axel7083!
Some high level comments:
- Let's rename the folder to `common/oauth2-proxy` to be super clear to users about the component
- Let's not remove the `oidc-authservice` yet, but leave them both for this release with authservice as the default
**README.md** (outdated)

```sh
kustomize build common/oidc-authservice/base | kubectl apply -f -
kustomize build common/auth-proxy/overlays/oauth-proxy | kubectl apply -f -
```
Let's leave the oidc-authservice there by default for this release, and we can make the migration to only oauth2-proxy in the next release. But we could do something like this there:

```sh
kustomize build common/oidc-authservice/base | kubectl apply -f -
# kustomize build common/auth-proxy/overlays/oauth-proxy | kubectl apply -f -
```
**example/kustomization.yaml** (outdated)

```diff
@@ -39,8 +39,8 @@ resources:
 - ../common/istio-1-16/istio-crds/base
 - ../common/istio-1-16/istio-namespace/base
 - ../common/istio-1-16/istio-install/base
 # OIDC Authservice
 - ../common/oidc-authservice/base
 # oauth
```
as mentioned in a comment above, let's keep authservice as the default just for this release
Oh, and the most important one: don't forget to have an OWNERS file with yourself as an approver for the new `common/oauth2-proxy` directory.
@axel7083 the default way is currently not username and password but the oidc-authservice ServiceAccount token method. Since Kubeflow 1.7 this is officially supported. You just have to send a header with the ServiceAccount/bearer token and you are authenticated. I can provide a one-liner to do this, if needed.
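For reference, a sketch of what such a one-liner could look like — the host, namespace, and ServiceAccount name are placeholders, not values from this PR:

```sh
# placeholders: adjust the host, namespace, and ServiceAccount to your cluster
curl -s "https://kubeflow.example.com/pipeline/apis/v1beta1/pipelines" \
  -H "Authorization: Bearer $(kubectl -n kubeflow-user-example-com create token default-editor)"
```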
Thanks, but I already know how to use the ServiceAccount token. The need for oauth2-proxy goes beyond that.

Hi @axel7083, can you follow the steps to join the Kubeflow org? You can't be added to the OWNERS file without being an org member. If you need a sponsor, happy to do it.

/verify-owners
@axel7083 thanks for the great work! I'll take a final look in the next few days. @axel7083 would you be available to join the next Manifests WG call on the 7th of September? You can subscribe to the community calendar to view these events: https://www.kubeflow.org/docs/about/community/#kubeflow-community-call

cc @annajung @juliusvonkohout to also take a peek
Hey @axel7083 - have you seen this issue? Looks like oauth2-proxy might not actually support multiple providers yet, even though the config lets you define multiple providers. I don't think this needs to be a blocker for this PR - oauth2-proxy still provides a path for better programmatic access from outside the cluster and for exposing group info, so this change has huge value.
Hey! Yes, I already knew; I noticed it later while conducting some testing and forgot to mention it here, so thank you for the reminder! The PR for the multi-tenancy feature is open and being discussed (a bit slowly, since the owner of oauth2-proxy manages it in their free time): oauth2-proxy/oauth2-proxy#1923. So it's probably something we will see in the future (I really hope so). As I said, and I still agree, the benefits of having oauth2-proxy remain compelling.

I am not sure if I will be available on the 7th of September; my calendar is not well defined and mostly not under my control for the next few weeks. I will add it to my calendar and try to join. That could be a very interesting call.
Hey @axel7083, I see we were working on a very similar contribution in parallel! I also wanted to add support for oauth2-proxy, but with a slightly different configuration. I'd like to describe the differences in the alternative, provide some arguments in favor, and open a discussion if you don't mind. I think there might be some value in combining our work here.

### oauth2-proxy config

I have two points:
https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/overview/

### Istio mesh config vs EnvoyFilter

It is described in the Istio issue "Better External Authorization support" and in the Istio docs on External Authorization that there is a new, improved, and recommended option to configure auth with mesh config using extension providers. I have experience with this exact setup because I implemented it in multiple Kubeflow environments that I administrate and develop. I'll go through the differences:

#### envoyExtAuthzHttp extension instead of EnvoyFilter

To configure the external authorizer with mesh config:

```yaml
extensionProviders:
- name: oauth2-proxy
  envoyExtAuthzHttp:
    service: oauth2-proxy.oauth2-proxy.svc.cluster.local
    port: 8080
    headersToDownstreamOnDeny:
    - content-type
    - set-cookie
    headersToUpstreamOnAllow:
    - authorization
    - path
    - x-auth-request-email
    - x-auth-request-groups
    - x-auth-request-user
    - x-auth-request-user-groups
    includeRequestHeadersInCheck:
    - authorization
    - cookie
```

This configuration is very similar to the EnvoyFilter, but it doesn't add the `kubeflow-userid` header itself. From the end-goal perspective, which is providing the auth headers, it doesn't make a difference whether it's oauth2-proxy or Istio that adds the header, but I think it makes better sense to use Istio for that, because this way we can keep the oauth2-proxy configuration simple and not too Kubeflow specific.

To follow the current convention in the repo of configuring Istio with `meshConfig`:
```yaml
extensionProviders:
- name: oauth2-proxy
  envoyExtAuthzHttp:
    service: oauth2-proxy.oauth2-proxy.svc.cluster.local
    port: 8080
    headersToDownstreamOnDeny:
    - content-type
    - set-cookie
    headersToUpstreamOnAllow:
    - authorization
    - path
    - x-auth-request-email
    - x-auth-request-groups
    - x-auth-request-user
    - x-auth-request-user-groups
    includeRequestHeadersInCheck:
    - authorization
    - cookie
```

I was thinking about what could be the easiest way of providing this configuration to Istio, and I think we could either create a dedicated istio directory with separate manifests, or patch the existing `istio` ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  # Configuration file for the mesh networks to be used by the Split Horizon EDS.
  mesh: |-
    accessLogFile: /dev/stdout
    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
      proxyMetadata: {}
      tracing: {}
    enablePrometheusMerge: true
    rootNamespace: istio-system
    tcpKeepalive:
      interval: 5s
      probes: 3
      time: 10s
    trustDomain: cluster.local
    extensionProviders:
    - name: oauth2-proxy
      envoyExtAuthzHttp:
        service: oauth2-proxy.oauth2-proxy.svc.cluster.local
        port: 8080
        headersToDownstreamOnDeny:
        - content-type
        - set-cookie
        headersToUpstreamOnAllow: # x-auth-* headers are optional if set with RequestAuthentication.spec.jwtRules.outputClaimToHeaders
        - authorization
        - path
        - x-auth-request-email
        - x-auth-request-groups
        - x-auth-request-user
        - x-auth-request-user-groups
        includeRequestHeadersInCheck:
        - authorization
        - cookie
```

#### RequestAuthentication to trust and use JWT

This is how we tell Istio that if a JWT is present in the request and is issued by dex, it should be trusted and its claims made available for use:

```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: dex-jwt
  namespace: istio-system
spec:
  jwtRules:
  - forwardOriginalToken: true
    issuer: http://dex.auth.svc.cluster.local:5556/dex
    outputClaimToHeaders:
    - header: kubeflow-userid
      claim: email
    - header: kubeflow-groups
      claim: groups
```

#### Istio JWT Refresh

From my experience, it's also worth providing this patch to the istiod Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istiod
  namespace: istio-system
spec:
  template:
    spec:
      containers:
      - name: discovery
        env:
        - name: PILOT_JWT_PUB_KEY_REFRESH_INTERVAL
          value: "1m"
```

This is because when we instruct Istio to trust the dex issuer with the `RequestAuthentication` above, istiod fetches and caches dex's public signing keys (JWKS), and by default refreshes them only every 20 minutes. If dex is restarted and generates a new signing key in the meantime, tokens signed with the new key are rejected until istiod refreshes the keys.
If the above situation were to happen, a user would see an error when trying to access the page after authenticating with dex. Clearing the cookies helps. This patch could be part of the Istio configuration.

#### Delegating auth to oauth2-proxy

To delegate the access control to an external authorization system, an instance of `AuthorizationPolicy` with the `CUSTOM` action is required. To configure it for the ingress gateway:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: oauth2-proxy-istio-ingressgateway
  namespace: istio-system
spec:
  action: CUSTOM
  provider:
    name: oauth2-proxy
  selector:
    matchLabels:
      app: istio-ingressgateway
  rules:
  - {}
```

This could be added alongside the oauth2-proxy manifests.

#### Bonus: CloudFlare

If you're running your Kubeflow instance behind CloudFlare, you will probably want to disable auth for some static web page resources like the favicon or assets, as CloudFlare will try to cache them. This configuration can be used in such a scenario:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: oauth2-proxy-istio-ingressgateway
  namespace: istio-system
spec:
  action: CUSTOM
  provider:
    name: oauth2-proxy
  selector:
    matchLabels:
      app: istio-ingressgateway
  rules:
  - to:
    - operation:
        notPaths:
        - /favicon*
        - /webcomponentsjs*
        - /vendor.bundle.js
        - /app.bundle.js
        - /dashboard_lib.bundle.js
        - /assets*
        - /app.css
```

#### Programmatic clients

I understand that support for programmatic clients in this setup is planned as a next step, but since I have this already worked out, I'd like to describe the next steps needed to enable it in a way that complies with the configuration described above. In summary, oauth2-proxy must be configured to skip auth for requests that have verified JWT bearer tokens, and then Istio must be configured to trust this JWT and extract the user information into the headers. This JWT can, but doesn't have to, be issued by the Kubernetes cluster. If it is, tokens can be created using `kubectl create token`.

##### oauth2-proxy

This must be added to the configuration:
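(The exact snippet did not survive the thread formatting; as a sketch using the flags documented by oauth2-proxy, where the issuer and audience values are assumptions for a typical in-cluster issuer:)

```
--skip-jwt-bearer-tokens=true
# format is issuer=audience; the values below are assumptions, adjust per cluster
--extra-jwt-issuers=https://kubernetes.default.svc.cluster.local=https://kubernetes.default.svc.cluster.local
```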
##### Istio

When oauth2-proxy skips the auth, the content of the `authorization` header is passed to Istio, and then we can use a `RequestAuthentication` with `outputClaimToHeaders` to trust this issuer and populate the user headers.
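As a sketch of what that could look like for tokens issued by the cluster itself — the resource name, issuer URL, and claim mapping below are assumptions, since the in-cluster issuer varies per distribution:

```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: k8s-jwt   # hypothetical name
  namespace: istio-system
spec:
  jwtRules:
  - issuer: https://kubernetes.default.svc.cluster.local  # assumption: varies per cluster
    forwardOriginalToken: true
    outputClaimToHeaders:
    - header: kubeflow-userid
      claim: sub  # ServiceAccount tokens carry the identity in "sub", not "email"
```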
This could be added next to the dex `RequestAuthentication`.

##### Usage example

With the config in place, assuming k8s is configured with some IdP, the following commands can be executed to test the m2m flow:

```sh
$ TOKEN="$(kubectl create token default-editor)"
$ curl https://kubeflow.example.com/ -H "Authorization: Bearer $TOKEN"
# web page content is returned instead of an auth redirect
$ curl https://kubeflow.example.com/api/workgroup/env-info -H "Authorization: Bearer $TOKEN"
{"user":"system:serviceaccount:kubeflow-user-example-com:default-editor","platform":{"kubeflowVersion":"unknown","provider":"other://","providerName":"other","logoutUrl":"/logout"},"namespaces":[],"isClusterAdmin":false}
```

**Note**: It's important to understand that if the issuer is served behind […]

Let me know what you think! Also, I have some spare time if you'd like some help on the PR.
Hey @kromanow94! Thanks for reaching out, I really appreciate all the details and work! Let's dive into it.

This PR aims to mimic the behavior of the oidc-authservice. I suspected that using the Istio configuration more deeply could potentially be better, but I had not looked into it, so thank you for this amazing work!

I totally get the point here, and agree.

Thanks for noticing! This can be fixed right away; I will run some tests to ensure everything runs properly.

I really like the approach you are taking, relying more on Istio than on oauth2-proxy (Istio really has better support anyway). As I stated at the beginning of this comment, this PR aims to mimic the existing behavior, offering an alternative. @kimwnasptd, what do you think? This is probably worth a try, but maybe in another PR? He did a great job; not sure if it should be integrated in this PR or later?

**Edit**: I will try it in a cluster next week to see it in more detail :)
Hey @axel7083, thanks for the positive feedback, I'm really glad you like it! :) Sure, let's see what the community thinks. If the decision is to go with a separate PR, I can author it or help in the process, whichever is preferred.
@kromanow94 I really like your approach, and either @axel7083 adds you as a collaborator to his repository or the other way around. This way you can work together, as I often do. You can see this way of collaboration in #2455, for example.

As you can see in #2455, we will have multiple Istio versions for some time, so please use a kustomize component instead of an overlay. This component can probably live in the oauth2-related directory as well.

We must be sure that the JWT issued by Kubernetes is checked for the correct signature, such that you cannot fake it. I assume that is handled automatically, but please confirm. Furthermore, we must somehow automatically import the cluster certificates, whether self-signed or not. As far as I know these certificates are mounted by default into each pod on a normal Kubernetes. Probably you just have to tell Istio to use them as well. I can definitely guide regarding the repository structure and Kubernetes.
@juliusvonkohout thanks :). I assume from your comment that the changes I mentioned above should be a part of this PR.

About the cluster certificates: you're right, it's in the k8s docs that the ServiceAccount admission controller adds a projected volume with the cert: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume. With this we can rely on the certs always being available in the pod.

Also, we can take the issuer url from inside the token value in `/run/secrets/kubernetes.io/serviceaccount/token` (this is always available unless `ServiceAccount.automountServiceAccountToken` is false). Then this could be used in some k8s Job to create an instance of `RequestAuthentication` with the issuer url taken from the token, together with `outputClaimToHeaders`. Using PyJWT:

```python
import jwt

with open("/run/secrets/kubernetes.io/serviceaccount/token", "rb") as token_file:
    token = token_file.read().decode()

# don't verify the signature because of the self-signed issuer cert
issuer = jwt.decode(token, options={"verify_signature": False})["iss"]
```

Both oauth2-proxy and istiod had complaints when the certs were not trusted, and it always rendered the setup unusable, so I think it's safe to assume that the signature is verified. If you need a better argument I can verify additionally.
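As an aside, the same `iss` extraction can be done without PyJWT using only the Python standard library. A minimal sketch — it builds a dummy unsigned token instead of reading the in-cluster file, so it runs anywhere:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying the signature."""
    payload_b64 = token.split(".")[1]
    # restore the padding stripped by base64url encoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# hypothetical unsigned token, for illustration only
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).rstrip(b"=")
payload = base64.urlsafe_b64encode(
    json.dumps({"iss": "https://kubernetes.default.svc.cluster.local"}).encode()
).rstrip(b"=")
token = b".".join([header, payload, b""]).decode()

print(jwt_claims(token)["iss"])  # https://kubernetes.default.svc.cluster.local
```

In an automation Job this avoids shipping an extra dependency just to read one claim.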
It would be best if you could provide a directory path that would suit this. @axel7083, how do you feel about adding me as a collaborator to your repository?
> Also, we can take the issuer url from inside the token value in /run/secrets/kubernetes.io/serviceaccount/token (this is always available unless ServiceAccount.automountServiceAccountToken is false). Then this could be used in some k8s job to create an instance of RequestAuthentication with the issuer url taken from token and outputClaimToHeaders. @juliusvonkohout does that seem like a viable option? Is there another way of automating that you have in mind?

We can use something similar to this to enforce mounting; it is used a lot in KFP.
@juliusvonkohout sure, why not. It was more a question of whether you see having a K8s Job in the manifests as acceptable.
Thanks for the detailed explanations @kromanow94, this is really good technical context and insights! As next steps I want to suggest the following:
Again, the reasoning for the above is to also slightly organize the knowledge and discussion better. Although I'd expect the context to be a little bit intertwined, which we can keep discussing in the calls as well. @axel7083 @kromanow94 WDYT?
Thanks for the reply. Yes, I agree with your opinion on the steps to follow; this PR is getting old and has a lot of elements, discussion, and comments. Merging it and working on a new one is probably better!
Sounds good. FYI, discussion on the Istio Slack about using Kubernetes OIDC: https://istio.slack.com/archives/C3TEGNZ7W/p1693320167853579
@axel7083 thanks again for your time on this, and really looking forward to the next steps! @axel7083 @kromanow94 @juliusvonkohout Let's discuss the future fun items in the new issues :)

/lgtm
[APPROVALNOTIFIER] This PR is **APPROVED**

This pull-request has been approved by: axel7083, kimwnasptd

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
## Description

oauth2-proxy allows having multiple providers[^1] for authorizing incoming requests. It can be used to add additional providers while keeping the default Dex configuration. For example, it can validate or invalidate JWT tokens in the `Authorization` header, extract any claims from them, and map them to upstream headers.

It is possible to mimic the exact behavior of the oidc-authservice, since oauth2-proxy can return a static response[^2] with filtered headers[^3], with a status code depending on the outcome of the authorization.
## Use case

### Easier "personal access token"

Simplifying kubeflow/pipelines access from outside the cluster when the full Kubeflow is deployed. Currently, the official way in the documentation is to use the username/password to create a session[^4], which can then be saved, but the cookies are usually not long-term, so they cannot be used properly in CI/CD pipelines.

Having oauth2-proxy offers alternatives and tools for developers to interact in a simpler way with Kubeflow resources.
### The Istio AuthorizationPolicy has built-in support for claims

Currently, for each Profile, and for a user to access a notebook instance (like a Jupyter notebook), we need to adapt the Istio AuthorizationPolicy. This is not very scalable for a lot of users. In the context of supporting groups: the oidc-authservice provides the `kubeflow-groups` header, which is a string with the user's groups separated by commas. Istio is not capable of parsing that. The current best way to allow groups in the format provided by the oidc-authservice[^5] is to do something hacky like allowing all the possible values for a group.

Let's take an example: `request.header[kubeflow-groups] = group1,admin,group2`. Since we cannot know the order, we have to allow the following values:

- `admin` (if only one group is provided)
- `*,admin` (if the group is at the end)
- `*,admin,*` (if the group is in the middle of others)
- `admin,*` (at the beginning)

However, as shown in their documentation[^6], Istio supports verifying JWT authorization claims. So, having oauth2-proxy as the EnvoyFilter would provide the `kubeflow-groups` header and a valid JWT token that Istio would be capable of parsing to extract claims. The hacky stuff could simply be replaced with the following:
## Possible issues

- Currently the logout URL is hardcoded in the central dashboard and needs to be changed to `/oauth2/logout`. The best solution would be to set the endpoint in a ConfigMap so it can be easily changed. A simple fix for now is to create a `VirtualService` which takes the `/logout` URL and rewrites it to `/oauth2/logout`.
- The default oauth2-proxy configuration sends the JWT token to the client. In this context, performance could be impacted, and it adds a bit of redundancy, since oauth2-proxy would first parse the JWT token, then add the required headers (`kubeflow-userid` and `kubeflow-groups`), then forward the request with the JWT token, and Istio would parse it again. A simple fix could be to change the default oauth2-proxy configuration and add a Redis server for it to cache the token.
) then will forward the request with the jwt token, and istio will parse it again. A simple fix, could be to change the default oauth2-proxy configuration and add a Redis server for it to cache the token.Footnotes
In the alpha-config of oauth2-proxy you can define multiple providers. ↩
See the upstreams-configuration in their documentation. ↩
The
injectResponseHeaders
options allows to extract claims from the token to put them in the response, which can then be forward to the kubeflow services by istio. See the alpha-config for details. ↩Connect the Pipelines SDK to Kubeflow Pipelines ↩
oidc-authservice ↩
Istio authz-jwt ↩