cluster-wide oauth-proxy settings #343
Conversation
Sgtm.
Looks good to me, only a couple of questions.
There is one thing we have to consider that is not described here: if I remember correctly, a lot/some of our customers use oauth-proxy for their own applications as well. I am guessing that setting this would override the values their applications set as well?
## Proposal
We will create an oauthproxies.config.openshift.io to ensure that read access can be widely granted without any risk of accidentally allowing reading of more private data (versus attaching to oauth.config.openshift.io for instance).
Would we need something in the console as well for users to set this easily? I am guessing this will be under the "Global Configuration" tab, same as the current oauth.config.openshift.io resource?
cc @jhadvig
The following flags for oauth-proxy are good targets for being consistent for all oauth-proxy instances in the cluster.
If the oauth-proxy does not have one of these flags explicitly set, it will attempt to read the value from oauthproxies.config.openshift.io.
1. -cookie-expire
As far as I understand, this setting is then orthogonal to the global access token expiry as per https://docs.openshift.com/container-platform/4.4/authentication/configuring-internal-oauth.html#oauth-configuring-internal-oauth_configuring-internal-oauth, correct?
Does a potential discrepancy between the cookie session timeout and the access token expiry have any ramifications that consuming applications have to consider?
+1 to making this parameter configurable per oauth-proxy instance. It only affects the cookie lifetime in the browser, and it doesn't necessarily have to be aligned with the validity of the token provided in the CLI or in any other web-based application that also uses oauth-proxy.
I didn't mean to express a wish for per-instance configuration, but rather to ask whether a discrepancy between a global cookie-expire timeout and a global access token expiry is a problem at all.
I see the following cases:
- cookie expiry is shorter than access token expiry: in that case a re-login will be enforced, as the initial (cookie-encrypted) access token is lost.
- cookie expiry is longer than access token expiry: in that case a re-login will also be enforced, as the access token expired.
I.e. from a user perspective the actual "session timeout" is min(cookie expiry, access token expiry), correct?
The enhancement does not block per-instance configuration.

> cookie expiry is shorter than access token expiry: in that case a re-login will be enforced as the initial (cookie-encrypted) access token is lost.

Yes.

> cookie expiry is longer than access token expiry: in that case a re-login will also be enforced as the access token expired.

No. The session does not currently expire when the token does, as far as I am aware.
> No. The session does not currently expire when the token does, as far as I am aware.

This seems counter-intuitive to me; do we want to change the current behaviour to match the suggestion from @s-urbaniak?

> from a user perspective the actual "session timeout" is min(cookie expiry, access token expiry)
I'd still consider some override method per instance so that different instances can have different timeouts without requiring editing each operator or each deployment.
Let's say we could have an override per namespace and/or a matchLabel selector.
What I mean, no matter the implementation, is that it would be desirable to have configurable (and, why not, different) values for the token/cookie timeouts.
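To illustrate the idea (this is purely hypothetical and not part of the proposal; every field name below is an assumption), a scoped override could look roughly like this:

```yaml
# Hypothetical sketch only: a cluster-wide default plus per-namespace overrides
# selected by labels. None of these fields exist in the proposed API.
apiVersion: config.openshift.io/v1
kind: OAuthProxy
metadata:
  name: cluster
spec:
  cookieExpire: 168h0m0s          # cluster-wide default
  overrides:
  - namespaceSelector:
      matchLabels:
        team: monitoring          # only proxies in matching namespaces
    cookieExpire: 4h0m0s          # shorter session for these instances
```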
> This seems counter-intuitive to me; do we want to change the current behaviour to match the suggestion from @s-urbaniak?

Currently not possible without further development investment into the oauth-proxy.

> I'd still consider some override method per instance so that different instances can have different timeouts without requiring editing each operator or each deployment.

Use the options of the binary.
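For example, per-instance settings can already be passed as flags on the oauth-proxy container, and per the proposal explicitly set flags would keep taking precedence over the cluster-wide values (the image reference and flag values below are illustrative):

```yaml
# Pod spec fragment: flags set directly on the oauth-proxy binary win over
# any cluster-wide default this enhancement would introduce.
containers:
- name: oauth-proxy
  image: quay.io/openshift/origin-oauth-proxy:4.4   # illustrative image reference
  args:
  - -provider=openshift
  - -upstream=http://localhost:3000
  - -cookie-expire=4h0m0s      # per-instance override of the cookie lifetime
  - -cookie-refresh=1h0m0s     # per-instance override of the refresh interval
```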
1. -cookie-expire
2. -cookie-refresh
3. logout-url (not yet present). If created, it will have a spec and a status.
If the spec is empty, the status will be filled in based on the console's logout URL.
This implies there's a controller that handles it. Which operator should house it, KAS-o, CAO?
3. logout-url (not yet present). If created, it will have a spec and a status.
If the spec is empty, the status will be filled in based on the console's logout URL.
4. http proxy settings. This will have a spec and a status.
If the values in proxies.config.openshift.io .status do not have credentials in them, we can automatically copy them
This is just about proxy/cluster, right? Not that I would expect more than a single instance of that object in the cluster.
accidentally allowing reading of more private data (versus attaching to oauth.config.openshift.io for instance).
Its .spec stanza contains typed fields for certain flags.
The following flags for oauth-proxy are good targets for being consistent for all oauth-proxy instances in the cluster. |
So if I understand correctly, on our end the only thing we would need to change in cluster-monitoring-operator is to give the service accounts used by the proxy permission to read oauthproxies.config.openshift.io?
> So if I understand correctly, on our end the only thing we would need to change in cluster-monitoring-operator is to give the service accounts used by the proxy permission to read oauthproxies.config.openshift.io?

If we did this, we would grant read access to all service accounts. Will update the proposal.
Just to clarify, this means that no additional permissions should be needed for our service accounts, right?
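Granting read access to all service accounts could be done with a single cluster-scoped binding; a minimal sketch (the role/binding names are assumptions, and oauthproxies.config.openshift.io is the proposed, not-yet-existing resource):

```yaml
# Sketch: let every service account in the cluster read the proposed
# cluster-wide oauth-proxy configuration.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: oauthproxies-reader          # illustrative name
rules:
- apiGroups: ["config.openshift.io"]
  resources: ["oauthproxies"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oauthproxies-reader          # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: oauthproxies-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts       # covers all service accounts
```

With a binding like this in place, individual operators such as cluster-monitoring-operator would indeed not need to grant anything extra themselves.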
### Version Skew Strategy
oauth-proxy users can only rely on this feature in the openshift release it is delivered in.
If they need to work on a prior level of openshift, they will have to set the values on the oauth-proxy binary.
Can you elaborate? Does that mean that the above config custom resource will be backported, but not the auto-detection mechanism in oauth-proxy?
> Can you elaborate? Does that mean that the above config custom resource will be backported, but not the auto-detection mechanism in oauth-proxy?

This means that if a new oauth-proxy (with auto-detection) goes on an old kube-apiserver (without the oauthproxies resource), then the auto-detection will produce a warning message, but continue with only flag-based configuration.
Monitoring is in the payload with kube-apiserver, so it moves in lock-step. OLM-based operators have to handle skew one way or the other.
Will update the doc.
If the values in proxies.config.openshift.io .status do not have credentials in them, we can automatically copy them
to the https proxy status settings for oauthproxies.
If they do have credentials in them, we will not set the value by default.
The spec will always have precedence.
What will happen with proxy CAs? Anything?
General +1 from me on the direction of this proposal.
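For reference, the fields that would be mirrored live in the status of the existing cluster-scoped Proxy object; roughly (values are placeholders):

```yaml
# The existing proxies.config.openshift.io singleton whose status values the
# proposal would copy into the oauth-proxy config when they carry no credentials.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
status:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  noProxy: .cluster.local,.svc,localhost,127.0.0.1
```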
Sorry if this is obvious to others, but I am looking at how optional operators would use this config and I want to see if I understand the UX.
Have we considered giving a way for someone to request an OAuth proxy from openshift that would always conform to this config, be kept up to date, and allow the operator to not need to know about the image?
No, because we already have the oauth imagestreamtag they should be referencing from their deployment. The oauth image is part of the OCP release payload, so it's always mirrored; as long as they reference it via the imagestreamtag, they'll get the image from the payload.
Not sure what you mean by "conform to the config"? It's a global config: if the operator provides no config to the oauth-proxy sidecar, it'll get the global config. If they provide their own config (via args), it'll override the cluster/global config.
I don't know if it needs to be tested on a per-operator basis... yes, the auth team would need to have tests to prove the oauth-proxy image respects the global config.
We can't stop someone from referencing and deploying any image they want.
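For illustration, referencing the payload image via the imagestreamtag typically relies on an image trigger annotation; a rough sketch (the imagestream name/tag and the annotation details shown here are assumptions to verify against the actual cluster):

```yaml
# Sketch: let the image trigger controller resolve the oauth-proxy image from an
# imagestreamtag so the operator never hard-codes the image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-operand                     # hypothetical name
  annotations:
    image.openshift.io/triggers: |-
      [{"from":{"kind":"ImageStreamTag","namespace":"openshift","name":"oauth-proxy:v4.4"},
        "fieldPath":"spec.template.spec.containers[?(@.name==\"oauth-proxy\")].image"}]
spec:
  selector:
    matchLabels:
      app: example-operand
  template:
    metadata:
      labels:
        app: example-operand
    spec:
      containers:
      - name: oauth-proxy
        image: " "                          # filled in by the trigger controller
```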
Sorry, I missed that part; I thought the operator had to look at the config.
Will this allow the ciphers to be customized, or to match the cipher config of the OCP ingress controller?
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closed this PR.
[APPROVALNOTIFIER] This PR is NOT APPROVED.
/lifecycle frozen
This proposal is over a year old. As part of recent efforts to clean up old pull requests, I am removing the lifecycle/frozen label to allow it to age out and be closed. If the proposal is still active, please restore the label. /remove-lifecycle frozen
Inactive enhancement proposals go stale after 28d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Mark the proposal as fresh by commenting /remove-lifecycle stale. If this proposal is safe to close now please do so with /close. /lifecycle stale
Stale enhancement proposals rot after 7d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Mark the proposal as fresh by commenting /remove-lifecycle rotten. If this proposal is safe to close now please do so with /close. /lifecycle rotten
Rotten enhancement proposals close after 7d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Reopen the proposal by commenting /reopen. /close
@openshift-bot: Closed this PR.
/reopen
@stlaz: Reopened this PR.
@deads2k: The following test failed.
Rotten enhancement proposals close after 7d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Reopen the proposal by commenting /reopen. /close
@openshift-bot: Closed this PR.
The oauth-proxy is a commonly used proxy that allows a naive application to make use of in-cluster authentication and authorization to protect itself on an OpenShift platform without changing the application itself. This effectively makes the application appear as an extension of our platform; examples include the logging Kibana, the monitoring Grafana, etc. There are many configuration values considered related to platform configuration or security that the oauth-proxy could auto-detect.
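The sidecar pattern described above usually looks roughly like the following (a minimal sketch; the image reference, service account name, and SAR are illustrative assumptions):

```yaml
# Sketch: oauth-proxy fronts an otherwise unauthenticated application and
# delegates authentication/authorization to the cluster.
containers:
- name: oauth-proxy
  image: quay.io/openshift/origin-oauth-proxy:4.4        # illustrative reference
  args:
  - -provider=openshift
  - -https-address=:8443
  - -upstream=http://localhost:3000                      # the protected application
  - -openshift-service-account=grafana                   # hypothetical SA name
  - '-openshift-sar={"resource": "namespaces", "verb": "get"}'
  - -tls-cert=/etc/tls/private/tls.crt
  - -tls-key=/etc/tls/private/tls.key
  - -cookie-secret-file=/etc/proxy/secrets/session_secret
- name: grafana                                          # the unmodified application
  image: grafana/grafana:7.0.0                           # illustrative reference
  ports:
  - containerPort: 3000
```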
@s-urbaniak @lilic @stlaz @sttts This is what I'm thinking about for some of the settings. Would it help?
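For illustration, the kind of cluster-scoped resource being discussed might look roughly like this (every field name here is an assumption, not a final API):

```yaml
# Hypothetical shape of the proposed oauthproxies.config.openshift.io singleton.
apiVersion: config.openshift.io/v1
kind: OAuthProxy
metadata:
  name: cluster
spec:
  cookieExpire: 168h0m0s       # maps to -cookie-expire when not set per instance
  cookieRefresh: 1h0m0s        # maps to -cookie-refresh when not set per instance
  logoutURL: ""                # empty spec -> status filled from the console logout URL
status:
  logoutURL: https://console-openshift-console.apps.example.com/logout
  httpProxy: http://proxy.example.com:3128     # copied from proxies.config.openshift.io
  httpsProxy: http://proxy.example.com:3128    # status when it carries no credentials
  noProxy: .cluster.local,.svc
```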