add an option for skipping tls verification on logs requests #1295
Comments
@deads2k Is this an Enhancement that should be tracked for 1.17? If so, can you reformat the issue with the KEP Issue template?
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/remove-milestone v1.17
/milestone clear (removing this issue from v1.17 milestone as the milestone is complete)
Hi @deads2k, Enhancements Lead here. Any plans for this in 1.20? Thanks!
Hi @deads2k, any plans for this in 1.20? Enhancements Freeze is next week, Tuesday October 6th. As a note, the format of KEPs has changed. If you could please update and include the missing sections noted above, that would be great. See for reference https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template. Best,
Hi @deads2k, this issue is now being tracked for the 1.21 release. Could you also confirm if this enhancement will be graduating to stable for 1.21?
Yes, we are targeting 1.21. I am tidying up the KEP and PRR.
Hey @deads2k
There is no work expected from sig-cli, at most minor cleanups, so 👍 from the sig-cli PoV.
Hey @deads2k, it looks like one of the PRR requirements has not been met for this enhancement. Please make sure this requirement is met before the enhancement freeze, Feb. 9th. Thank you!
Hi @deads2k, Enhancements Freeze is 2 days away, Feb 9th EOD PST. Any enhancements that do not complete the following requirements by the freeze will require an exception. [DONE] EDIT: with PR #2476 merged, this KEP looks good.
With PR #2476 merged in, this enhancement has met all requirements for the enhancements freeze 👍
Hi @deads2k, Since your Enhancement is scheduled to be in 1.21, please keep in mind the important upcoming dates:
As a reminder, please link all of your k/k PR(s) and k/website PR(s) to this issue so we can track them. Thanks!
Hi @deads2k, the Enhancements team does not have a linked PR to track for the upcoming code freeze. Could you please link a PR to this KEP so that we may track it? We are currently marking this KEP as 'At Risk' for the upcoming code freeze on 3/9. Thanks!
Hi @deads2k, this KEP has been marked as implemented in the KEP document. Is there an outstanding PR that is expected to be merged before code freeze on 3/9? Thanks!
Hi @deads2k, a friendly reminder that Code Freeze is 4 days away, March 9th EOD PST. Any enhancements that are NOT code complete by the freeze will be removed from the milestone and will require an exception to be added back. Please also keep in mind that if this enhancement requires new docs or modification to existing docs, you'll need to follow the steps in the "Open a placeholder PR" doc to open a PR against the k/website repo by March 16th EOD PST. Thanks!
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle rotten |
This landed as GA in 1.21.
/close
@deads2k: Closing this issue.
Insecure Backend Proxy for pods/logs
If a client chooses, it is possible to bypass the default behavior of the kube-apiserver and have it skip TLS verification of the kubelet in order to gather logs.
Kubernetes Enhancement Proposal: https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1295-insecure-backend-proxy
Discussion Link: this KEP predates the template; I have no memory of where this is.
Primary contact (assignee): @deads2k
Responsible SIGs: apimachinery, auth
Enhancement target (which target equals to which milestone):
- Alpha
  - k/enhancements update PR(s):
  - k/k update PR(s):
  - k/website update PR(s):
- Beta
  - k/enhancements update PR(s):
  - k/k update PR(s):
  - k/website update(s):
- Stable
  - k/enhancements update PR(s): update insecure-backend-proxy feature to target GA. Add PRR #2237
  - k/k update PR(s):
  - k/website update(s):

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.
When trying to get logs for a pod, it is possible for the kubelet to have an expired serving certificate. If a client chooses, it should be possible to bypass the default behavior of the kube-apiserver and have it skip TLS verification of the kubelet in order to gather logs. This is safe because the kube-apiserver's credentials are always client certificates, which cannot be replayed by an evil kubelet, so the risk is contained to an evil kubelet returning false log data. If the user has chosen to accept this risk, we should allow it, for the same reason we have an --insecure-skip-tls-verify option.
On self-hosted clusters it is possible to end up in a state where a kubelet's serving certificate has expired, so the kube-apiserver cannot verify the kubelet's identity, but the kube-apiserver's client certificate is still valid, so the kubelet can still verify the kube-apiserver. In this condition, a cluster-admin may need to get pod logs to debug their cluster.
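As a minimal sketch of how a client opts in: the feature surfaces as the `insecureSkipTLSVerifyBackend` query parameter on the pods/log subresource. The `pod_log_url` helper and the server address below are made up for illustration; only the path shape and the parameter name come from the API.

```python
from urllib.parse import urlencode


def pod_log_url(apiserver: str, namespace: str, pod: str,
                insecure_backend: bool = False) -> str:
    """Build a pods/log request URL against the kube-apiserver.

    When insecure_backend is True, the insecureSkipTLSVerifyBackend
    query parameter asks the kube-apiserver to skip TLS verification
    of the kubelet serving this pod's logs (e.g. an expired serving
    certificate), while the kubelet still authenticates the
    kube-apiserver via its client certificate.
    """
    url = f"{apiserver}/api/v1/namespaces/{namespace}/pods/{pod}/log"
    if insecure_backend:
        url += "?" + urlencode({"insecureSkipTLSVerifyBackend": "true"})
    return url


# Hypothetical apiserver address, for illustration only.
print(pod_log_url("https://10.0.0.1:6443", "default", "mypod",
                  insecure_backend=True))
```

In `kubectl`, the same opt-in is exposed as the `--insecure-skip-tls-verify-backend` flag on `kubectl logs`; the default remains full TLS verification of the kubelet.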
@kubernetes/sig-api-machinery-feature-requests
@kubernetes/sig-cli-feature-requests
@kubernetes/sig-auth-feature-requests
https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190927-insecure-backend-proxy.md