Investigate extending log sanitization to audit logs #109376
Comments
/sig auth
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the issue is closed.
You can mark this issue as fresh with /remove-lifecycle stale, mark it as rotten with /lifecycle rotten, or close it with /close.
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the issue is closed.
You can mark this issue as fresh with /remove-lifecycle rotten or close it with /close.
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/triage accepted
To move this issue forward, we need someone who is interested in bringing this topic and a design to the SIG Auth community call. This would most likely require an extension of an existing KEP or a new one.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the issue is closed.
You can mark this issue as fresh with /remove-lifecycle stale, mark it as rotten with /lifecycle rotten, or close it with /close.
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This issue has not been updated in over 1 year and should be re-triaged. You can re-accept it for triage with /triage accepted (org members only) or close it with /close.
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied; after 30d of inactivity since lifecycle/rotten was applied, the issue is closed.
You can mark this issue as fresh with /remove-lifecycle stale or close it with /close.
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
What would you like to be added?
(This is an issue being moved from https://github.com/kubernetes-security/security-disclosures-low/issues/9 to be worked on in public)
In the past, we've had a medium-severity vulnerability reported to the Kubernetes project for misconfigured audit logging. The core of the original issue was that the default Kubernetes audit log policy in the cluster/gce/gci/configure-helper.sh script (https://github.com/kubernetes/kubernetes/blob/v1.21.1/cluster/gce/gci/configure-helper.sh#L1215-L1224) is set to “Metadata” for Secrets, ConfigMaps, and TokenReviews, but should also cover service account token requests.
This resulted in tokens being emitted in audit logs.
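For reference, the shape of the policy rule in question looks roughly like this (a sketch of the relevant fragment, not the exact contents of configure-helper.sh; the serviceaccounts/token entry illustrates the kind of addition that would cover token requests):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log only metadata (no request/response bodies) for resources whose
  # payloads can carry credentials.
  - level: Metadata
    resources:
      - group: ""                       # core API group
        resources: ["secrets", "configmaps", "serviceaccounts/token"]
      - group: authentication.k8s.io
        resources: ["tokenreviews"]
```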
One of the ideas the SRC wanted to pursue was to see whether the existing log sanitization functionality could be extended to cover audit logs, so that sensitive data like tokens is automatically masked, without requiring audit logging policies to be updated for each new potentially-sensitive request.
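As a rough illustration of the idea (my own sketch, not the project's actual mechanism, which would more likely be driven by the existing field tags), a masking pass over serialized audit events could look like:

```go
package main

import (
	"fmt"
	"regexp"
)

// jwtPattern matches the three dot-separated base64url segments of a
// JWT-style service account token.
var jwtPattern = regexp.MustCompile(`\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\b`)

// maskTokens replaces anything that looks like a bearer token with a
// fixed placeholder before the event is written out.
func maskTokens(s string) string {
	return jwtPattern.ReplaceAllString(s, "<redacted>")
}

func main() {
	evt := `{"requestObject":{"status":{"token":"eyJhbGciOi.eyJpc3Mi.c2ln"}}}`
	fmt.Println(maskTokens(evt))
}
```

A pattern-based pass like this is best-effort only; the appeal of hooking into the existing sanitization machinery is that it knows which fields are sensitive rather than guessing from the payload shape.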
Existing sanitization KEPs are:
Based on my initial reading, the existing functionality only covers debug logs: calling log.FilterLog hooks up runtime sanitization, and the actual sanitization logic is driven by tagged struct fields.
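To make the tagged-struct-field approach concrete, here is a minimal, self-contained sketch (illustrative type and helper names, not the real API types or the component-base implementation, though real Kubernetes types do carry `datapolicy` tags):

```go
package main

import (
	"fmt"
	"reflect"
)

// TokenRequestStatus mimics a response struct whose sensitive field
// carries a `datapolicy` tag, as the existing log sanitization expects.
// (Type and field names here are illustrative.)
type TokenRequestStatus struct {
	Token     string `datapolicy:"token"`
	ExpiresIn int64
}

// sanitize blanks out any string field tagged with a datapolicy value.
// v must be a pointer to a struct.
func sanitize(v interface{}) {
	rv := reflect.ValueOf(v).Elem()
	rt := rv.Type()
	for i := 0; i < rt.NumField(); i++ {
		if _, ok := rt.Field(i).Tag.Lookup("datapolicy"); ok && rv.Field(i).Kind() == reflect.String {
			rv.Field(i).SetString("<masked>")
		}
	}
}

func main() {
	s := TokenRequestStatus{Token: "eyJhbGciOi...", ExpiresIn: 3600}
	sanitize(&s)
	fmt.Printf("%+v\n", s)
}
```

Extending this to audit logs would mean running such a tag-driven pass over request/response objects before the audit backend serializes them, rather than relying on per-resource policy rules.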
/sig auth
Why is this needed?
Having audit log sanitization would close off a class of security vulnerabilities in which sensitive request or response data leaks into audit logs.