
Components logs sanitization #3889

Closed · fabriziopandini opened this issue Nov 1, 2020 · 16 comments
Assignees
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. sig/security Categorizes an issue or PR as relevant to SIG Security.

Comments

@fabriziopandini
Member

User Story

As a user, I would like Cluster API to adopt the same log sanitization rules used in Kubernetes.

Detailed Description

With KEP-1753 (Kubernetes system components logs sanitization), Kubernetes is introducing a set of sanitization rules that leverage klog and Go struct tags.

These tags are used to ensure that sensitive data is not written to logs by Kubernetes system components.

List of datapolicy tags available:

  • security-key - for TLS certificate keys. Keywords: key, cert, pem
  • token - for HTTP authorization tokens. Keywords: token, secret, header, auth
  • password - for anything password-like. Keywords: password
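
For illustration, the datapolicy tags described in KEP-1753 are ordinary Go struct tags, so they can be inspected with reflection. The following is a minimal sketch, not klog's actual implementation; the `Credentials` struct and `redact` helper are hypothetical names invented for this example:

```go
package main

import (
	"fmt"
	"reflect"
)

// Credentials is a hypothetical struct showing the three datapolicy
// tag values listed above (security-key, token, password).
type Credentials struct {
	Username string `json:"username"`
	Password string `json:"password" datapolicy:"password"`
	Token    string `json:"token" datapolicy:"token"`
	KeyPEM   string `json:"keyPEM" datapolicy:"security-key"`
}

// redact returns a map of field names to values, masking every field
// that carries a datapolicy tag, regardless of the tag's value.
func redact(v interface{}) map[string]string {
	out := map[string]string{}
	t := reflect.TypeOf(v)
	rv := reflect.ValueOf(v)
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		if _, tagged := f.Tag.Lookup("datapolicy"); tagged {
			out[f.Name] = "[REDACTED]"
		} else {
			out[f.Name] = fmt.Sprintf("%v", rv.Field(i).Interface())
		}
	}
	return out
}

func main() {
	c := Credentials{Username: "admin", Password: "hunter2", Token: "abc", KeyPEM: "-----BEGIN..."}
	fmt.Println(redact(c))
}
```

A sanitizing log sink could apply a check like this before serializing values, which is roughly the idea behind the dynamic sanitization filter the KEP proposed.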

Anything else you would like to add:

/kind feature
/sig instrumentation security

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. sig/security Categorizes an issue or PR as relevant to SIG Security. labels Nov 1, 2020
@vincepri
Member

vincepri commented Nov 2, 2020

/milestone v0.4.0
/help
/priority important-longterm

@k8s-ci-robot
Contributor

@vincepri:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/milestone v0.4.0
/help
/priority important-longterm

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added this to the v0.4.0 milestone Nov 2, 2020
@k8s-ci-robot k8s-ci-robot added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Nov 2, 2020
@fabriziopandini
Member Author

Quoting the implementation plan from the proposal:

Alpha (1.20)

  • All well-known places which contain sensitive data have been tagged.
  • All external well-known types are handled by the GlobalDatapolicyMapping function.
  • It is possible to turn on logs sanitization for all Kubernetes components, including API Server, Scheduler, Controller Manager and kubelet, using the --logging-sanitization flag.

Beta (1.21)

  • Performance overhead related to enabling dynamic logs sanitization has been reduced to 50% compared to time spent in klog library functions with this feature disabled. This should be verified using Kubernetes scalability tests.

GA (1.22)

  • Logs sanitization is enabled by default.

I think we should postpone this until after 1.21, when the performance issues are addressed.

@vincepri
Member

vincepri commented Jan 4, 2021

/milestone Next

@k8s-ci-robot k8s-ci-robot modified the milestones: v0.4.0, Next Jan 4, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 4, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 4, 2021
@sbueringer
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 4, 2021
@k8s-triage-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 2, 2021
@vincepri
Member

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 10, 2021
@sbueringer
Member

sbueringer commented Feb 7, 2022

I think this PR #6072 could provide us with the sanitizing functionality in logging, so that we would only have to go over the structs and tag them (if there are any).

@sbueringer
Member

/assign

I'll re-evaluate if/when the JSON log format PR is merged.

@sbueringer
Member

sbueringer commented Feb 17, 2022

The upstream dynamic log sanitization filter has been deprecated and will be removed (Deprecation PR).

The tags are still used and there is a KEP for static analysis: https://github.com/kubernetes/enhancements/tree/master/keps/sig-security/1933-secret-logging-static-analysis

It sounds interesting, but we have to test if it makes sense for us.

@fabriziopandini
Member Author

Given that in CAPI we store all sensitive info in Secrets, I'm leaning towards closing this issue (decorating types and running the static analysis won't provide a great signal).

@sbueringer
Member

sbueringer commented Feb 18, 2022

The static analysis is pretty fancy. I would keep the issue open for now; I just want to try the linter at some point
(and then evaluate whether it makes sense for us or not).

@fabriziopandini fabriziopandini added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Jul 29, 2022
@fabriziopandini fabriziopandini removed this from the Next milestone Jul 29, 2022
@fabriziopandini fabriziopandini removed the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Jul 29, 2022
@sbueringer
Member

I took another look. The tool itself looks interesting, but I couldn't get it to work. It also looks like upstream Kubernetes has issues with Go 1.19 and levee, and requires fixes in a Go library to get it working again.

I couldn't even get it to work with Go 1.18 locally.

So overall I agree: given that almost all our secrets are stored in Secrets, and we have a lot of untyped JSON patches in which secrets basically can't be detected, I would close this issue, as the benefit doesn't outweigh the effort to get it working and maintain it over time.

/close

@k8s-ci-robot
Contributor

@sbueringer: Closing this issue.

In response to this:

I took another look. The tool itself looks interesting, but I couldn't get it to work. It also looks like upstream Kubernetes has issues with Go 1.19 and levee, and requires fixes in a Go library to get it working again.

I couldn't even get it to work with Go 1.18 locally.

So overall I agree: given that almost all our secrets are stored in Secrets, and we have a lot of untyped JSON patches in which secrets basically can't be detected, I would close this issue, as the benefit doesn't outweigh the effort to get it working and maintain it over time.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

sig-security-tracker automation moved this from To do to Done Sep 6, 2022