[WIP] Security process(es) doc #8952
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Force-pushed from 6155765 to 92efa72.
cc @PushkarJ
> The properties we defend — authentication, authorization, admission, tenant isolation, secret handling, component integrity — assume a cluster where the operator has applied documented secure defaults. Operator-chosen
This bit will open up the question of which documented secure defaults we mean, as distros often have different ideas of secure defaults. I can think of a couple of options here:
- Pick a set of defaults to reference (e.g. kubeadm defaults). I would lean against saying "Kubernetes component defaults" here, as some of those are not what people run in production clusters.
- Exclude anything that can be configured (i.e. where a secure configuration option is available) from Kubernetes CVEs. That way "this is insecure and there's no configuration option to make it secure" would be a valid report, but "out of the box this is insecure" would not be.

Honestly, that last one might make more sense. Realistically, out of the box Kubernetes is not a secure enterprise platform; it needs external software and/or additional configuration (e.g. no default NetworkPolicy, and anyone who can create pods gets root on any node).
Would like some SRC input on this for sure :)
> be reproduced.**
> - **Compliance-framework findings that do not describe a concrete flaw.** That conversation belongs between the compliance vendor and the operator, not with the SRC.
Do we want an exclusion for "things we already know about" here, to avoid people re-discovering old issues and thinking they're a new finding? For this, sources like the results of third-party audits would be a good starting point; there are also open GH issues (e.g. 18982) and some things like https://kinvolk.io/blog/2019/02/abusing-kubernetes-apiserver-proxying which don't even have a GH issue.
Ideally it'd be nice to have a list of all of those in one place, but that's not something that exists at the moment (AFAIK).
Can you please suggest some language I can incorporate, @raesene? Thanks!
Co-authored-by: Pushkar Joglekar <3390906+PushkarJ@users.noreply.github.com>
PushkarJ left a comment
Thanks for putting this together. This doc compiles and summarizes text spread out across several different places, with appropriate links. I urge all my fellow maintainers to use this as guidance to decide what is worth their time when it comes to vulnerability-related requests, and to use it as a shield when urgent fixes are requested with no regard for precious maintainer time.
Optionally: it would be great to get a review from @kubernetes/security-response-committee and @kubernetes/sig-contributor-experience to ensure nothing is accidentally missed or misrepresented.
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: dims, PushkarJ. The full list of commands accepted by this bot can be found here.
Details: needs approval from an approver in each of these files. Approvers can indicate their approval by writing
> operational model.**
> - **Weak algorithms, missing headers, or deprecated settings an operator opted into by configuration.**
> - **AI-generated reports whose code paths, symbols, or traces cannot
Does the origin matter? If it is not reproducible, it does not really matter whether it was AI-generated or written manually.
I think this is more a warning that AI-automated reports must be curated and validated by the reporter.
> The Kubernetes project is a CVE Numbering Authority for components in the `kubernetes/kubernetes` repository. CVEs in upstream dependencies (Go, containerd, etcd, runc, kernel) are the responsibility of their respective CNAs. CVEs in out-of-tree Kubernetes SIG projects are
Does this really mean "out-of-tree Kubernetes SIG projects"? Or everything that is not kubernetes/kubernetes, including SIG projects? Because all Kubernetes SIG projects are out of tree, unless we count the ones that we re-vendor into kubernetes/kubernetes 🤔
+1, some non-binding feedback.
> be reproduced.**
> - **Compliance-framework findings that do not describe a concrete flaw.** That conversation belongs between the compliance vendor, distributor and operator, not with the SRC.
> - **Issues relating to known Kubernetes project architectural choices and accepted risks.** Anything that's already been noted in an existing security audit or public blog which the project has chosen not to fix by issuing a patch.
The comment dropped off the review list, but this was some possible wording addressing the idea that we won't issue CVEs for things we already know about. The idea here is to avoid submissions (especially AI-generated ones) that are just based on existing known issues (e.g. the unclosed ones from third-party audits, the unpatchable CVEs, or things like the kinvolk pod proxy attack).
Which issue(s) this PR fixes:
Fixes #