[PodSecurity] Aggregate identical warnings for multiple pods in a namespace #103213
First, let me test the current behavior before making changes.
If a pod has multiple causes of errors, for example:

    noruntimeclasspod: message, message2
    runtimeclass1pod: message, message2
    runtimeclass2pod: message, message2
    runtimeclass3pod: message1, message2
    runtimeclass4pod: message1, message2
    runtimeclass5pod: message1
    runtimeclass6pod: message2

I have two thoughts.

The first groups pods by their identical set of messages:

    (message, message2): [noruntimeclasspod, runtimeclass1pod, runtimeclass2pod]
    (message1, message2): [runtimeclass3pod, runtimeclass4pod]
    (message1): [runtimeclass5pod]
    (message2): [runtimeclass6pod]

The second groups pods under each individual message:

    (message): [noruntimeclasspod, runtimeclass1pod, runtimeclass2pod]
    (message1): [runtimeclass3pod, runtimeclass4pod, runtimeclass5pod]
    (message2): [noruntimeclasspod, runtimeclass1pod, runtimeclass2pod, runtimeclass3pod, runtimeclass4pod, runtimeclass6pod]

According to the requirements, I might choose the second.
@njuptlzf I prefer the second approach you propose as well. The only caveat: for a namespace with a small number of pods (1 in the extreme case), this approach would actually end up being a lot more verbose than the un-aggregated case. One option is we could set some threshold for when to switch to the aggregated format.
Thinking through the workflow of someone dealing with the warnings across multiple pods: to fix warnings, they still have to visit the individual pod or workload definitions and make the appropriate changes. To make that easy, I still think the pod should be the primary unit of organization, not the individual messages. The main reason I think we should aggregate pods with identical warnings is that it is common to have effectively identical pods created from the same workload controller. If I have a replicaset with 100 pods, I'd rather see a single entry covering all of them than 100 identical lines.

Taking the example above, it's easier to figure out everything I need to do to fix specific pods/workloads if we aggregate to this format:

    noruntimeclasspod (and 2 other pods): message, message2
    runtimeclass3pod (and 1 other pod): message1, message2
    runtimeclass5pod: message1
    runtimeclass6pod: message2

I can then fix those four pods/workloads completely, then rerun the dry run to check my work. If I modified root workload definitions that affected other elided pods, great! If some of the elided pods just happened to have identical warnings but were not controlled by the same root workload definitions, then they'll be surfaced by name when I recheck.
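A minimal sketch of that pod-first rendering (illustrative names and logic, not the shipped pod-security-admission code): it takes the message-set groups from the first approach above and elides all but one pod per group.

```go
package main

import (
	"fmt"
	"sort"
)

// renderAggregated emits one line per distinct message set, headed by a
// single example pod with the remaining pods elided.
func renderAggregated(groups map[string][]string) []string {
	var lines []string
	for msgs, pods := range groups {
		sort.Strings(pods)
		switch n := len(pods); n {
		case 1:
			lines = append(lines, fmt.Sprintf("%s: %s", pods[0], msgs))
		case 2:
			lines = append(lines, fmt.Sprintf("%s (and 1 other pod): %s", pods[0], msgs))
		default:
			lines = append(lines, fmt.Sprintf("%s (and %d other pods): %s", pods[0], n-1, msgs))
		}
	}
	sort.Strings(lines) // deterministic output order
	return lines
}

func main() {
	groups := map[string][]string{
		"message, message2":  {"noruntimeclasspod", "runtimeclass1pod", "runtimeclass2pod"},
		"message1, message2": {"runtimeclass3pod", "runtimeclass4pod"},
		"message1":           {"runtimeclass5pod"},
		"message2":           {"runtimeclass6pod"},
	}
	for _, line := range renderAggregated(groups) {
		fmt.Println(line)
	}
}
```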
Okay, let me modify the logic and the unit tests again.
@liggitt that makes sense, but we might want to key off the controller in addition to the error messages, e.g. grouping pods under their owning controller as well as under their shared messages. Or is that getting too verbose?
hmm... I don't feel super-strongly either way, but that wouldn't necessarily point you at the object you'd actually need to edit to fix the issue... in the case of a deployment, you'd want to edit the deployment, but the pod ownerRefs would be pointing at the intermediate replicaset.
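For reference, a sketch of what keying off the controller could look like, assuming the grouping key comes from the pod's controller ownerRef; `controllerKey` is a hypothetical helper, and as noted above the key would name the intermediate replicaset rather than the deployment the user actually edits.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// controllerKey returns a grouping key like "ReplicaSet/myapp-5d4f8"
// for controlled pods, or "Pod/<name>" for bare pods.
func controllerKey(pod *corev1.Pod) string {
	if ref := metav1.GetControllerOf(pod); ref != nil {
		return ref.Kind + "/" + ref.Name
	}
	return "Pod/" + pod.Name
}

func main() {
	isController := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "myapp-5d4f8-abcde",
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion: "apps/v1",
				Kind:       "ReplicaSet",
				Name:       "myapp-5d4f8",
				Controller: &isController,
			}},
		},
	}
	// Note: for a Deployment-managed pod, the key names the intermediate
	// ReplicaSet, not the Deployment the user would actually edit.
	fmt.Println(controllerKey(pod)) // ReplicaSet/myapp-5d4f8
}
```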