
[BUG] Setting validationFailureAction to enforce is going to enforce it for every Policy. #567

Closed
f1ko opened this issue Dec 19, 2019 · 5 comments


f1ko commented Dec 19, 2019

Describe the bug
Setting validationFailureAction: enforce on one policy enforces it for every policy.

To Reproduce

  1. Install Kyverno at commit a5aa8669ff772134aa2b1dac92b9bb15c7b1e68c.
  2. Apply two validation policies, both with validationFailureAction set to audit.
  3. Deploy something that violates one of the policies -> it gets deployed.
  4. Delete whatever you deployed.
  5. For the policy that did NOT interfere with your deployment, set validationFailureAction to enforce.
  6. Deploy again.
  7. See the error message from the policy that still has validationFailureAction set to audit.

Expected behavior
Each policy should act according to its own validationFailureAction.
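
The expected behavior can be illustrated with two minimal policies, a hedged sketch: the policy names, labels, and the kyverno.io/v1 API version are illustrative and may differ from the Kyverno version current at the time of this issue.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label           # hypothetical name
spec:
  validationFailureAction: enforce   # violations of THIS policy should block requests
  rules:
    - name: check-team-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "label `team` is required"
        pattern:
          metadata:
            labels:
              team: "?*"
---
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-owner-label          # hypothetical name
spec:
  validationFailureAction: audit     # violations of THIS policy should only be reported
  rules:
    - name: check-owner-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "label `owner` is required"
        pattern:
          metadata:
            labels:
              owner: "?*"
```

Under the expected behavior, a Pod missing only the `owner` label should be admitted (with the violation reported), while a Pod missing the `team` label should be rejected.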

@f1ko f1ko added the bug Something isn't working label Dec 19, 2019
realshuting (Member) commented:

@f1ko This is the known behavior of the current logic: we block the request whenever an enforce policy exists, no matter which policy the request violates, and report policy violations on the audit policies; see discussion here.

It does seem a bit confusing that the validationFailureAction flag enforces behavior across policies. We are working on a change to have each policy act on its own validationFailureAction, and to display the "audit" errors as part of the returned message instead of creating violations.

realshuting commented:

The fix is in #601.

Now the expected behavior is that each policy blocks requests according to its own validationFailureAction. When a policy is set to enforce, the reporting mechanism (in the kubectl flavor) works as follows, after discussion with @JimBugwadia:

  • When the user creates a resource that a Kyverno policy applies to directly (i.e., the rule is defined on Pod and the user creates a Pod): the requestBlocked message is returned immediately by the kubectl create command. The user can expect the same behavior when deploying pod controllers (Pod policies should automatically be applied to pod controllers #518).
  • If the pod-controller feature is disabled, for example there is only a policy applying to Pod while the user creates a Deployment, then:
    1. the pod creation is blocked
    2. the violation is no longer reported on the resource owner; instead, the user can check the deployment manifest for the detailed information
    3. an event is created on the resource owner, if one exists, by Kubernetes


realshuting commented Jan 7, 2020

There was one concern from @shivdudhani regarding removing violations reported on the resource owner (for enforce policies):

the whole idea of adding OwnerRef was to support any resource, not just pod controllers, since then we can report, and this is how the feature was specified. With this we only support pod controllers, which means we are removing a feature. If that is the case, we should mention that we have removed a feature.

The initial proposal to create the violation on the owner was to report/alarm that something went wrong rather than failing silently, which in most cases happens on pod controllers.

@JimBugwadia Any inputs?

realshuting commented:

Since we introduced the variable substitution feature, a policy can now be flexible about which resources it applies to. The idea is to re-use patterns defined in the rule to extend the policy to other kinds. For example, argo-rollouts is a CRD that refers to a Deployment and subsequently creates pods.
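
For instance, a single rule could match both the built-in controller and the CRD. This is a sketch only: the rule name, message, and the replica constraint are hypothetical, and the Rollout kind assumes the argo-rollouts CRD is installed.

```yaml
rules:
  - name: check-replicas             # hypothetical rule name
    match:
      resources:
        kinds:
          - Deployment
          - Rollout                  # the argo-rollouts CRD, which embeds a Deployment-style spec
    validate:
      message: "replicas must be 2 or more"
      pattern:
        spec:
          replicas: ">=2"
```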

@realshuting realshuting added this to the Kyverno Release 1.1.0 milestone Jan 10, 2020
realshuting commented:

Closed via #601.
