
Failure vs successes counts are inconsistent #731

Closed
consolethinks opened this issue Jul 29, 2022 · 4 comments
Labels
bug Something isn't working

Comments

@consolethinks

Conftest doesn't count passed tests, either in the total number of tests or in the number of passed tests, when the policy is not in the main namespace. When such a policy passes, it comes across as if the test doesn't exist at all.

For example, if I have a common namespace and a safety namespace in addition to the main namespace, and I either pass --all-namespaces or list them separately with --namespace [name], only the main namespace has its passed policies counted; passed policies from any other namespace are left out of both the total and the passed counts.

@jalseth
Member

jalseth commented Jul 31, 2022

Hi @consolethinks I'm not able to reproduce this issue with conftest v0.33.2. Please provide snippets and the exact conftest command you are using so I can reproduce the issue.

@consolethinks
Author

After doing some testing, it's not related to namespaces after all. One policy in a specific namespace seems to be the culprit.

To reproduce my issue, here is an example Terraform plan output as JSON:
tfplan.json.zip

And here are two policies:

```rego
package common

key_val_valid_pascal_case(key, val) {
    is_pascal_case(key)
    is_pascal_case(val)
}

is_pascal_case(string) {
    re_match(`^([A-Z][a-z0-9]+)+`, string)
}

deny[msg] {
    changeset := input.resource_changes[_]
    tags := changeset.change.after.tags
    some key
    not is_pascal_case(tags[key])
    msg := sprintf("general - Non-pascal case value in tags - %v - key: %v, value: %v", [changeset.address, key, tags[key]])
}

deny[msg] {
    false
    msg := ""
}
```

The following command is used:

```shell
conftest test tfplan.json --policy [folder containing policies] --all-namespaces
```

If the first deny rule fails for multiple resources, Conftest reports 0 passing tests. If that rule is disabled (for example, by renaming the first deny to something else), Conftest shows one passing test.

@jalseth jalseth added the bug Something isn't working label Aug 3, 2022
@jalseth
Member

jalseth commented Aug 3, 2022

Thank you for the extra info, I have been able to reproduce the issue though the cause is not immediately clear.

@jalseth
Member

jalseth commented Aug 10, 2022

I've spent a little time digging and now understand the issue better. The root of the problem is the nature of OPA/Rego: rules are queried by name and their results are unioned. While conftest can know that there are multiple deny rules in a given policy source, it cannot know which of them caused violations, as it only queries the single data.&lt;namespace&gt;.deny rule and all results are merged within the OPA engine. Because a single deny[msg] rule can itself produce multiple violations (as is the case here), conftest cannot meaningfully keep track of the count of failures vs. successes. I think this is a #wontfix as it is just how OPA works at a fundamental level.
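To illustrate the unioning with a toy policy (my own sketch, not taken from the original report): both bodies below define the same deny rule, and evaluating data.main.deny returns one merged set of messages.

```rego
package main

# Body one: a single rule body can emit many violations, one per resource.
deny[msg] {
    resource := input.resources[_]
    not resource.tagged
    msg := sprintf("untagged resource: %v", [resource.name])
}

# Body two: never matches, i.e. it "passes".
deny[msg] {
    false
    msg := "unreachable"
}
```

Querying data.main.deny (for example with `opa eval`) yields only the merged set of messages from body one; the caller has no way to see that a second body existed and passed, which is why the success count cannot be tracked per rule.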

The best way forward if these counts matter to your use case is to have only one deny rule per namespace and have the namespace be scoped to a specific policy violation. I've linked the relevant parts of the code below. Let me know if you have any questions.
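A minimal sketch of that layout, using a hypothetical package name (tags.pascal_case is my invention, not from this thread):

```rego
# One violation type per namespace, each with exactly one deny rule, so a
# namespace's results map to a single named check.
package tags.pascal_case

deny[msg] {
    changeset := input.resource_changes[_]
    tags := changeset.change.after.tags
    some key
    not is_pascal_case(tags[key])
    msg := sprintf("non-PascalCase tag on %v - key: %v", [changeset.address, key])
}

is_pascal_case(s) {
    re_match(`^([A-Z][a-z0-9]+)+`, s)
}
```

Run with --all-namespaces as before; each such namespace then corresponds to exactly one check, so its pass/fail status in the summary is unambiguous.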

conftest/policy/engine.go

Lines 314 to 346 in 3aada8c

```go
ruleQuery := fmt.Sprintf("data.%s.%s", namespace, rule)
ruleQueryResult, err := e.query(ctx, config, ruleQuery)
if err != nil {
	return output.CheckResult{}, fmt.Errorf("query rule: %w", err)
}

var failures []output.Result
var warnings []output.Result
for _, ruleResult := range ruleQueryResult.Results {
	// Exceptions have already been accounted for in the exception query so
	// we skip them here to avoid doubling the result.
	if len(exceptions) > 0 {
		continue
	}

	if ruleResult.Passed() {
		successes++
		continue
	}

	if isFailure(rule) {
		failures = append(failures, ruleResult)
	} else {
		warnings = append(warnings, ruleResult)
	}
}

checkResult.Failures = append(checkResult.Failures, failures...)
checkResult.Warnings = append(checkResult.Warnings, warnings...)
checkResult.Exceptions = append(checkResult.Exceptions, exceptions...)
checkResult.Queries = append(checkResult.Queries, exceptionQueryResult, ruleQueryResult)
```

```go
var totalFailures int
var totalExceptions int
var totalWarnings int
var totalSuccesses int
var totalSkipped int

for _, result := range results {
	var indicator string
	var namespace string
	if result.FileName == "-" {
		indicator = "-"
	} else {
		indicator = fmt.Sprintf("- %s", result.FileName)
	}
	if result.Namespace == "-" {
		namespace = "-"
	} else {
		namespace = fmt.Sprintf("- %s -", result.Namespace)
	}

	totalPolicies := result.Successes + len(result.Warnings) + len(result.Failures) + len(result.Exceptions) + len(result.Skipped)
	if totalPolicies == 0 {
		fmt.Fprintln(s.Writer, colorizer.Colorize("?", aurora.WhiteFg), indicator, namespace, "no policies found")
		continue
	}

	for _, warning := range result.Warnings {
		fmt.Fprintln(s.Writer, colorizer.Colorize("WARN", aurora.YellowFg), indicator, namespace, warning.Message)
	}
	for _, failure := range result.Failures {
		fmt.Fprintln(s.Writer, colorizer.Colorize("FAIL", aurora.RedFg), indicator, namespace, failure.Message)
	}
	if !s.SuppressExceptions {
		for _, exception := range result.Exceptions {
			fmt.Fprintln(s.Writer, colorizer.Colorize("EXCP", aurora.CyanFg), indicator, namespace, exception.Message)
		}
	}

	totalFailures += len(result.Failures)
	totalExceptions += len(result.Exceptions)
	totalWarnings += len(result.Warnings)
	totalSkipped += len(result.Skipped)
	totalSuccesses += result.Successes
}
totalTests := totalFailures + totalExceptions + totalWarnings + totalSuccesses + totalSkipped
```

@jalseth jalseth changed the title Conftest passed tests not counted when not in 'main' namespace Failure vs successes counts are inconsistent Aug 10, 2022
@jalseth jalseth closed this as completed Oct 1, 2022