
CLI: Early abort of testing on some rule constellations #3850

Closed
1 of 3 tasks
Gellardo opened this issue Sep 13, 2021 · 3 comments
Labels: feature:cli-output, stale (needs prioritization; label applied by stalebot)

Comments


Gellardo commented Sep 13, 2021

Describe the bug
When testing my rules with semgrep --test ., there are several issues that make the workflow really inconvenient.
All of them abort testing of the rules in the current directory early and produce error messages that are correct but not easy to understand.

To Reproduce
https://gist.github.com/Gellardo/340ab5d0b221d2b918ecf6b5757e5b5d
In the gist, there are 3 different setups that each produce an error message. Since each issue aborts testing early, each setup has to be put into its own directory.

  • rule1 has a ruleid assertion in the test file that does not match, and a second rule that does. This routinely happens during my development and should be reported as a false negative. Instead, the CLI prints something about rule id mismatches, even though the problem goes away once one adds an additional assertion or changes the rule so that a-finding matches something (see the sketch after this list):
$ semgrep --test rule1
found rule id mismatch - file=/$DIR/rule1/rule1.json results={'a-finding'} expected={'a-finding', 'no-finding'}
failing due to rule id mismatch
  • rule2 is the same setup as rule1, just with an ok assertion. Therefore the test (while not very helpful) should, in my opinion, succeed, but it prints:
$ semgrep --test rule2
found rule id mismatch - file=/$DIR/rule2/rule2.json results={'a-finding'} expected={'no-TP-assertion', 'a-finding'}
failing due to rule id mismatch
  • rule3 contains a deliberate false positive. This results in the following error instead of an FP entry in the matrix:
$ semgrep --test rule3
found false positives on ok'ed lines - file=/$DIR/rule3.json fps={2}
failing due to false positives
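
For context, here is a minimal sketch of the kind of layout rule1 describes. The file contents are hypothetical (the actual gist appears to use JSON test targets; Python and the eval/exec patterns are used purely for illustration); only the rule ids a-finding and no-finding are taken from the error output above:

# rule1/rule1.yaml
rules:
  - id: a-finding
    pattern: eval(...)
    message: call to eval
    languages: [python]
    severity: WARNING
  - id: no-finding
    pattern: exec(...)
    message: call to exec
    languages: [python]
    severity: WARNING

# rule1/rule1.py (test file): one assertion that matches and one that never does
# ruleid: a-finding
eval("1 + 1")
# the assertion below never matches anything, so it should show up as a false negative
# ruleid: no-finding
print("nothing to see here")

rule2 and rule3 are variations on the same layout: rule2 replaces the unmatched ruleid assertion with an ok assertion, and rule3 places a line that does trigger a finding below an ok assertion.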

Expected behavior
None of the examples above should result in exceptional behavior on assertion failures. Instead, the offending lines should be part of the final confusion matrix. The whole point of testing my rules is to get an overview of what is working as intended and what is not.

The only point I'm not quite sure of: rule1/2 look like they are trying to handle assertions in the test file that have no corresponding rule in the YAML file. I'm not sure what should happen, UX-wise, if there are assertions for rules that are not in the rules file.

What is the priority of the bug to you?

  • P0: blocking your adoption of Semgrep or workflow
  • P1: important to fix or quite annoying
  • P2: regular bug that should get fixed

Environment
Using macOS, tested with both Homebrew and pip: semgrep 0.64.0

nbrahms (Contributor) commented Sep 14, 2021

Tracked in https://linear.app/r2c/issue/PA-326


stale bot commented Dec 14, 2021

This issue is being marked stale because there hasn't been any activity in 30 days. Please leave a comment if you think this issue is still relevant and should be prioritized, otherwise it will be automatically closed in 7 days (you can always reopen it later).

stale bot added the stale (needs prioritization; label applied by stalebot) label on Dec 14, 2021

stale bot commented Dec 21, 2021

Stale-bot has closed this stale item. Please reopen it if this is in error.

stale bot closed this as completed on Dec 21, 2021