Fix manual test bug #97
@@ -84,6 +84,7 @@ def getSummaryObject(
tested_detections = []
total_pass = 0
total_fail = 0
total_skipped = 0

# Iterate the detections tested (anything in the output queue was tested)
for detection in self.sync_obj.outputQueue:

@@ -93,16 +94,27 @@ def getSummaryObject(
)

# Aggregate detection pass/fail metrics
if summary["success"] is True:
    total_pass += 1
else:
if summary["success"] is False:
    total_fail += 1
else:
    #Test is marked as a success, but we need to determine if there were skipped unit tests
    #SKIPPED tests still show a success in this field, but we want to count them differently
    pass_increment = 1
    for test in summary.get("tests"):
        if test.get("test_type") == "unit" and test.get("status") == "skip":
            total_skipped += 1
            #Test should not count as a pass, so do not increment the count
            pass_increment = 0
            break
    total_pass += pass_increment

# Append to our list
tested_detections.append(summary)

# Sort s.t. all failures appear first (then by name)
tested_detections.sort(key=lambda x: (x["success"], x["name"]))
#Second short condition is a hack to get detections with unit skipped tests to appear above pass tests
Comment: Can I ask why you want skip above pass?

Reply: The results file can be, and often is, REALLY large. I think it makes sense to have errors/failures at the top, as these are the ones that deserve the most attention (and what people are most likely to be looking for when they open this file). I personally think "pass" is the least interesting, since it means they are working as expected. Definitely a personal opinion.
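For book-keeping, a minimal illustration of the ordering that sort key produces; the summary dicts below are invented, and only the key function itself is taken from the diff:

```python
# Hypothetical summaries: success=False sorts first (False < True),
# then skipped detections, then passing detections, each group sorted by name.
summaries = [
    {"success": True, "name": "passing detection", "tests": [{"status": "pass"}]},
    {"success": True, "name": "skipped detection", "tests": [{"status": "skip"}]},
    {"success": False, "name": "failing detection", "tests": [{"status": "fail"}]},
]
summaries.sort(
    key=lambda x: (
        x["success"],
        0 if x.get("tests", [{}])[0].get("status", "status_missing") == "skip" else 1,
        x["name"],
    )
)
print([s["name"] for s in summaries])
# ['failing detection', 'skipped detection', 'passing detection']
```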
tested_detections.sort(key=lambda x: (x["success"], 0 if x.get("tests",[{}])[0].get("status","status_missing")=="skip" else 1, x["name"]))

# Aggregate summaries for the untested detections (anything still in the input queue was untested)
total_untested = len(self.sync_obj.inputQueue)
@@ -128,14 +140,15 @@ def getSummaryObject(
overall_success = False

# Compute total detections
total_detections = total_fail + total_pass + total_untested
total_detections = total_fail + total_pass + total_untested + total_skipped

# Compute the percentage of completion for testing, as well as the success rate
percent_complete = Utils.getPercent(
    len(tested_detections), len(untested_detections), 1
)
success_rate = Utils.getPercent(
    total_pass, total_detections, 1
    total_pass, total_detections-total_skipped, 1
Comment: This is a non-blocking comment and a reiteration of a conversation in Slack; just including it for good book-keeping. I think future work could break this out into another metric, as I think this is just good data to have, but this also serves a request from Javier around having a metric for test coverage of all detections delivered in ESCU (which would include experimental and deprecated).

Reply: Yes, I think this type of additional breakdown is a good idea.
)

# TODO (cmcginley): add stats around total test cases and unit/integration test
@@ -149,6 +162,7 @@ def getSummaryObject(
"total_detections": total_detections,
"total_pass": total_pass,
"total_fail": total_fail,
"total_skipped": total_skipped,
"total_untested": total_untested,
"total_experimental_or_deprecated": len(deprecated_detections+experimental_detections),
"success_rate": success_rate,
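As a quick sanity check on the adjusted denominator, here is a hedged worked example with invented counts; it only mirrors the arithmetic of the diff, not the exact behavior of Utils.getPercent:

```python
# Invented counts to illustrate how skipped detections affect the totals
total_pass, total_fail, total_untested, total_skipped = 80, 10, 5, 5

total_detections = total_fail + total_pass + total_untested + total_skipped
print(total_detections)  # 100 -- skipped detections still count toward the total

# Skipped detections are excluded from the success-rate denominator
success_rate = round(total_pass / (total_detections - total_skipped) * 100, 1)
print(success_rate)  # 84.2 instead of 80.0, since skipped detections are not held against the pass count
```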
@@ -19,7 +19,7 @@
from contentctl.objects.playbook import Playbook
from contentctl.helper.link_validator import LinkValidator
from contentctl.objects.enums import SecurityContentType

from contentctl.objects.test_group import TestGroup

class Detection_Abstract(SecurityContentObject):
    # contentType: SecurityContentType = SecurityContentType.detections

@@ -60,6 +60,34 @@ class Detection_Abstract(SecurityContentObject):
    class Config:
        use_enum_values = True

    # A list of groups of tests, relying on the same data
    test_groups: Union[list[TestGroup], None] = None
Comment: I see you migrated this code from Detection. Can you explain the intention behind having separate classes here (Detection and the abstract)? I'm fine w/ this code living in either class, but I chose Detection because I thought the spirit of the abstract class was to be a more faithful representation of the model as it exists in YAML, and test_groups is derived rather than defined in the YAML itself. So, tl;dr: I'm totally fine with this and no change is required; I'm just curious, and wondering that if we don't have a need for two separate classes, perhaps we could collapse them into one in the future.

Reply: Great question! Many contentctl users want to change content somehow (and detections are the content they are most often interested in changing). These changes can take many forms. However, if they change Detection_Abstract and WE change Detection_Abstract, it has a high probability of creating painful merge conflict issues. As such, the plan was to retain control of Detection_Abstract and ship all our out-of-the-box functionality in that file. Then, if customers want their own functionality, they can add it to Detection.py.
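A rough, minimal sketch of that split, assuming the pydantic v1 style used elsewhere in this diff; the `internal_ticket_id` field and its validator are purely hypothetical customer additions, not part of contentctl:

```python
from typing import Union

from pydantic import BaseModel, validator


# Stand-in for detection_abstract.py: shipped with contentctl and kept under project control.
# (The real class derives from SecurityContentObject and carries all out-of-the-box fields,
# validators, and the test_groups handling shown above.)
class Detection_Abstract(BaseModel):
    name: str


# Stand-in for detection.py: a thin subclass customers can extend without touching the
# abstract class, avoiding merge conflicts when contentctl updates Detection_Abstract.
class Detection(Detection_Abstract):
    # Hypothetical customer-specific field
    internal_ticket_id: Union[str, None] = None

    @validator("internal_ticket_id")
    def validate_internal_ticket_id(cls, value):
        if value is not None and not value.startswith("TICKET-"):
            raise ValueError("internal_ticket_id must start with 'TICKET-'")
        return value
```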
    @validator("test_groups", always=True)
    def validate_test_groups(cls, value, values) -> Union[list[TestGroup], None]:
        """
        Validates the `test_groups` field and constructs the model from the list of unit tests
        if no explicit construct was provided
        :param value: the value of the field `test_groups`
        :param values: a dict of the other fields in the Detection model
        """
        # if the value was not the None default, do nothing
        if value is not None:
            return value

        # iterate over the unit tests and create a TestGroup (and as a result, an IntegrationTest) for each
        test_groups: list[TestGroup] = []
        for unit_test in values["tests"]:
            test_group = TestGroup.derive_from_unit_test(unit_test, values["name"])
            test_groups.append(test_group)

        # now add each integration test to the list of tests
        for test_group in test_groups:
            values["tests"].append(test_group.integration_test)
        return test_groups

    def get_content_dependencies(self) -> list[SecurityContentObject]:
        return self.playbooks + self.baselines + self.macros + self.lookups
@@ -194,16 +222,39 @@ def search_obsersables_exist_validate(cls, v, values):
        # Found everything
        return v

    # TODO (cmcginley): Fix detection_abstract.tests_validate so that it surfaces validation errors
    # (e.g. a lack of tests) to the final results, instead of just showing a failed detection w/
    # no tests (maybe have a message propagated at the detection level? do a separate coverage
    # check as part of validation?):
    @validator("tests")
    @validator("tests", always=True)
    def tests_validate(cls, v, values):
        if values.get("status", "") == DetectionStatus.production.value and not v:
            raise ValueError(
                "At least one test is REQUIRED for production detection: " + values["name"]
            )
        # TODO (cmcginley): Fix detection_abstract.tests_validate so that it surfaces validation errors
        # (e.g. a lack of tests) to the final results, instead of just showing a failed detection w/
        # no tests (maybe have a message propagated at the detection level? do a separate coverage
        # check as part of validation?):

        #Only production analytics require tests
        if values.get("status","") != DetectionStatus.production.value:
            return v

        # All types EXCEPT Correlation MUST have test(s). Any other type, including newly defined types, requires them.
        # Accordingly, we do not need to do additional checks if the type is Correlation
        if values.get("type","") in set([AnalyticsType.Correlation.value]):
            return v

        # Ensure that there is at least 1 test
        if len(v) == 0:
            if values.get("tags",None) and values.get("tags").manual_test is not None:
                # Detections that are manual_test MAY have detections, but it is not required. If they
                # do not have one, then create one which will be a placeholder.
                # Note that this fake UnitTest (and by extension, Integration Test) will NOT be generated
                # if there ARE test(s) defined for a Detection.
                placeholder_test = UnitTest(name="PLACEHOLDER FOR DETECTION TAGGED MANUAL_TEST WITH NO TESTS SPECIFIED IN YML FILE", attack_data=[])
                return [placeholder_test]

            else:
                raise ValueError("At least one test is REQUIRED for production detection: " + values.get("name", "NO NAME FOUND"))

        #No issues - at least one test provided for production type requiring testing
        return v

    @validator("datamodel")
Comment: I think, and I don't need this to be blocking, that I'd like to make detection state resolution (e.g. whether it is considered a skip/fail/error/pass) a property of the detection itself. E.g. define a property function `status` which iterates over the tests.

Reply: 💯 I think we should make this a pydantic computed property. On top of that, I would add an additional option to this set of values (which is the default value) along the lines of NOT_YET_TESTED. This way, when we are rendering the current state of all tests, like in a user interface, we can easily differentiate and sort all sorts of different tests.
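A hedged sketch of the idea floated above, using an invented enum and a plain Python property rather than a real pydantic computed field; the value names (including the NOT_YET_TESTED default) and the resolution order are assumptions, not contentctl's actual API:

```python
from enum import Enum


class DetectionTestingStatus(str, Enum):
    # Invented enum; the exact value set (including a NOT_YET_TESTED default) is an assumption
    NOT_YET_TESTED = "not_yet_tested"
    PASS = "pass"
    FAIL = "fail"
    SKIP = "skip"


class DetectionStatusMixin:
    """Sketch of resolving a detection's overall state from its individual test results."""

    tests: list  # assumed: each test object carries a `status` string, or None until it runs

    @property
    def status(self) -> DetectionTestingStatus:
        statuses = [test.status for test in self.tests if test.status is not None]
        if not statuses:
            # Default state before any test has reported a result
            return DetectionTestingStatus.NOT_YET_TESTED
        if any(s in ("fail", "error") for s in statuses):
            return DetectionTestingStatus.FAIL
        if all(s == "skip" for s in statuses):
            return DetectionTestingStatus.SKIP
        return DetectionTestingStatus.PASS
```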