
Execute All Tests with Report #433

Closed
vdog90 opened this issue Dec 31, 2018 · 2 comments
Comments

vdog90 commented Dec 31, 2018

Hi all,
Please forgive my newbie-ness. If this is not the correct forum for this idea, or if the functionality I’m describing below already exists, please let me know. I noticed the “Generate All Tests” code on this page: https://github.com/redcanaryco/atomic-red-team/tree/master/execution-frameworks/Invoke-AtomicRedTeam

I am looking for a single, simple command that runs many or all tests applicable to the host OS and produces a test report to compare against. I suspect there are many people like me who just want to quickly exercise a shiny new detection method (EDR tool, Sysmon setup, or SIEM integration) by running one command. Ideally the command outputs a report that can then be used for a manual, post-execution comparison against the detection method across a wide range of techniques. Having this functionality easily accessible and documented would be great.

The command should (a rough sketch follows this list):
• Run a specified list of tests, all tests for a tactic, or all tests (the default).
• Insert a delay between each test execution, perhaps 1 minute by default. This allows greater confidence in detections when comparing the shiny new detection method to the test output report.
• *Add command execution validation (e.g., for T1117, did a new instance of calc.exe execute?). Being so new to the framework, I’m assuming not all techniques pop calc.exe.
• Output a CSV or XML report of actions. Report fields:

  • From the command/script: hostname, host IP, execution date/time
  • From the YAML: attack_technique, display_name, test name, description, input_arguments, executor name, command
  • End users can later add custom fields such as “Detected”, “Detected by (SIEM, EDR, etc.)”, or maturity level to track progress.

• Include techniques with missing tests in the output as “No Atomic Test Available – Please submit a test! https://github.com/redcanaryco/atomic-red-team/blob/master/docs/contributing.md#how-to-contribute”
• Summary: number of tests completed/successful, failed, not applicable, and missing (by tactic and in total)
• Support both interactive and non-interactive modes (non-interactive to bypass prompts)
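
Here is a rough sketch of the kind of wrapper I’m imagining. This is hypothetical and not the current module’s API: the function name Invoke-AllAtomicTests, the atomics path, the use of the powershell-yaml module, and running the executor command directly via Invoke-Expression are all my own assumptions.

```powershell
# Hypothetical wrapper -- not part of the current module; names and paths are my own guesses.
# Assumes a local clone of the atomics folder and the powershell-yaml module.
Import-Module powershell-yaml

function Invoke-AllAtomicTests {
    param(
        [string]$AtomicsPath = "C:\AtomicRedTeam\atomics",
        [int]$DelaySeconds   = 60,                       # pause between tests
        [string]$ReportPath  = ".\atomic-report.csv"
    )

    $results = foreach ($file in Get-ChildItem $AtomicsPath -Recurse -Filter "T*.yaml") {
        $technique = ConvertFrom-Yaml (Get-Content $file.FullName -Raw)

        foreach ($test in $technique.atomic_tests) {
            # Only run tests that apply to this host's OS (Windows assumed here).
            if ($test.supported_platforms -notcontains "windows") { continue }

            $started = Get-Date
            try {
                # Placeholder for the real execution step (PowerShell executor assumed);
                # suppress the command's own output so only report rows are collected.
                $null = Invoke-Expression $test.executor.command
                $status = "Success"
            }
            catch {
                $status = "Failed: $($_.Exception.Message)"
            }

            # One report row per test, using the fields proposed above.
            [pscustomobject]@{
                Hostname         = $env:COMPUTERNAME
                ExecutionTime    = $started
                attack_technique = $technique.attack_technique
                display_name     = $technique.display_name
                test_name        = $test.name
                executor         = $test.executor.name
                command          = $test.executor.command
                Status           = $status
            }

            Start-Sleep -Seconds $DelaySeconds
        }
    }

    $results | Export-Csv -Path $ReportPath -NoTypeInformation
}
```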

Questions for the community:

  1. What flaws are there with this concept?
  2. Does this functionality already exist?
  3. *Is there a concept of confirming a test executor’s command as success or failed (e.g., via the exit code or STDERR)? A rough illustration of what I mean follows this list.
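
For question 3, the snippet below is just an illustration of the concept; I don’t know whether the framework exposes anything like this, and $test is assumed to be one entry from a technique’s atomic_tests array.

```powershell
# Illustration only: infer pass/fail from the executor command's exit code,
# capturing stderr alongside stdout.
$output = cmd.exe /c $test.executor.command 2>&1
if ($LASTEXITCODE -eq 0) {
    $status = "Success"
} else {
    $status = "Failed (exit code $LASTEXITCODE)"
}
Write-Output "$($test.name): $status"
```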

I noticed that the code in Invoke-AtomicRedTeam\Public\Invoke-AtomicTest.ps1
has “foreach ($technique in $AtomicTechnique)” and “foreach ($test in $technique.atomic_tests)” loops, but it is not intuitive how to run it, and I don’t see a report output option. See #432

Thank you!

@vdog90 vdog90 changed the title Generate All Tests with Report Execute All Tests with Report Dec 31, 2018

ghost commented Jan 16, 2019

Reporting is on our roadmap; there are some good ideas here.

@ghost ghost closed this as completed Jan 16, 2019
@LeonardoBeleffi

Any news about this?
