Getting started
In this section you will find instructions on how to start using the framework.
Besides a few optional arguments, DeTT&CT has five modes, which are described in the help text below. Please note that each mode has a dedicated help function. For example, the help function for group
can be shown using the following command: python dettect.py group -h
. An overview of all help texts can be found here.
usage: dettect.py [-h] [--version] [-i] ...
Detect Tactics, Techniques & Combat Threats
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-i, --interactive launch the interactive menu, which has support for all
modes
MODE:
Select the mode to use. Every mode has its own arguments and help info
displayed using: {visibility, detection, group, generic} --help
datasource (ds) data source mapping and quality
visibility (v) visibility coverage mapping based on techniques and data
sources
detection (d) detection coverage mapping based on techniques
group (g) threat actor group mapping
generic (ge) includes: statistics on ATT&CK data source and updates on
techniques, groups and software
When using the interactive mode, a menu will be shown that allows you to browse through all functionality interactively.
-= DeTT&CT =-
-- Detect Tactics, Techniques & Combat Threats --
version 1.2.1
Select a mode:
1. Data source mapping
2. Visibility coverage mapping
3. Detection coverage mapping
4. Threat actor group mapping
5. Updates
6. Statistics
9. Quit
>>
The terms data source, visibility and detection are used extensively within DeTT&CT. Therefore, it is essential to understand the meaning of, and the difference between, these terms.
Data sources are the raw logs or events generated by systems, security appliances, network devices, etc. ATT&CK has defined around 50 different types of data sources (e.g. Process monitoring and Web proxy), which we have included in DeTT&CT. These data sources are administered within the data source administration YAML file. For each data source, the data quality can be scored. Within ATT&CK, these data sources are listed within the techniques themselves (e.g. T1003 in the upper right block).
Visibility is used within DeTT&CT to indicate whether you have sufficient data sources, of sufficient quality, available to see traces of ATT&CK techniques. Visibility is necessary for performing incident response, executing hunting investigations and building detections. Within DeTT&CT you can score the visibility coverage per ATT&CK technique. More on how and why is explained here. The visibility scores are administered in the technique administration YAML file.
When you have the right data sources with sufficient data quality, and that data is available to you for data analytics, your visibility can be used to create new detections for ATT&CK techniques. Detections often trigger alerts, which are then followed up on by your blue team. Scoring and administering your detections is done in the technique administration YAML file.
Below are some examples to further explain how to use the framework. We also wrote a blog with an introduction on ATT&CK and how to get started with DeTT&CT.
Two general comments we would like to make:
- Use the tool in the way it works best for you. For example, scoring every single technique within the ATT&CK Matrix can be a lot of work. Therefore you may only score what you know at that time and what you want to communicate with others or want to verify/compare.
- It is recommended to periodically review your data source and technique administration files to see whether anything has changed recently and therefore needs to be updated. It can be useful to draw up a checklist for this, which you can then repeat after a set amount of time has passed.
Content:
- Add data sources and score data quality
- Map data sources
- Score visibility
- Visibility coverage
- Auto-update visibility scores and the use of the score_logbook
- Score detection and determine your detection coverage
- Threat actor group heat map
- Compare group or red team exercise with detection/visibility coverage
- Compare visibility and detection coverage
- Which data sources cover the most techniques?
- EQL - exclude/include YAML objects
Start with adding data sources and scoring the quality within a data source administration YAML file. An example data source administration YAML file can be found here, and an empty data source administration file can be found here. Both can be used as a template to get started. Filling in your data sources and scoring them will, later on, be very useful in scoring visibility. More on scoring data quality can be found here: Data sources.
Based on the YAML file you can generate an Excel sheet containing all your data sources, attributes, notes and data quality scores:
python dettect.py ds -fd sample-data/data-sources-endpoints.yaml -e
![DeTT&CT - Data quality](images/data_sources_quality.png)
Generate an ATT&CK Navigator layer file based on the data sources recorded in the YAML file. Based on the number of data sources, techniques are mapped and visualised in the layer file. This gives you a rough overview of your visibility coverage. Often, this is the first step in getting an overview of your actual visibility coverage.
python dettect.py ds -fd sample-data/data-sources-endpoints.yaml -l
![DeTT&CT - Data sources](images/example_data_sources.png)
A next step can be to determine the exact visibility per technique. To help you with this, you can generate a techniques administration YAML file based on your data source administration, which will provide you with rough visibility scores:
python dettect.py ds -fd sample-data/data-sources-endpoints.yaml -y
Within the resulting YAML file, you can choose to adjust the visibility score per technique based on expert knowledge and the previously defined quality of your data sources (in this same YAML file you can also score detection). There are several reasons why manual scoring can be required. For example:
- You may have 1 data source available out of the 3 data sources listed for a particular ATT&CK technique. However, in some cases that single data source may not be sufficient for detection of that technique, and hence the visibility score based on the number of data sources needs to be adjusted.
- The quality of a particular data source is considered too low to be useful for visibility.
- With the power of an EQL query, you can influence which data sources are included in the process of auto-generating visibility scores. For example, to exclude data sources with low data quality. For more info see: Customize the rough visibility score.
- You do have a certain level of visibility on a technique, but it is based on a data source currently not mentioned within MITRE ATT&CK for that particular technique.
Visibility scores are rated from 0 to 4. The explanation of the scores can be found here: visibility scores. Use the score that fits best. It is possible to have multiple scores per technique that apply to different kinds of systems using the applicable_to property. In addition, you can keep track of changes in the scores by having multiple score objects within a score_logbook.
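As an illustration, a technique with separate visibility scores for servers and workstations could look roughly like the sketch below. The technique, applicable_to values, dates and scores are made up for this example; see the sample technique administration YAML file for the authoritative layout:

```yaml
- technique_id: T1003
  technique_name: Credential Dumping
  visibility:
    # One object per group of systems the score applies to.
    - applicable_to: [servers]
      comment: ''
      score_logbook:
        - date: 2019-05-01
          score: 3
          comment: ''
    - applicable_to: [workstations]
      comment: ''
      score_logbook:
        - date: 2019-05-01
          score: 2
          comment: ''
```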
Generate an ATT&CK Navigator layer file based on the technique administration in the YAML file. The visibility scores defined in the YAML file are also used to colour the techniques in the layer file. This gives you an overview of your visibility coverage:
python dettect.py v -ft sample-data/techniques-administration-endpoints.yaml -fd sample-data/data-sources-endpoints.yaml -l
![DeTT&CT - Visibility coverage](images/example_visibility.png)
The below is purely hypothetical, to explain the effect of adding a data source to your data source administration file, and the concept of the score_logbook.
Because we added the data source "Process use of Network" on 2019-07-30 to the data source administration file, we gained more visibility. We can then choose to automatically update the rough visibility scores in our technique administration YAML file using the command below. Note that manually assigned visibility scores will not be overwritten without your approval, and backups are created. Among other things, there is an option to compare every visibility score eligible for an update and then approve or reject the update.
python dettect.py ds -ft sample-data/techniques-administration-endpoints.yaml -fd sample-data/data-sources-endpoints.yaml --update
When we, after the update, have a look at, for example, the ATT&CK technique T1189/Drive-by Compromise in the sample technique administration file, the rough visibility score has increased from 2 to 3. This change in visibility is recorded in the score_logbook within a new score object.
Also, this gain in visibility allowed us (again hypothetically) to improve our detection for that technique and hence increase the detection score from 1 to 3. This change is also recorded in a score object. See below:
```yaml
- technique_id: T1189
  technique_name: Drive-by Compromise
  detection:
    applicable_to: [all]
    location: [SIEM UC 123, Tool Model Y]
    comment: ''
    score_logbook:
      - date: 2019-08-05
        score: 3
        comment: This detection was improved due to the availability of the new log source Process use of network
      - date: 2018-11-01
        score: 1
        comment: ''
  visibility:
    applicable_to: [all]
    comment: ''
    score_logbook:
      - date: 2019-07-30
        score: 2
        comment: 'New data source: Process use of network'
        auto_generated: true
      - date: 2019-03-01
        score: 1
        comment: ''
        auto_generated: true
```
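To make the score_logbook mechanics concrete, here is a small Python sketch (not part of DeTT&CT itself) that determines the current score for the example above by taking the newest score object; the dict literal simply mirrors the YAML:

```python
# Minimal sketch: derive the current score for a technique by taking the
# most recent entry in its score_logbook.
# The dict below mirrors the T1189 example YAML above.

technique = {
    "technique_id": "T1189",
    "detection": {
        "score_logbook": [
            {"date": "2019-08-05", "score": 3},
            {"date": "2018-11-01", "score": 1},
        ],
    },
    "visibility": {
        "score_logbook": [
            {"date": "2019-07-30", "score": 2, "auto_generated": True},
            {"date": "2019-03-01", "score": 1, "auto_generated": True},
        ],
    },
}

def current_score(score_logbook):
    # ISO-formatted dates (YYYY-MM-DD) sort correctly as strings,
    # so the entry with the largest date is the newest one.
    return max(score_logbook, key=lambda entry: entry["date"])["score"]

print(current_score(technique["detection"]["score_logbook"]))   # 3
print(current_score(technique["visibility"]["score_logbook"]))  # 2
```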
Another use-case for the auto-update is when MITRE ATT&CK introduces new techniques, changes the data sources listed for a technique, or introduces new data sources.
Start with manually determining your detection score per technique in the technique administration YAML file. Detection scores are rated from -1 to 5. The explanation of the scores can be found here: detection scores. Use the score that fits best. It is possible to have multiple scores per technique that apply to different kinds of systems using the applicable_to property. In addition, you can keep track of changes in the scores by having multiple score objects within a score_logbook.
A next step can be to generate an ATT&CK Navigator layer file based on the scores you have determined per technique in the YAML administration file. The detection scores in the YAML file are also used to colour the techniques in the layer file. This gives you an overview of your detection coverage:
python dettect.py d -ft sample-data/techniques-administration-endpoints.yaml -l
![DeTT&CT - Detection coverage](images/example_detection.png)
Generate an ATT&CK Navigator layer file based on threat actor group data in ATT&CK, or on your own threat actor data stored in a YAML file.
The below-generated layer file contains a heat map based on all threat actor data within ATT&CK. The darker the colour in the heat map, the more often the technique is used among groups. Please note that, like all data, this data contains bias, as very well explained by MITRE in Building an ATT&CK Sightings Ecosystem.
python dettect.py g
![DeTT&CT - Groups heat map](images/example_groups.png)
It is also possible to create a heat map based on a subset of groups present in ATT&CK:
python dettect.py g -g 'fin7, cobalt group'
Or based on threat actor data you store in a YAML group administration file:
python dettect.py g -g sample-data/groups.yaml
![DeTT&CT - Red team heat map](images/example_group_red_team.png)
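The structure of such a group administration YAML file is roughly as follows. The group name, campaign and technique IDs below are made up; use the sample file sample-data/groups.yaml as the authoritative template:

```yaml
file_type: group-administration
version: 1.0
groups:
  - group_name: Red team exercise
    campaign: 2019 Q2
    technique_id: [T1189, T1003, T1086]
    enabled: true
```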
Read the help for group for all available functionality, including how threat actor groups can be compared: python dettect.py g -h
A groups YAML file with either data on a red team exercise or a specific threat actor group can be compared with your detection or visibility. DeTT&CT can generate an ATT&CK Navigator layer file in which the differences are visually shown with a legend explaining the colours.
python dettect.py g -g sample-data/groups.yaml -o sample-data/techniques-administration-endpoints.yaml -t detection
![DeTT&CT - Compare red team with detection](images/example_group_red_team_overlay_detection.png)
It is possible to compare your visibility and detection coverage in one ATT&CK Navigator layer file. This will give you insight into where you have visibility, detection or both.
python dettect.py d -ft sample-data/techniques-administration-endpoints.yaml -fd sample-data/data-sources-endpoints.yaml -o
# or:
python dettect.py v -ft sample-data/techniques-administration-endpoints.yaml -fd sample-data/data-sources-endpoints.yaml -o
Using the command python dettect.py generic --statistics, we can determine which data sources within ATT&CK cover the largest number of techniques:
Count Data Source
--------------------------------------------------
169 Process monitoring
97 Process command-line parameters
97 File monitoring
43 API monitoring
39 Process use of network
36 Packet capture
36 Windows Registry
28 Authentication logs
27 Netflow/Enclave netflow
22 Network protocol analysis
22 Windows event logs
18 DLL monitoring
18 Binary file metadata
13 Loaded DLLs
9 SSL/TLS inspection
9 Network intrusion detection system
9 System calls
9 Malware reverse engineering
8 Network device logs
7 Kernel drivers
7 Anti-virus
6 Application logs
6 Data loss prevention
4 Web logs
4 Services
4 PowerShell logs
4 Email gateway
4 Web proxy
4 Windows Error Reporting
4 User interface
4 Host network interface
3 Web application firewall logs
3 BIOS
3 MBR
3 Third-party application logs
2 Sensor health and status
2 Component firmware
2 DNS records
2 Detonation chamber
2 Mail server
2 Environment variable
1 Asset management
1 Browser extensions
1 Access tokens
1 Digital certificate logs
1 Disk forensics
1 WMI Objects
1 VBR
1 Named Pipes
1 EFI