
THIS PROJECT IS ARCHIVED (read-only since Mar 29, 2023; see 'Known issues and limitations' before using the Terraform configuration)


DFIR Lab

Blog post - Automating DFIR using Cloud services

NOTE: Before using this project in production, please read the full Terraform configuration. This project is just a proof of concept for a school assignment, built with a student account, free GCP credits and a few Velociraptor clients for testing purposes.

The goal of this project is to create a DFIR Lab in the Cloud by using the elasticity, scalability and availability of Cloud services. I am a fan of GCP, which is why I use their services to deploy this lab, but the same setup can also be built on AWS, Azure or any other Cloud provider with a comparable set of services.

The lab can be used in cases where you, as an incident responder, want to analyse Plaso timelines of Windows systems.

  1. Hunt for compromised systems using various Velociraptor hunts (my favorite for ransomware investigations is the artifact Windows.Search.FileFinder, used to search for ransom notes).
  2. Acquire forensic artifacts from compromised systems with the Velociraptor artifact Windows.KapeFiles.Targets.
  3. Process the forensic artifacts with Plaso.
  4. Upload the timelines to Timesketch.
  5. Analyse the timelines in Timesketch.

NOTE: Steps 2, 3 and 4 are performed independently of each other for each system using GCP Pub/Sub and Cloud Functions.
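Because each stage is triggered by a Pub/Sub message, a single stage can also be re-triggered by hand. A minimal sketch, assuming a hypothetical topic name plaso-process and message format (the real topic names are defined in the Terraform configuration):

    # Hypothetical topic and message format; list the real topics with:
    #   gcloud pubsub topics list
    gcloud pubsub topics publish plaso-process --message='{"system": "WIN-EXAMPLE-01"}'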

The diagram below shows the flow:

[Figure: overview]

This project is inspired by:

How to use the lab

Prerequisites:

  • Terraform

  1. Initialize Terraform: terraform init

  2. Fill in the environments.tfvars file with the following variables:

    gcp_project                        = "evident-zone-335315"
    gcp_region                         = "europe-west4"
    gcp_zone                           = "europe-west4-a"
    project_name                       = "rotterdam"
    domain_name                        = "lab.zawadidone.nl"
    gcp_timesketch_machine_type_web    = "c2-standard-4"
    gcp_timesketch_machine_type_worker = "c2-standard-4"
  3. Log in to GCP:

    gcloud auth application-default login

  4. Plan the Terraform configuration.

    terraform plan -var-file=environments.tfvars

  5. Apply the Terraform configuration. Provisioning the Google-managed certificates, Filestore instances and Cloud SQL databases can take longer than 15 minutes.

    terraform apply -var-file=environments.tfvars

  6. Set the external IP addresses used by Velociraptor and Timesketch in your DNS A records (see the example after this list).

  7. Add the Private Service Connect ID for the Elasticsearch deployment.

  8. Use the Velociraptor and Timesketch passwords to log in using the username admin.

    terraform output velociraptor_password
    terraform output timesketch_password
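For step 6, if the domain is hosted in Cloud DNS, the A records can be created with gcloud. A sketch, assuming a hypothetical managed zone lab-zone and placeholder hostnames and IP addresses:

    # Zone name, hostnames and IP addresses are assumptions; substitute your own values
    gcloud dns record-sets create velociraptor.lab.zawadidone.nl. \
      --zone=lab-zone --type=A --ttl=300 --rrdatas=<VELOCIRAPTOR_IP>
    gcloud dns record-sets create timesketch.lab.zawadidone.nl. \
      --zone=lab-zone --type=A --ttl=300 --rrdatas=<TIMESKETCH_IP>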

Because I use this project on GCP with limited credits, I always destroy the configuration after developing it.

terraform destroy -var-file=environments.tfvars -auto-approve

Debug issues

If one of the compute instances doesn't work because of a bug in the startup script, the responsible service can be inspected as follows:

sudo journalctl -u google-startup-scripts.service # show log for debugging purposes
/usr/bin/google_metadata_script_runner startup # execute startup script again
sudo docker restart timesketch-web # restart timesketch which is stuck

Timesketch

Sometimes Timesketch shows errors like the ones below while uploading timelines.

[2022-03-18 14:03:19,553] timesketch.lib.sigma/ERROR None # at the start
[2022-03-17 21:16:27 +0000] [10] [ERROR] Socket error processing request. # after uploading timeline using the gui
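To see the full context of such errors, the container logs can be inspected on the web instance (a sketch; the timesketch-web container name comes from the debug commands above, the worker container name is an assumption):

    sudo docker logs --tail 100 timesketch-web    # recent web container logs
    sudo docker logs --tail 100 timesketch-worker # worker logs; container name assumed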

Start the hunts

  1. Log in to Timesketch and create a sketch with the ID 1.
  2. Log in to Velociraptor.
  3. Deploy Velociraptor clients using the configuration and executables added to the Google Storage Bucket in the folder velociraptor-clients.
  4. Open Server Event Monitoring, select the artifact Server.Utils.BackupGCS and configure the required parameters.
  5. Configure Hunt
  6. Select Artifact Windows.KapeFiles.Targets
  7. Select the following parameters:
    • UseAutoAccessor
    • VSSAnalysis
    • _SANS_Triage
    • DontBeLazy
  8. Specify the following Resources:
    • Max Execution Time in Seconds: 999999999
  9. Review the hunt
  10. Launch and run the hunt
  11. Wait until the Pub/Sub pipeline has processed the hunt collections and timelines (see the sketch after this list).
  12. Go to Timesketch and analyse the new timelines.
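How long step 11 takes depends on the number of systems and the size of the collections. As a rough check on whether messages are still queued, the subscription backlog can be pulled without acknowledging. A sketch with a hypothetical subscription name:

    # Hypothetical name; list the real subscriptions with:
    #   gcloud pubsub subscriptions list
    gcloud pubsub subscriptions pull plaso-subscription --limit=5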

Components

The project uses the following software packages with the corresponding licenses:

Project      License
Velociraptor AGPLv3
Timesketch   Apache License 2.0

The current setup only supports Velociraptor as a single node, but it is possible to add minion nodes to the frontend backend service and the single master to the GUI backend service. This way the clients connect to the minion nodes (frontend) and analysts connect to the master node (GUI).

Scaling options:

  • Adjust the instance type used by Velociraptor
  • Add Velociraptor minions, which can serve the frontend backend service by implementing multi-frontend (see the sketch after this list)
  • Change the Filestore tier
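As a sketch of the multi-frontend setup described above, a minion shares the server configuration with the master and runs the frontend in minion mode (the config path is an assumption; the required ExtraFrontends settings are described in the Velociraptor multi-frontend documentation):

    # On a minion node; config path assumed
    velociraptor --config server.config.yaml frontend --minion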

[Figure: velociraptor-overview]

Processing (Pub/Sub)

[Figure: processing-overview]

Scaling options:

  • Adjust the instance type used by Plaso
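In this project the instance type would normally be changed through the Terraform configuration, but as a sketch of doing it directly with gcloud (the instance name plaso-worker is hypothetical, and the instance must be stopped first):

    gcloud compute instances stop plaso-worker --zone=europe-west4-a
    gcloud compute instances set-machine-type plaso-worker \
      --machine-type=c2-standard-8 --zone=europe-west4-a
    gcloud compute instances start plaso-worker --zone=europe-west4-a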

Timesketch

Scaling options:

  • Adjust the instance types used by the Timesketch web and worker instances, Elasticsearch, PostgreSQL or Redis
  • Increase the target size of the backend services timesketch-web and timesketch-worker
  • Change the Filestore tier
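The Timesketch machine types come from the variables in environments.tfvars, so one way to scale up is to override them on the next apply. For example, moving the web instances from c2-standard-4 to c2-standard-8:

    terraform apply -var-file=environments.tfvars \
      -var='gcp_timesketch_machine_type_web=c2-standard-8'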

[Figure: timesketch-overview]

Known issues and limitations
