st-mn/autobook

🛡️ Autobook - ML Security Incident Response Runbooks Automation

Executable Jupyter notebook runbooks for detecting, investigating, and responding to security incidents in ML/AI infrastructure.

📋 Overview

This repository contains security incident response runbooks designed for security operations teams protecting ML systems. Each runbook is an interactive Jupyter notebook that guides security engineers through the complete incident response lifecycle.

Key Features

  • Executable Investigation - Run analysis code directly in notebooks
  • GCP Integration - Built-in support for Cloud Logging, Vertex AI, BigQuery, KMS
  • MITRE Mapping - Aligned to ATT&CK and ATLAS frameworks
  • Evidence Collection - SHA-256 hashed evidence packages for forensics
  • Approval Gates - Containment actions require explicit approval
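The evidence-hashing feature amounts to fingerprinting every collected file so tampering is detectable later. A minimal sketch of how a runbook cell might build such a manifest (function names are illustrative, not the runbooks' actual API):

```python
import hashlib
from pathlib import Path


def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 so large evidence files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(evidence_dir: str) -> dict:
    """Map each collected file to its hash for later integrity verification."""
    return {
        p.name: sha256_file(p)
        for p in sorted(Path(evidence_dir).glob("*"))
        if p.is_file()
    }
```

Storing the manifest alongside the evidence package lets anyone re-hash the files later and confirm nothing changed in transit.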

📚 Runbooks

#    Runbook                           Severity      MITRE Reference
01   ML Model Exfiltration from GCS    🔴 HIGH       T1530, T1567.002
02   ML Training Job Manipulation      🔴 CRITICAL   T1565.001
03   ML Inference API Abuse            🔴 HIGH       AML.T0024
04   K8s ML Workload Compromise        🔴 CRITICAL   T1611
05   ML Pipeline Code Injection        🔴 HIGH       T1195.002
06   AR Device Firmware Tampering      🔴 CRITICAL   T1195.002, T1542
07   ML Training Data Poisoning        🔴 HIGH       AML.T0020
08   ML Model Registry Tampering       🔴 CRITICAL   T1195.002, T1565.001
09   Adversarial ML Input Detection    🟡 MEDIUM     AML.T0015, AML.T0016
10   ML Secrets Exposure               🔴 HIGH       T1552.001, T1552.004

🏗️ Runbook Structure

Each runbook follows a standardized 7-section incident response workflow:

1️⃣ Initial Triage & Alert Validation
   └── Configure alert parameters, validate detection

2️⃣ Scope Assessment
   └── Identify all affected resources and blast radius

3️⃣ Detailed Analysis
   └── Deep investigation with audit log queries

4️⃣ Impact Assessment
   └── Determine business impact and risk level

5️⃣ Containment Actions ⚠️
   └── Approval-gated response actions

6️⃣ Evidence Collection
   └── Forensic evidence with integrity hashing

7️⃣ Remediation Checklist
   └── Recovery procedures and hardening steps
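The audit-log queries in sections 2 and 3 generally revolve around a Cloud Logging filter scoped to a window around the alert timestamp. A hedged sketch of such a filter builder (the helper name, field choices, and 24-hour default window are assumptions, not the runbooks' code):

```python
from datetime import datetime, timedelta


def audit_filter(service: str, alert_ts: str, window_hours: int = 24) -> str:
    """Build a Cloud Logging filter covering a window around the alert time.

    alert_ts is an RFC 3339 timestamp such as "2024-01-15T10:30:00Z".
    """
    ts = datetime.fromisoformat(alert_ts.replace("Z", "+00:00"))
    start = (ts - timedelta(hours=window_hours)).isoformat()
    end = (ts + timedelta(hours=window_hours)).isoformat()
    return (
        f'protoPayload.serviceName="{service}" '
        f'AND timestamp>="{start}" AND timestamp<="{end}"'
    )
```

The resulting string can then be passed to `google.cloud.logging.Client().list_entries(filter_=...)` inside a notebook cell to pull the matching audit entries.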

🚀 Quick Start

Prerequisites

# Required Python packages
pip install google-cloud-storage google-cloud-logging google-cloud-aiplatform \
            google-cloud-bigquery google-cloud-kms pandas numpy

GCP Authentication

# Authenticate with GCP
gcloud auth application-default login

# Set project
gcloud config set project YOUR_PROJECT_ID

Using a Runbook

  1. Open the relevant runbook in Jupyter/VS Code
  2. Update the Configuration cell with alert details:
    ALERT_TIMESTAMP = "2024-01-15T10:30:00Z"
    PROJECT_ID = "your-gcp-project-id"
    # ... other alert-specific parameters
  3. Execute cells sequentially through investigation phases
  4. Set CONTAINMENT_APPROVED = True only after manager approval
  5. Export evidence package to secure storage

🔧 Configuration

Required GCP APIs

Enable these APIs in your project:

  • Cloud Logging API
  • Cloud Storage API
  • Vertex AI API
  • BigQuery API
  • Cloud KMS API
  • IAM API

Required IAM Roles

The service account running these runbooks needs:

  • roles/logging.viewer - Read audit logs
  • roles/storage.objectViewer - Access GCS buckets
  • roles/aiplatform.viewer - View Vertex AI resources
  • roles/bigquery.dataViewer - Query BigQuery
  • roles/iam.securityReviewer - Review IAM policies

Evidence Storage

Create an evidence bucket with restricted access:

gsutil mb -l us-central1 gs://security-incident-evidence
gsutil iam ch serviceAccount:YOUR_SA@PROJECT.iam.gserviceaccount.com:objectCreator \
    gs://security-incident-evidence

📊 MITRE Framework Mapping

ATT&CK Techniques Covered

Technique ID   Name                               Runbooks
T1530          Data from Cloud Storage            01
T1565.001      Stored Data Manipulation           02, 08
T1611          Escape to Host                     04
T1195.002      Compromise Software Supply Chain   05, 06, 08
T1542          Pre-OS Boot                        06
T1552.001      Credentials In Files               10
T1552.004      Private Keys                       10
T1567.002      Exfiltration to Cloud Storage      01

ATLAS Techniques Covered

Technique ID   Name                                Runbooks
AML.T0015      Evade ML Model                      09
AML.T0016      Obtain Capabilities                 09
AML.T0020      Poison Training Data                07
AML.T0024      Exfiltration via ML Inference API   03

⚠️ Disclaimer: These runbooks are templates. Customize containment actions and thresholds for your environment before production use.
