Skip to content

WaypointCA/ai-security-lab

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

1 Commit
 
 
 
 
 
 

The Free Self-Directed AI Security Mastery Path

Zero-Cost Route to Elite AI Security Expertise

License: MIT Contributions Welcome Last Updated

Platform: Optimized for Kubuntu/Ubuntu 24.04 LTS with Python virtual environments


Why This Exists

The AI security training market charges $5,000-$15,000 for courses teaching skills you can learn for free. This repository is a complete, structured learning path built from publicly available resources, open-source tools, and hands-on labs.

This path is designed for:

  • Security professionals expanding into AI/ML security
  • Penetration testers wanting to add LLM red teaming skills
  • Blue teamers building AI defense capabilities
  • Anyone with security fundamentals ready to specialize

What you'll learn:

  • Prompt injection, jailbreaking, and LLM exploitation
  • Adversarial machine learning attacks and defenses
  • RAG poisoning and agent security
  • Deepfake detection and synthetic media forensics
  • AI security governance and compliance frameworks

Time commitment: 2-3 hours daily for 36 weeks (or accelerate based on your background)


Table of Contents


Environment Setup

🚀 Quick Start (Kubuntu/Ubuntu 24.04)

Ubuntu 24.04+ enforces PEP 668 to protect system Python. We use virtual environments (the professional approach).

Option 1: Automated Setup (Recommended)

# Clone this repository
git clone https://github.com/WaypointCA/ai-security-lab.git
cd ai-security-lab

# Run the complete setup script
bash scripts/setup_ai_security_lab.sh

Option 2: Quick Manual Setup

# Install Python venv support
sudo apt install python3-full python3-venv python3-pip

# Create and activate virtual environment
cd ~/ai-security-lab
python3 -m venv venv
source venv/bin/activate

# Install packages
pip install --upgrade pip
pip install -r requirements.txt

Option 3: Minimal 2-Minute Start

bash scripts/quick_start.sh

📁 Repository Structure

ai-security-lab/
├── README.md                      # This file
├── LICENSE                        # MIT License
├── CONTRIBUTING.md                # Contribution guidelines
├── requirements.txt               # Full Python dependencies
├── requirements-minimal.txt       # Minimal dependencies to start
├── scripts/
│   ├── setup_ai_security_lab.sh  # Complete setup automation
│   ├── quick_start.sh            # 2-minute quick start
│   ├── activate.sh               # Daily environment activation
│   └── test_setup.py             # Verify installation
├── phases/
│   ├── phase1-foundation/
│   ├── phase2-llm-security/
│   ├── phase3-adversarial-ml/
│   ├── phase4-lab-setup/
│   ├── phase5-deepfakes/
│   ├── phase6-purple-team/
│   ├── phase7-real-world/
│   └── phase8-advanced/
├── tools/                         # Cloned security tools
├── labs/                          # Lab exercises by week
├── projects/                      # Your security projects
└── venv/                          # Python virtual environment (auto-created)

🔧 Daily Workflow

Start Your Day

# Using the alias (if you set it up)
ai-lab

# Or manually
cd ~/ai-security-lab
source venv/bin/activate

Verify Environment

python scripts/test_setup.py

Update Tools

pip install --upgrade -r requirements.txt

💡 Pro Setup Tips

Add Convenient Alias

echo "alias ai-lab='cd ~/ai-security-lab && source venv/bin/activate'" >> ~/.bashrc
source ~/.bashrc
# Now just type: ai-lab

VS Code Integration

  1. Open folder in VS Code: code ~/ai-security-lab
  2. Ctrl+Shift+P → "Python: Select Interpreter"
  3. Choose ./venv/bin/python
  4. VS Code auto-activates venv in terminals

Jupyter Setup

# After activating venv
pip install jupyter ipykernel
python -m ipykernel install --user --name=ai-security
# Now select "ai-security" kernel in Jupyter

Phase 1: Foundation (Weeks 1-4)

Learn Just Enough ML to Break It

Week 1-2: ML Fundamentals for Attackers

Free Learning Resources:

Hands-On Labs (Free):

Required Reading:

Week 1 Lab Exercises:

# In your activated environment
cd ~/ai-security-lab/labs/week01
jupyter lab week01_ml_basics.ipynb
  • Train a basic image classifier on CIFAR-10
  • Generate your first adversarial example using FGSM
  • Document the attack in a blog post

Week 3-4: AI Security Frameworks

Study Materials (All Free):

YouTube Channels to Follow:

Practical Exercises:

  • Map 5 recent AI breaches to MITRE ATLAS tactics
  • Create your own threat model for a hypothetical AI system
  • Join the OWASP Slack (#project-top10-for-llm channel)

Phase 2: Offensive LLM Security (Weeks 5-12)

Master Prompt Injection and Jailbreaking

Week 5-6: Prompt Injection Mastery

Free Resources:

Install and Master These Tools:

# In your activated venv
pip install garak promptfoo
git clone https://github.com/Azure/PyRIT.git tools/PyRIT
cd tools/PyRIT && pip install -e .

Tools Documentation:

Practice Targets (All Free):

Week 5-6 Lab Setup:

# Install Ollama for local testing
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama2
ollama pull mistral

# Test with garak
garak --model_type ollama --model_name llama2

Week 7-8: Advanced Jailbreaking

Research Papers (Free on arXiv):

Free Labs:

cd ~/ai-security-lab/labs/week07
python jailbreak_automation.py
  1. Set up Ollama with multiple models
  2. Test jailbreaks across models for transferability
  3. Build automated jailbreak generator using PyRIT
  4. Document 10 working jailbreaks with success rates

Week 9-10: Model Extraction and Privacy Attacks

Learning Resources:

Free API Practice:

Week 11-12: RAG and Agent Security

Build Your Lab:

# In activated venv
cd ~/ai-security-lab

# Pull models for RAG testing
ollama pull llama2
ollama pull mistral
ollama pull mixtral

# Install AnythingLLM
docker pull mintplexlabs/anythingllm
docker run -d -p 3001:3001 --name anythingllm mintplexlabs/anythingllm

Free Learning Path:

Projects to Build:

  • Poison a RAG system's knowledge base
  • Exploit agent tool-calling with malicious prompts
  • Chain attacks across multiple agents
  • Write detailed attack methodology

Phase 3: Classical Adversarial ML (Weeks 13-16)

Beyond LLMs - Computer Vision and Traditional ML

Week 13-14: Adversarial Examples

Install Frameworks in venv:

pip install adversarial-robustness-toolbox foolbox cleverhans textattack

Free Frameworks:

Learning Resources:

Build These Attacks:

# Week 13 lab template
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier
# Your code here
  • FGSM, PGD, C&W against ImageNet models
  • Physical adversarial patches (printable)
  • Black-box attacks with limited queries
  • Transfer attacks between models

Week 15-16: Data Poisoning

Critical Research (Free):

Hands-On Experiments:

  • Poison MNIST classifier with 1% bad data
  • Implement backdoor trigger in neural network
  • Clean-label poisoning attack
  • Test defenses and bypass them

Phase 4: Building Your Security Lab (Weeks 17-20)

Professional Testing Environment at Zero Cost

Week 17-18: Local AI Security Lab

Complete Lab Stack Setup:

# After environment activation
cd ~/ai-security-lab

# Install full requirements
pip install -r requirements-full.txt

Vulnerable Apps to Deploy:

cd ~/ai-security-lab/tools
git clone https://github.com/dhammon/ai-goat.git
git clone https://github.com/DamnVulnerableLLM/DamnVulnerableLLM.git
git clone https://github.com/orcasecurity-research/AIGoat.git
git clone https://github.com/AImaginationLab/vulnerable-llms.git

Vulnerable App Links:

Week 19-20: Automation and Tooling

Build Your Own Tools:

# Template structure
cd ~/ai-security-lab/projects
mkdir prompt-injection-scanner
cd prompt-injection-scanner
touch scanner.py requirements.txt README.md
  • Prompt injection scanner (Python + Garak)
  • Automated jailbreak tester
  • Model extraction framework
  • RAG poisoning toolkit

Free Infrastructure:


Phase 5: Deepfakes and Synthetic Media (Weeks 21-22)

Understanding the $25M Threat Vector

Week 21: Generation Techniques

Free Tools:

Detection Tools (Free):

Week 22: Defense and Detection

Build These Projects:

  • Deepfake detection model using ResNet
  • Audio deepfake classifier
  • Metadata forensics toolkit
  • Corporate defense playbook

Phase 6: Purple Team Integration (Weeks 23-26)

Bridging Offense and Defense

Week 23-24: Blue Team Defenses

Free Defensive Tools:

SIEM Integration (Free):

Week 25-26: Compliance and Governance

Free Compliance Resources:

Build Compliance Artifacts:

  • AI Risk Register template
  • Assessment questionnaires
  • Governance framework document
  • Vendor assessment checklist

Phase 7: Real-World Practice (Weeks 27-30)

Building Your Portfolio

Week 27-28: Bug Bounties and CTFs

Free Practice Platforms:

Strategy for Success:

  1. Start with programs explicitly mentioning AI/LLM
  2. Focus on prompt injection and data leakage initially
  3. Document everything for portfolio
  4. Aim for 5-10 valid submissions

Week 29-30: Open Source Contributions

High-Value Contribution Targets:

Portfolio Projects:

  • Security assessment of popular open-source AI project
  • White paper on emerging AI threat
  • Tool release on GitHub
  • Blog series (10+ technical posts)

Phase 8: Advanced Specialization (Weeks 31-36)

Becoming the Expert

Week 31-32: Nation-State TTPs

Intelligence Sources (Free):

Analysis Framework:

  • Track 5 APT groups using AI
  • Map their TTPs to MITRE ATLAS
  • Build detection strategies
  • Publish threat brief

Week 33-34: Cutting-Edge Research

Stay Current With:

Twitter/X Accounts to Follow:

  • Nicholas Carlini (@nicholas_carlini)
  • Simon Willison (@simonw)
  • Kai Greshake (@KGreshake)
  • Rich Harang (@rharang)
  • Will Pearce (@moo_hax)

Week 35-36: Capstone Project Options

Option 1: Security Tool Development

  • Build comprehensive AI security scanner
  • 1000+ stars on GitHub as goal
  • Full documentation and examples
  • Conference talk submission

Option 2: Major Vulnerability Research

  • Find significant vulnerability in popular AI system
  • Responsible disclosure
  • Detailed write-up
  • Conference presentation

Option 3: Defensive Framework

  • Complete security framework for specific industry
  • Open-source release
  • Implementation guides
  • Community adoption

Essential Resources

Documentation and Frameworks

Learning Platforms (Free)

Community and Forums

Free Compute Resources


Progress Tracking

6-Month Milestones

Technical Skills

  • Execute 10 different attack types
  • Build 5 security tools
  • Find 10 valid vulnerabilities
  • Complete 20 CTF challenges
  • Contribute to 3 major projects

Portfolio Development

  • 20+ technical blog posts
  • 1000+ GitHub commits
  • 5+ detailed vulnerability write-ups
  • 3+ conference talk proposals
  • 10+ tool demonstrations

Professional Growth

  • 500+ LinkedIn connections in AI security
  • 3+ consulting inquiries received
  • 1+ paid engagement completed
  • Known expert in 3+ communities

Weekly Time Commitment

Daily (2-3 hours)

  • Morning (30 min): Read one research paper, review security news
  • Lunch (30 min): Watch one conference talk, practice one CTF challenge
  • Evening (1-2 hours): Hands-on lab work, tool development, blog writing

Weekend (4-6 hours):

  • Deep dive into new attack technique
  • Build/improve security tool
  • Write comprehensive blog post
  • Participate in CTF or bug bounty

Requirements Files

requirements.txt (Core Dependencies)

# Essential AI Security Tools
garak
promptfoo
adversarial-robustness-toolbox
foolbox
textattack

# Core ML Libraries
torch
transformers
tensorflow
numpy
pandas
scikit-learn

# LLM Tools
langchain
openai
anthropic

# Development
jupyter
streamlit
gradio

requirements-minimal.txt (Quick Start)

numpy
pandas
requests
jupyter
notebook

See requirements-full.txt for complete dependencies including all frameworks and tools.


Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Ways to contribute:

  • Add new resources you discover
  • Update links that have changed
  • Share lab solutions (ethically)
  • Contribute tools back to the community
  • Report issues or suggest improvements

License

This project is licensed under the MIT License - see the LICENSE file for details.


About

This learning path was compiled and is maintained by Waypoint Compliance Advisory LLC, a Service-Disabled Veteran-Owned Small Business (SDVOSB) specializing in cybersecurity consulting, CMMC compliance, and AI security assessments.

Contact: info@waypointcompliance.com


The knowledge is free. The tools are free. The communities are free. Your only investment is time and determination.

Last Updated: January 2026

Releases

No releases published

Packages

No packages published