
Sentinel-Red: Autonomous Red Teaming for Web3 Ecosystems

Sentinel-Red is a proactive security framework designed to bridge the gap between static audits and real-time monitoring. While traditional tools alert you after a transaction hits the mempool, Sentinel-Red operates in a continuous "Shadow Environment," using AI agents to actively attempt to exploit a protocol's logic on a localized fork.

🛡️ The Problem: The "Logic-Gap" in DeFi

Most DeFi exploits in 2025 and 2026 are not simple code bugs; they are Economic Logic Exploits. These involve complex sequences of flash loans, multi-protocol interactions, and oracle price manipulation that static analysis tools cannot catch.

🧠 Technical Architecture: Adversarial RL & LLM-Fuzzing

Sentinel-Red utilizes a Neuro-Symbolic approach to security. It combines the structured world of formal verification with the creative exploration of AI agents.

The Attack Loop

  1. State Synthesis: The agent mirrors the current mainnet state of a protocol (e.g., Aave, Uniswap) onto a local Hardhat/Anvil fork.
  2. LLM-Augmented Fuzzing: A Large Language Model (specifically fine-tuned on EVM transaction traces) generates "Attack Hypotheses" based on the protocol's documentation and smart contract source code.
  3. RL-driven Execution: A Reinforcement Learning agent (PPO-based) attempts to maximize a "Profit Reward Function" by executing the hypothesized attack sequence across the forked environment.
  4. Vulnerability Attribution: When an exploit is successful (Profit > 0), the system generates a cryptographic proof of the vulnerability and a human-readable forensic report.
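The four stages above can be sketched as a simple loop. This is an illustrative outline only: the class and function names (`AttackHypothesis`, `generate_hypotheses`, `execute_on_fork`, `attack_loop`) are hypothetical stand-ins, not the actual Sentinel-Red API, and the LLM/RL stages are stubbed out.

```python
# Hypothetical sketch of the four-stage attack loop.
# All names are illustrative; the real LLM and RL components are stubbed.
from dataclasses import dataclass, field


@dataclass
class AttackHypothesis:
    description: str
    tx_sequence: list  # ordered (contract, function, args) tuples


@dataclass
class ForkResult:
    profit_wei: int
    trace: list = field(default_factory=list)


def generate_hypotheses(source_code: str, docs: str) -> list[AttackHypothesis]:
    """Stage 2: an LLM would propose candidate attack sequences here."""
    # Stub: a real system would prompt a model fine-tuned on EVM traces.
    return [AttackHypothesis("drain via stale oracle price", tx_sequence=[])]


def execute_on_fork(hypothesis: AttackHypothesis) -> ForkResult:
    """Stage 3: an RL policy would tune the tx sequence to maximise profit."""
    # Stub: a real system would replay transactions on an Anvil/Hardhat fork.
    return ForkResult(profit_wei=0)


def attack_loop(source_code: str, docs: str) -> list[AttackHypothesis]:
    """Stages 1-4: return only hypotheses that yielded profit on the fork."""
    confirmed = []
    for hypothesis in generate_hypotheses(source_code, docs):
        result = execute_on_fork(hypothesis)
        if result.profit_wei > 0:  # Stage 4 trigger: exploit confirmed
            confirmed.append(hypothesis)
    return confirmed
```

In a full implementation, a confirmed hypothesis would then be handed to the attribution stage to produce the proof and forensic report.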

🏗️ Repository Structure

```
sentinel-red/
├── agents/
│   ├── hypothesis_gen.py   # LLM-based attack vector generator
│   └── attack_executor.py  # RL-based transaction sequence optimizer
├── env/
│   ├── foundry_fork.py     # Interface for local EVM state mirroring
│   └── reward_engine.py    # Logic for calculating "Exploit Profitability"
├── benchmarks/
│   └── historic_exploits/  # Dataset of 2024-2025 exploits for training
├── reports/
│   └── vulnerability_lab/  # Automated forensic exploit reports (PDF/JSON)
└── configs/
    └── protocols/          # Protocol-specific ABI and invariant definitions
```
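For a sense of what a per-protocol definition under `configs/protocols/` might contain, here is a hypothetical entry. The field names (`fork_block`, `contracts`, `invariants`, etc.) are assumptions for illustration, not the project's actual schema:

```python
# Hypothetical protocol config entry; field names are illustrative only.
EXAMPLE_PROTOCOL = {
    "name": "example-lending",
    "fork_block": "latest",          # which mainnet block to mirror
    "contracts": {
        "pool": "<pool address>",    # placeholder, resolved per deployment
    },
    "invariants": [
        # Named predicates over on-chain state that must always hold.
        {"id": "solvency",
         "check": "total_collateral_value >= total_debt_value"},
        {"id": "oracle-freshness",
         "check": "block.timestamp - oracle.updatedAt < 3600"},
    ],
}
```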

🚀 Key Features

  • Shadow-Mainnet Mirroring: Instantly forks any protocol's state to test "what-if" scenarios in a zero-risk environment.
  • Multi-Protocol Awareness: Sentinel-Red understands how changes in one protocol (e.g., a Curve pool imbalance) can create vulnerabilities in another (e.g., a lending protocol using that pool as an oracle).
  • Formal Invariant Enforcement: The agent is constrained by safety invariants; its goal is to find an execution path under which those invariants break.

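Invariant enforcement reduces to evaluating a set of named predicates against the forked state after each candidate transaction sequence. A minimal sketch of this check, assuming a hypothetical `check_invariants` helper rather than the project's actual `reward_engine` API:

```python
# Minimal sketch of invariant checking on a forked state snapshot.
# `check_invariants` is a hypothetical helper, not the real API.
def check_invariants(state: dict, invariants: list) -> list[str]:
    """Return the ids of invariants whose predicate fails for `state`."""
    return [inv["id"] for inv in invariants if not inv["predicate"](state)]


invariants = [
    {"id": "solvency", "predicate": lambda s: s["collateral"] >= s["debt"]},
]

# A healthy state violates nothing...
assert check_invariants({"collateral": 100, "debt": 80}, invariants) == []
# ...while an undercollateralised state flags the solvency invariant.
assert check_invariants({"collateral": 50, "debt": 80}, invariants) == ["solvency"]
```

Any non-empty result would mark the transaction sequence as a confirmed invariant break worth attributing and reporting.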
📚 Research & Citations

This project is built upon the following 2025 and 2026 research breakthroughs in Web3 Security:

  • Park, S., et al. (2025). "Adversarial Reinforcement Learning for Cross-Chain Logic Verification." International Journal of Blockchain Security.
  • Vukovic, D., & Thorne, J. (2026). "LLM-Guided Symbolic Execution in EVM Environments: A New Frontier for Smart Contract Auditing." Web3 Security Symposium.
  • Sentora Research Labs (2025). "The Evolution of Real-time Threat Intelligence in Decentralized Finance." (Whitepaper Inspiration).

About

An autonomous security agent for Web3 protocols that utilizes Reinforcement Learning (RL) and Large Language Models (LLMs) to continuously simulate "Zero-Day" attack vectors on local protocol forks. It identifies sophisticated logic flaws, such as oracle manipulation and cross-protocol dependencies, before they manifest on the mainnet.
