Guardian

Guardian is a system that uses AI + Chainlink Runtime Environment (CRE) to detect critical vulnerabilities in smart contracts and trigger on-chain emergency protection.

Instead of relying only on manual audits or delayed bug bounty reviews, contracts can react automatically when a critical vulnerability is detected.


The Problem

Smart contract security today has several limitations:

  • Security monitoring is not continuous
  • Bug bounty validation requires human reviewers
  • Protocols cannot react automatically when a vulnerability is identified

The Idea

Guardian introduces AI-assisted autonomous protection.

  1. Anyone can request an AI audit for a deployed contract
  2. The system analyzes the contract source code using an AI security model
  3. If a critical vulnerability is detected, the contract automatically executes an emergency protection action

Examples of protection actions:

  • pause the protocol
  • disable withdrawals
  • activate a circuit breaker
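The protection actions above can be modeled as a small state transition, here sketched in TypeScript as an in-memory model (action names and state fields are illustrative; the real action is whatever each protocol implements on-chain):

```typescript
// Hypothetical emergency actions a protected protocol might support.
type ProtectionAction = "pause" | "disableWithdrawals" | "circuitBreaker";

interface ProtocolState {
  paused: boolean;
  withdrawalsEnabled: boolean;
  circuitBreakerActive: boolean;
}

// Apply an emergency action to an in-memory model of protocol state.
function applyAction(state: ProtocolState, action: ProtectionAction): ProtocolState {
  switch (action) {
    case "pause":
      return { ...state, paused: true };
    case "disableWithdrawals":
      return { ...state, withdrawalsEnabled: false };
    case "circuitBreaker":
      // A circuit breaker typically also halts normal operation.
      return { ...state, circuitBreakerActive: true, paused: true };
  }
}

const initial: ProtocolState = {
  paused: false,
  withdrawalsEnabled: true,
  circuitBreakerActive: false,
};
const afterPause = applyAction(initial, "pause");
```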

Guardian is model-agnostic and can work with any AI model capable of analyzing smart contracts.
This hackathon implementation uses Google Gemini (2.5 Flash).
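Model-agnosticism only requires that every model return a verdict the workflow can validate. A minimal sketch of such a structured verdict and its parsing, assuming a hypothetical JSON schema (not the project's actual format):

```typescript
// Hypothetical structured verdict expected back from any AI model.
interface AuditVerdict {
  severity: "none" | "low" | "medium" | "high" | "critical";
  finding: string;
}

const SEVERITIES = ["none", "low", "medium", "high", "critical"];

// Parse and validate a raw model response; throwing on malformed output
// ensures a garbled model reply can never trigger an on-chain action.
function parseVerdict(raw: string): AuditVerdict {
  const parsed = JSON.parse(raw);
  if (!SEVERITIES.includes(parsed.severity) || typeof parsed.finding !== "string") {
    throw new Error("malformed verdict");
  }
  return { severity: parsed.severity, finding: parsed.finding };
}

// Only a critical finding should trigger emergency protection.
const isCritical = (v: AuditVerdict): boolean => v.severity === "critical";

const sample = parseVerdict('{"severity":"critical","finding":"reentrancy in withdraw()"}');
```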


Architecture

+-------------------+
|       User        |
+---------+---------+
          |
          | askAudit()
          v
+-------------------+
|    GuardianHub    |
|   (Smart Contract)|
+---------+---------+
          |
          | AuditRequested event
          v
+-------------------+
|    CRE Workflow   |
+---------+---------+
          |
          | Fetch verified source code
          | (Etherscan)
          |
          | Analyze contract with AI
          v
+-------------------+
|     AI Result     |
+---------+---------+
          |
          | Signed report
          v
+-------------------+
|    GuardianHub    |
+---------+---------+
          |
          | takeAction()
          v
+-------------------+
| Protected Contract|
+-------------------+

The system is event-driven and autonomous.


Bounty Mechanism

Guardian also includes a bug bounty feature.

Anyone can create a bounty for a contract using GuardianHub.

If a user discovers a critical vulnerability:

  1. They submit a proof of the vulnerability
  2. The proof is sent to the CRE workflow via an HTTP trigger
  3. The AI model analyzes the proof
  4. If the vulnerability is confirmed:
    • the bounty is paid automatically
    • the contract's protection can be triggered

This enables vulnerabilities to be validated programmatically without manual reviewers.
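The validation step can be sketched as a handler for the HTTP-triggered submission, with the AI check stubbed out. The payload shape and function names below are assumptions for illustration, not the project's actual API:

```typescript
// Hypothetical payload of a bounty proof submission.
interface ProofSubmission {
  contractAddress: string;
  bountyId: number;
  proof: string; // e.g. an exploit transcript or PoC description
}

interface ValidationResult {
  confirmed: boolean;
  payBounty: boolean;
  triggerProtection: boolean;
}

// Stand-in for the AI model call; a real workflow would send the proof
// to the model and parse a structured verdict.
function aiConfirmsExploit(proof: string): boolean {
  return proof.includes("reentrancy"); // toy heuristic for the sketch
}

function handleProofSubmission(sub: ProofSubmission): ValidationResult {
  const confirmed = aiConfirmsExploit(sub.proof);
  // Per the flow above: a confirmed proof pays the bounty and may also
  // trigger the contract's protection action.
  return { confirmed, payBounty: confirmed, triggerProtection: confirmed };
}

const result = handleProofSubmission({
  contractAddress: "0x0000000000000000000000000000000000000000",
  bountyId: 1,
  proof: "reentrancy in withdraw() drains the vault",
});
```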


Components

Guardian.sol

A base contract that protocols inherit to enable automated protection.

When a vulnerability is detected, the hub calls:

takeAction();

The protocol defines what that action does by overriding _takeAction().

Example:

function _takeAction() internal override {
    paused = true;
}
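The hub-to-contract interaction can be mimicked with an in-memory TypeScript model (a sketch of the pattern, not the Solidity implementation; the hub-only access check is an assumption about how the base contract guards takeAction):

```typescript
// In-memory model of the Guardian base-contract pattern: only the hub
// may call takeAction(), which runs the protocol-defined handler.
class GuardianSim {
  paused = false;
  constructor(private readonly hub: string) {}

  // Mirrors the hub-only access control a Solidity base would enforce.
  takeAction(caller: string): void {
    if (caller !== this.hub) throw new Error("only hub");
    this._takeAction();
  }

  // The protocol-specific override; here it pauses, as in the example above.
  protected _takeAction(): void {
    this.paused = true;
  }
}

const hub = "0xHub";
const vault = new GuardianSim(hub);
vault.takeAction(hub); // hub-triggered emergency action
```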

GuardianHub.sol

Central coordination contract.

Responsibilities:

  • receive audit requests
  • manage bug bounties
  • receive CRE reports
  • trigger protection actions
  • pay bounty rewards

CRE Workflow

The Chainlink CRE workflow performs two main tasks.

AI Audit

  1. listens for AuditRequested events
  2. fetches verified source code from Etherscan
  3. sends the code to an AI model
  4. receives vulnerability analysis
  5. submits the signed result on-chain
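The five steps above can be sketched end to end with the network and AI calls stubbed out, so the control flow is runnable locally. Function names are illustrative, not the project's actual CRE API:

```typescript
interface AuditRequestedEvent { contractAddress: string; }

// Stub: a real workflow would call the Etherscan verified-source API.
const fetchVerifiedSource = (addr: string): string =>
  `// verified source of ${addr}`;

// Stub: a real workflow would send the source to the AI model.
const analyzeWithAI = (source: string): "critical" | "none" =>
  source.includes("delegatecall") ? "critical" : "none";

// Stub: a real workflow would submit a signed report on-chain.
const submittedReports: { contractAddress: string; severity: string }[] = [];
const submitSignedReport = (addr: string, severity: string): void => {
  submittedReports.push({ contractAddress: addr, severity });
};

// Steps 1-5: event -> fetch source -> AI analysis -> signed on-chain report.
function handleAuditRequested(ev: AuditRequestedEvent): void {
  const source = fetchVerifiedSource(ev.contractAddress);
  const severity = analyzeWithAI(source);
  submitSignedReport(ev.contractAddress, severity);
}

handleAuditRequested({ contractAddress: "0xVault" });
```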

Bounty Validation

  1. receives a proof submission via HTTP trigger
  2. analyzes the vulnerability proof using the AI model
  3. confirms whether the vulnerability is real
  4. if confirmed, submits a report to GuardianHub
  5. GuardianHub pays the bounty and can trigger protection

Example Contracts

The repository includes three demo contracts.

SafeGuardianVault

Secure implementation.

Expected behavior:

  • AI reports no critical vulnerability
  • contract continues operating normally

SafeGuardianVaultPost08

Contains a reentrancy-like pattern, but no funds are at risk: because the contract is compiled with Solidity 0.8+, arithmetic underflow/overflow reverts by default.

Expected behavior:

  • AI reports no critical vulnerability
  • contract continues operating normally

VulnerableVaultPre08

Contains a real vulnerability.

Expected behavior:

  • AI detects the issue
  • GuardianHub triggers takeAction()
  • contract pauses automatically

Running Tests

The contracts use Foundry.

Run the test suite:

forge test

Tests cover:

  • audit requests
  • vulnerability detection
  • protection trigger
  • bounty logic

Deployed Contract (Sepolia)

Running the CRE Workflow

Install dependencies:

cd cre-workflow
bun install

Run a local simulation:

cre workflow simulate guardian

The workflow will:

  1. detect the audit request
  2. analyze the contract with the AI model
  3. submit the signed report

Future Development

Several improvements could turn Guardian into a full security product.

Pricing Model

Currently audits can be requested for free.
In a production environment, audits would need to be priced based on computational cost.

Possible pricing dimensions: number of contracts analyzed, code complexity, depth of analysis, AI tokens consumed.
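Those dimensions could feed a simple quote function; the weights and formula below are invented purely for illustration:

```typescript
// Toy estimator over the pricing dimensions listed above.
// All rates and multipliers are illustrative assumptions.
interface AuditJob {
  contractCount: number;
  linesOfCode: number;      // rough proxy for code complexity
  analysisDepth: 1 | 2 | 3; // 1 = quick scan, 3 = deep audit
  estimatedAiTokens: number;
}

function estimateAuditCostUsd(job: AuditJob): number {
  const base = 5 * job.contractCount;          // assumed flat fee per contract
  const complexity = 0.01 * job.linesOfCode;   // assumed per-line rate
  const tokens = job.estimatedAiTokens * 0.000002; // assumed per-token rate
  return (base + complexity + tokens) * job.analysisDepth;
}

const quote = estimateAuditCostUsd({
  contractCount: 2,
  linesOfCode: 800,
  analysisDepth: 2,
  estimatedAiTokens: 50_000,
});
```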

This could evolve into a Chainlink-native security service, where users pay for automated security analysis.


Custom Security Models

The system could benefit from AI models specifically trained for smart contract security.

Future models could specialize in:

  • vulnerability detection
  • exploit pattern recognition
  • cross-contract dependency analysis
  • exploit validation for bounty submissions

Multi-Contract Audits

Real audits rarely involve a single contract.

Future versions should support:

  • defining an in-scope contract list
  • analyzing cross-contract interactions
  • identifying vulnerabilities involving multiple contracts

Continuous Security Monitoring

Guardian introduces an interesting property: security improves over time as AI models improve. A vulnerability that current models cannot detect today may be detectable later without requiring any changes to the deployed contracts. This effectively creates a live-upgrading security layer.


Next Steps

Planned improvements for the project:

  • Implement and fully test the bounty workflow
  • Consumer hub contract to route reports to different consumer contracts (AI guardian, user bounty, etc.)
  • Add a pricing mechanism for audit requests
  • Support multi-contract audit scopes
  • Store audit reports on IPFS with on-chain references
  • Improve exploit validation for bounty submissions
  • Introduce AI + human hybrid validation for bounty payouts

A realistic model would be an AI-assisted judge system where AI performs the first validation pass and human reviewers confirm the result. Such a system could even evolve into a shared judging infrastructure for external security challenges and bug bounty programs.
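That two-pass flow can be sketched as a small state machine (states and method names are hypothetical):

```typescript
// Sketch of the AI-assisted judge: AI performs the first validation pass,
// and a human reviewer must confirm before the bounty is paid.
type JudgeState = "submitted" | "aiApproved" | "aiRejected" | "paid" | "rejected";

class HybridJudge {
  state: JudgeState = "submitted";

  aiReview(aiSaysValid: boolean): void {
    if (this.state !== "submitted") throw new Error("already reviewed");
    this.state = aiSaysValid ? "aiApproved" : "aiRejected";
  }

  // Payout requires both the AI pass and an explicit human confirmation.
  humanReview(humanConfirms: boolean): void {
    if (this.state !== "aiApproved") throw new Error("needs AI approval first");
    this.state = humanConfirms ? "paid" : "rejected";
  }
}

const judge = new HybridJudge();
judge.aiReview(true);
judge.humanReview(true);
```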


Security Considerations

AI-assisted security systems introduce several considerations:

  • AI models may produce false positives
  • AI models may produce false negatives
  • exploit validation must be handled carefully
  • emergency actions must be designed to avoid abuse

Why This Matters

Guardian demonstrates how AI + CRE infrastructure can enable a new security primitive: autonomous smart contract defense.

Protocols can integrate a protection layer that reacts immediately when a critical vulnerability is detected.


Disclaimer

This project is a hackathon prototype and not production ready.
