Guardian is a system that uses AI + Chainlink Runtime Environment (CRE) to detect critical vulnerabilities in smart contracts and trigger on-chain emergency protection.
Instead of relying only on manual audits or delayed bug bounty reviews, contracts can react automatically when a critical vulnerability is detected.
Smart contract security today has several limitations:
- Security monitoring is not continuous
- Bug bounty validation requires human reviewers
- Protocols cannot react automatically when a vulnerability is identified
Guardian introduces AI-assisted autonomous protection.
- Anyone can request an AI audit for a deployed contract
- The system analyzes the contract source code using an AI security model
- If a critical vulnerability is detected the contract automatically executes an emergency protection action
Examples of protection actions:
- pause the protocol
- disable withdrawals
- activate a circuit breaker
Guardian is model-agnostic and can work with any AI model capable of analyzing smart contracts.
This hackathon implementation uses Google Gemini (gemini-2.5-flash).
+-------------------+
|       User        |
+---------+---------+
          |
          | askAudit()
          v
+-------------------+
|    GuardianHub    |
| (Smart Contract)  |
+---------+---------+
          |
          | AuditRequested event
          v
+-------------------+
|   CRE Workflow    |
+---------+---------+
          |
          | Fetch verified source code
          |   (Etherscan)
          |
          | Analyze contract with AI
          v
+-------------------+
|     AI Result     |
+---------+---------+
          |
          | Signed report
          v
+-------------------+
|    GuardianHub    |
+---------+---------+
          |
          | takeAction()
          v
+-------------------+
| Protected Contract|
+-------------------+
The system is event-driven and autonomous.
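The entry point of this flow can be sketched in Solidity. Note that only the function name askAudit() and the deployed hub address appear in this document; the parameter list and the interface shape below are assumptions.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical interface: only the name askAudit() is given in this
// document; the parameter list is an assumption.
interface IGuardianHub {
    function askAudit(address target) external;
}

contract AuditRequester {
    // GuardianHub deployment listed later in this document
    IGuardianHub public constant HUB =
        IGuardianHub(0xEae255e1E6d37CBC2b79caf1C9beb2206fD51904);

    // Ask the hub to audit `target`; this emits the AuditRequested
    // event that the CRE workflow listens for.
    function requestAudit(address target) external {
        HUB.askAudit(target);
    }
}
```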
Guardian also includes a bug bounty feature.
Anyone can create a bounty for a contract using GuardianHub.
If a user discovers a critical vulnerability:
- the user submits a proof of the vulnerability
- the proof is sent to the CRE workflow via an HTTP trigger
- the AI model analyzes the proof
- if the vulnerability is confirmed:
  - the bounty is paid automatically
  - contract protection can be triggered
This enables vulnerabilities to be validated programmatically without manual reviewers.
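On the contract side, the bounty lifecycle might look like the following sketch. createBounty and payBounty are hypothetical names with assumed parameters; the document only states that GuardianHub manages bounties and pays rewards.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical bounty surface of GuardianHub; names and parameters
// are assumptions for illustration only.
interface IGuardianHubBounty {
    // Fund a bounty for a target contract
    function createBounty(address target) external payable;

    // Called once the CRE workflow confirms a submitted proof;
    // pays the hunter and may also trigger the target's protection
    function payBounty(address target, address hunter) external;
}
```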
A base contract that protocols inherit to enable automated protection.
When a vulnerability is detected, the hub calls takeAction(). The protocol defines what that action does.
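A minimal sketch of such a base contract follows. The contract name, constructor, and hub-only check are assumptions; the document only specifies the external takeAction() entry point and the overridable internal action.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

abstract contract GuardianProtected {
    address public immutable guardianHub;

    constructor(address hub) {
        guardianHub = hub;
    }

    // Entry point called by GuardianHub when a critical
    // vulnerability is confirmed
    function takeAction() external {
        require(msg.sender == guardianHub, "only hub");
        _takeAction();
    }

    // Each protocol overrides this with its own emergency response
    function _takeAction() internal virtual;
}
```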
Example:

function _takeAction() internal override {
    paused = true;
}

GuardianHub is the central coordination contract.
Responsibilities:
- receive audit requests
- manage bug bounties
- receive CRE reports
- trigger protection actions
- pay bounty rewards
The Chainlink CRE workflow performs two main tasks.

Audit flow:
- listens for AuditRequested events
- fetches verified source code from Etherscan
- sends the code to an AI model
- receives the vulnerability analysis
- submits the signed result on-chain

Bounty flow:
- receives a proof submission via HTTP trigger
- analyzes the vulnerability proof using the AI model
- confirms whether the vulnerability is real
- if confirmed, submits a report to GuardianHub
- GuardianHub then pays the bounty and can trigger protection
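The final steps can be sketched on-chain. onReport, the forwarder check, and the report fields below are assumptions about how a CRE report reaches the hub, not the actual implementation.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

interface IProtected {
    function takeAction() external;
}

contract GuardianHubSketch {
    // Address allowed to deliver CRE reports (assumption)
    address public immutable creForwarder;

    constructor(address forwarder) {
        creForwarder = forwarder;
    }

    // Hypothetical report callback: only the CRE forwarder may call
    // it, and a confirmed critical finding triggers protection.
    function onReport(address target, bool critical) external {
        require(msg.sender == creForwarder, "only CRE");
        if (critical) {
            IProtected(target).takeAction();
        }
    }
}
```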
The repository includes three demo contracts.

SafeGuardianVault: secure implementation.
Expected behavior:
- AI reports no critical vulnerability
- contract continues operating normally

SafeGuardianVaultPost08: contains a reentrancy pattern, but no funds are at risk, because it is compiled with Solidity 0.8+, which checks for underflow/overflow by default.
Expected behavior:
- AI reports no critical vulnerability
- contract continues operating normally

VulnerableVaultPre08: contains a real vulnerability.
Expected behavior:
- AI detects the issue
- GuardianHub triggers takeAction()
- contract pauses automatically
The contracts use Foundry.
Run the test suite:
forge test
Tests cover:
- audit requests
- vulnerability detection
- protection trigger
- bounty logic
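As an illustration, a Foundry test for the protection trigger could look like the sketch below. DemoVault and the test names are hypothetical stand-ins, not the repository's actual tests.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "forge-std/Test.sol";

// Minimal stand-in for a protected contract, for illustration only
contract DemoVault {
    address public immutable hub;
    bool public paused;

    constructor(address _hub) {
        hub = _hub;
    }

    function takeAction() external {
        require(msg.sender == hub, "only hub");
        paused = true;
    }
}

contract ProtectionTriggerTest is Test {
    function test_HubCanPause() public {
        address hub = address(0xBEEF);
        DemoVault vault = new DemoVault(hub);

        vm.prank(hub); // impersonate the hub for the next call
        vault.takeAction();

        assertTrue(vault.paused());
    }

    function test_NonHubCannotPause() public {
        DemoVault vault = new DemoVault(address(0xBEEF));
        vm.expectRevert("only hub");
        vault.takeAction();
    }
}
```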
Deployed contracts:
- GuardianHub: 0xEae255e1E6d37CBC2b79caf1C9beb2206fD51904
- SafeGuardianVault: 0x8cbBdEafFBc8B12bf632151ea8923598430dDF71 (no funds at risk)
- SafeGuardianVaultPost08: 0xf10f310d5Cae2232e780b7c8F4FDdc1E4b84Fd88 (no funds at risk)
- VulnerableVaultPre08: 0x53A76dB61C3E20775253b492DaE3D92CCA146123 (funds at risk)
Install dependencies:
cd cre-workflow
bun install
Run a local simulation:
cre workflow simulate guardian
The workflow will:
- detect the audit request
- analyze the contract with the AI model
- submit the signed report
Several improvements could turn Guardian into a full security product.
Currently audits can be requested for free.
In a production environment, audits would need to be priced based on computational cost.
Possible pricing dimensions: number of contracts analyzed, code complexity, depth of analysis, AI tokens consumed.
This could evolve into a Chainlink-native security service, where users pay for automated security analysis.
The system could benefit from AI models specifically trained for smart contract security.
Future models could specialize in:
- vulnerability detection
- exploit pattern recognition
- cross-contract dependency analysis
- exploit validation for bounty submissions
Real audits rarely involve a single contract.
Future versions should support:
- defining an in-scope contract list
- analyzing cross-contract interactions
- identifying vulnerabilities involving multiple contracts
Guardian introduces an interesting property: security improves over time as AI models improve. A vulnerability that current models cannot detect today may be detectable later without requiring any changes to the deployed contracts. This effectively creates a live-upgrading security layer.
Planned improvements for the project:
- Implement and fully test the bounty workflow
- Consumer hub contract to redirect reports to different consumer contracts (AI guardian, user bounty, etc.)
- Add a pricing mechanism for audit requests
- Support multi-contract audit scopes
- Store audit reports on IPFS with on-chain references
- Improve exploit validation for bounty submissions
- Introduce AI + human hybrid validation for bounty payouts
A realistic model would be an AI-assisted judge system where AI performs the first validation pass and human reviewers confirm the result. Such a system could even evolve into a shared judging infrastructure for external security challenges and bug bounty programs.
AI-assisted security systems introduce several considerations:
- AI models may produce false positives
- AI models may produce false negatives
- exploit validation must be handled carefully
- emergency actions must be designed to avoid abuse
Guardian demonstrates how AI + CRE infrastructure can enable a new security primitive: autonomous smart contract defense.
Protocols can integrate a protection layer that reacts immediately when a critical vulnerability is detected.
This project is a hackathon prototype and not production ready.