An AI-powered security analysis platform that identifies logic flaws and vulnerabilities in code, with a particular focus on AI-generated code.
# Clone the repository
git clone https://github.com/aisec/platform.git
cd platform
# Option 1: Docker Compose (Recommended)
cp .env.example .env
# Edit .env with your API keys
docker-compose up -d
# Option 2: Local Development
make dev
make services # Start all microservices

# Local CLI scanning
./bin/aisec scan examples/
# Docker-based scanning
curl -X POST "http://localhost:8004/analyze" \
-H "Content-Type: application/json" \
-d '{
"file_paths": ["examples/cross_file_vulnerability.py"],
"enable_llm": true,
"enable_reachability": true
}'
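
The /analyze endpoint can also be called from a script. Below is a minimal Python sketch: the request fields mirror the curl example above, but the response handling is an assumption, since the exact response schema is not documented here.

```python
# Minimal client for the /analyze endpoint shown above (sketch, not an official client).
# Assumes the hybrid analysis service is listening on localhost:8004 as in the curl example;
# the response is printed as-is because its exact schema is not reproduced in this README.
import requests

def analyze_files(file_paths, enable_llm=True, enable_reachability=True):
    payload = {
        "file_paths": file_paths,
        "enable_llm": enable_llm,
        "enable_reachability": enable_reachability,
    }
    resp = requests.post("http://localhost:8004/analyze", json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(analyze_files(["examples/cross_file_vulnerability.py"]))
```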
# GitHub Integration
# 1. Create GitHub App with webhook URL: https://your-domain.com/webhook
# 2. Set GITHUB_TOKEN and GITHUB_WEBHOOK_SECRET in .env
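#    For example, the relevant .env entries might look like (placeholder values only):
#      GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
#      GITHUB_WEBHOOK_SECRET=change-me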
# 3. Open a PR to trigger automatic security analysis

- Go: CLI and orchestration services
- Python: AI analysis engine with LangGraph
- Tree-sitter: Multi-language AST parsing
- Postgres + pgvector: Knowledge graph storage
- CLI Tool (cmd/aisec/): Command-line interface for scanning and authentication
- Semgrep Service (services/semgrep_service.py): Deterministic security rules with AI-generated patterns
- Knowledge Graph (services/knowledge_graph.py): Postgres + pgvector for code relationships
- LLM Analysis (services/real_llm_analysis.py): Claude 3.5 Sonnet with self-critique
- Hybrid Analysis (services/hybrid_analysis.py): Signal fusion and intelligent triage
- GitHub Integration (services/github_integration.py): PR comments and check runs
- Parser Service (services/tree_sitter_service.py): AST extraction for context
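
The Hybrid Analysis service above fuses signals from the deterministic and LLM analyzers before triage. As a rough illustration of the idea only (not the actual code in services/hybrid_analysis.py), deduplication can key findings by file, line, and vulnerability class and merge the contributing sources:

```python
# Illustrative signal fusion: merge Semgrep and LLM findings that point at the same
# file/line/vulnerability class, recording every source that reported the issue.
# This sketches the concept only; services/hybrid_analysis.py may work differently.
def fuse_findings(semgrep_findings, llm_findings):
    merged = {}
    for source, findings in (("semgrep", semgrep_findings), ("llm", llm_findings)):
        for finding in findings:
            key = (finding["file"], finding["line"], finding["vuln_class"])
            entry = merged.setdefault(key, {**finding, "sources": []})
            entry["sources"].append(source)
            # Keep the highest confidence reported by any source for this finding.
            entry["confidence"] = max(entry.get("confidence", 0.0),
                                      finding.get("confidence", 0.0))
    return list(merged.values())
```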
- CLI Skeleton: aisec auth login, aisec scan .
- Tree-sitter Parser: Basic Python parsing with function/import extraction
- Context Extractor: Identifies functions, imports, and API routes
- Mock Security Analysis: Detects common vulnerability patterns
- Configuration System: YAML-based policy configuration
- Semgrep Integration: Custom AI-generated code rules with 5 specialized patterns
- Knowledge Graph: Postgres + pgvector for code relationship tracking
- Reachability Engine: Cross-file vulnerability analysis
- Real LLM Integration: Claude 3.5 Sonnet with structured outputs
- LangGraph Self-Critique: AI validation loop for improved accuracy
- Hybrid Analysis: Signal deduplication and intelligent triage
- GitHub Integration: PR comments, check runs, and suggested changes
- Docker Compose: Full microservices deployment
- Advanced Risk Scoring: Business-aware risk formula with sensitivity analysis (see the sketch after this list)
- Policy-as-Code Engine: YAML-based automated triage decisions
- Real Tree-sitter: Production AST parsing with symbol resolution
- Sensitive Data Detection: Automatic identification of restricted data patterns
- Exploitability Simulation: LLM-generated exploit commands for critical findings
- Slack/Teams Integration: Human-in-the-loop triage for uncertain findings
- CLI Explain Command: Interactive teacher-mode explanations for developers
- Code Snippet Optimization: Context-aware LLM prompt optimization
- Fix Generation: AI-powered vulnerability remediation with code patches
- VS Code Extension: Real-time security linting and suggestions
- Enterprise Features: SSO, team management, advanced reporting
- Multi-language Support: Java, Go, Rust, JavaScript frameworks
- Advanced Analytics: Security metrics, trends, and compliance reporting
- Custom Rule Builder: Visual interface for creating security rules
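
The advanced risk scoring listed above weighs each finding by data sensitivity and reachability. The sketch below shows one way such a business-aware formula could be composed; it reuses the sensitivity multipliers from the aisec.yaml example later in this README, but the actual formula in services/risk_scoring.py may differ.

```python
# Illustrative business-aware risk score: base severity scaled by data sensitivity
# and reachability, capped at 100. The multipliers match the aisec.yaml example below;
# the real services/risk_scoring.py formula is not reproduced here.
SEVERITY_BASE = {"low": 20, "medium": 45, "high": 70, "critical": 90}
SENSITIVITY_MULTIPLIERS = {"public": 0.5, "internal": 1.0, "confidential": 1.5, "restricted": 2.0}

def risk_score(severity: str, data_sensitivity: str, reachability: float) -> float:
    base = SEVERITY_BASE[severity]
    multiplier = SENSITIVITY_MULTIPLIERS[data_sensitivity]
    # Reachability in [0, 1]: unreachable code dampens the score, a proven path boosts it.
    return min(100.0, base * multiplier * (0.5 + reachability))

# Example: a critical finding on restricted payment data with a confirmed call path.
print(risk_score("critical", "restricted", 0.8))  # -> 100.0 (capped)
```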
The platform focuses on AI-specific security issues:
- IDOR Vulnerabilities: Missing authorization in destructive operations
- Injection Flaws: SQL/command injection in LLM-generated code
- Insecure AI Agents: Dangerous tools without proper safeguards
- Prompt Injection: Missing input validation in AI systems
- Hallucinated APIs: Non-existent libraries and methods
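
As a concrete illustration of the prompt injection pattern, the sketch below shows untrusted input flowing straight into an LLM prompt; the call_llm helper is a hypothetical stand-in for whichever client the generated code uses. The IDOR example that follows is the pattern exercised in the demo further down.

```python
# ❌ Vulnerable sketch: untrusted input flows straight into an LLM prompt (prompt injection).
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM client call.
    return f"<model response to: {prompt[:40]}...>"

def summarize_ticket(user_supplied_text: str) -> str:
    # No validation, delimiting, or sanitization of the untrusted text: an attacker can
    # append "ignore previous instructions..." and steer the model's behaviour.
    prompt = f"Summarize this support ticket:\n{user_supplied_text}"
    return call_llm(prompt)
```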
# ❌ Vulnerable: No authorization check
@app.delete("/users/{user_id}")
def delete_user(user_id: int):
    query = f"DELETE FROM users WHERE id = {user_id}"
    db.execute(query)  # IDOR vulnerability (and SQL injection via the f-string)

$ curl -X POST "http://localhost:8004/analyze" \
-H "Content-Type: application/json" \
-d '{
"file_paths": ["examples/sensitive_payment_data.py"],
"enable_llm": true,
"enable_reachability": true
}'
🚨 Found 1 security issue:
1. [critical] Payment data exposure without authorization
📁 sensitive_payment_data.py:8
💡 Accessing restricted payment data via public API without proper controls
🔧 BLOCK: This violates policy "Protect Restricted Data"
📊 Risk Score: 92.5/100 | Policy: BLOCK | Sources: [semgrep, llm, risk_scoring, policy_engine]
💬 Exploit: curl -X GET 'http://localhost:8000/admin/payments'

Create an aisec.yaml file to customize scanning:
# AISec Configuration File - Week 3 Policy-as-Code
# This file defines policies for security scanning and automated decisions

# Policy-as-Code rules
policies:
  # Protect restricted data - BLOCK any access without proper controls
  - name: "Protect Restricted Data"
    description: "Block any access to restricted/sensitive data without proper controls"
    if: "data_sensitivity == restricted && reachability > 0.5"
    then: "BLOCK"
    priority: 10
    enabled: true

  # Payment data protection
  - name: "Payment Data Protection"
    description: "Block any issues with payment/financial data"
    if: "sensitive_patterns contains payment && risk_level in [medium, high, critical]"
    then: "BLOCK"
    priority: 15
    enabled: true

# Risk Scoring settings
risk_scoring:
  # Sensitivity multipliers
  sensitivity_multipliers:
    restricted: 2.0
    confidential: 1.5
    internal: 1.0
    public: 0.5

# Human-in-the-loop settings
human_in_the_loop:
  enabled: true
  triage_conditions:
    - "risk_level == high && confidence < 0.7"
    - "sensitive_patterns contains payment && confidence < 0.9"

# Run Go tests
go test ./...
# Run Python tests
python -m pytest tests/
# Run all tests
make test

# Start all services with Docker Compose
docker-compose up -d
# Start individual services locally
make run-semgrep # Port 8002
make run-kg # Port 8003
make run-llm # Port 8005
make run-hybrid # Port 8004
make run-risk # Port 8007
make run-policy # Port 8008
make run-ast # Port 8009
make run-slack # Port 8010
# Test Week 3 features
make test-risk # Test advanced risk scoring
make test-policy # Test policy engine
make test-ast # Test AST symbol resolution
make test-slack # Test Slack triage
# Week 3 demo
make demo-week3
# CLI explain command
./bin/aisec explain idor-001 --teacher --verbose

SEC-OSS/
├── cmd/aisec/                  # CLI application
├── internal/                   # Internal packages
│   ├── auth/                   # Authentication logic
│   ├── cli/                    # CLI commands (including explain)
│   ├── scanner/                # Core scanning engine
│   └── parser/                 # AST parsing
├── pkg/                        # Public packages
│   ├── ast/                    # AST utilities
│   └── config/                 # Configuration
├── services/                   # Python microservices
│   ├── semgrep_service.py      # Deterministic scanning
│   ├── knowledge_graph.py      # Code relationships
│   ├── real_llm_analysis.py    # Claude 3.5 Sonnet
│   ├── hybrid_analysis.py      # Signal fusion (updated)
│   ├── risk_scoring.py         # Business-aware risk scoring
│   ├── policy_engine.py        # Policy-as-Code engine
│   ├── real_ast_parser.py      # Production AST parsing
│   ├── slack_integration.py    # Human-in-the-loop triage
│   ├── github_integration.py   # PR automation
│   └── tree_sitter_service.py  # AST extraction
├── examples/                   # Example vulnerable code
│   ├── cross_file_vulnerability.py
│   ├── database_utils.py
│   ├── vulnerable_code.py
│   └── sensitive_payment_data.py  # Week 3 demo
├── scripts/                    # Database initialization
├── docker-compose.yml          # Full stack deployment (updated)
├── .env.example                # Environment template
└── aisec.yaml                  # Configuration file (updated)
| Metric | Target (v1) | Why it matters |
|---|---|---|
| MTTR (mean time to remediation) | < 1 hour | Measures whether suggested fixes actually get applied |
| False Positive Rate | < 5% | Developer adoption |
| "Vibe" Accuracy | > 80% | Explanation quality |
| Scan Latency | < 30s | Developer workflow |
- Language: Go (CLI/Orchestrator) + Python (AI/Analysis)
- LLM: Claude 3.5 Sonnet (Analysis) + GPT-4o-mini (Classification)
- Database: Supabase (Postgres + Vector + Auth)
- Parsing: Tree-sitter (Multi-language AST)
- Graph: Postgres (Apache AGE) / Neo4j
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
MIT License - see LICENSE file for details.
Week 4 Focus: Deploy AI-powered fix generation and VS Code extension for real-time security feedback.
Week 3 Achievement: ✅ "The Smart Guardrail" - Successfully implemented the Friday goal:
- ✅ Advanced risk scoring with business impact analysis
- ✅ Policy-as-Code engine with automated triage decisions
- ✅ Real AST parsing with symbol resolution
- ✅ Sensitive data detection and exploitability simulation
- ✅ Human-in-the-loop triage via Slack/Teams integration
- ✅ Interactive CLI explain command for developer education
Platform Status: 🧠 Intelligent Security Brain - The system now makes autonomous triage decisions, understands business impact, and provides intelligent guardrails that know when to block and when to stay silent.