If you discover a security vulnerability, please report it responsibly:
- DO NOT open a public GitHub issue
- Email: cameroncull5@gmail.com
- Include: description, steps to reproduce, impact assessment
- Expected response time: within 48 hours
- Sandboxed tools — non-coding AI tools run in isolated directories with API keys stripped
- Shell blocklist — dangerous commands and injection patterns are blocked
- Code execution limits — max 20 code runs per session
- PIN authentication — optional PIN lock with SHA-256 hashing
- Audit logging — all security events are logged to `~/.codegpt/security/audit.log`
- Auto-update verification — SHA-256 checksums are verified before replacing binaries
- Dependency bounds — version ranges prevent installing major breaking versions
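The shell blocklist and injection-pattern check can be illustrated with a minimal sketch. The command names and regex patterns below are hypothetical examples; the actual blocklist lives in the CodeGPT source:

```python
import re

# Hypothetical entries -- the real blocklist is defined by CodeGPT itself.
BLOCKED_COMMANDS = {"rm", "mkfs", "dd", "shutdown"}
INJECTION_PATTERNS = [
    re.compile(r"\$\("),  # $(...) command substitution
    re.compile(r"`"),     # backtick substitution
    re.compile(r"\|\s*sh\b"),  # piping into a shell
]

def is_blocked(command: str) -> bool:
    """Return True if the command hits the blocklist or an injection pattern."""
    stripped = command.strip()
    first_word = stripped.split()[0] if stripped else ""
    if first_word in BLOCKED_COMMANDS:
        return True
    return any(p.search(command) for p in INJECTION_PATTERNS)
```

A blocked command is rejected before it ever reaches the command router.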
- Local machine compromise — if an attacker has local access, they can modify config files
- Ollama model attacks — malicious models could generate harmful outputs
- Network MITM — Ollama communication is HTTP (not HTTPS) on localhost
- Full sandbox escape — coding tools have file system access by design
- GitHub Actions pinned to commit SHAs
- Release artifacts include SHA256 checksums
- Dependencies use version upper bounds
- npm published with 2FA required
- Install scripts verify checksums when available
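Checksum verification of release artifacts follows the usual pattern: stream the file through SHA-256 and compare against the published digest. A minimal sketch (function names are illustrative, not CodeGPT's actual API):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected: str) -> bool:
    """Compare the computed digest against the published checksum."""
    return sha256_of(path) == expected.strip().lower()
```

If the digests differ, the update is discarded rather than installed.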
| Version | Supported |
|---|---|
| 1.x | Yes |
| < 1.0 | No |
Core dependencies (audited):
- `requests` — HTTP client
- `rich` — Terminal UI
- `prompt-toolkit` — Input handling
Optional:
- `textual` — TUI app
- `flask` — Web app
- `python-telegram-bot` — Telegram bot
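The dependency-bounds policy mentioned above pins each package below its next major version, so a breaking release cannot be installed silently. A hypothetical requirements fragment (the actual pins live in the project's own dependency files):

```
requests>=2.31,<3.0
rich>=13.0,<14.0
prompt-toolkit>=3.0,<4.0
```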
```
User Input
    |
    v
[Input Validation] --> Shell blocklist + injection pattern check
    |
    v
[Command Router] --> Slash commands, AI agents, tool launchers
    |
    v
[Ollama API] --> HTTP to localhost:11434 (or remote server)
    |
    v
[Tool Sandbox] --> Isolated dirs, stripped env vars, audit logged
```
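The last stage, the tool sandbox, can be sketched as a subprocess launched in an isolated directory with credential-like environment variables stripped. The key markers and function names below are assumptions for illustration, not CodeGPT's actual implementation:

```python
import os
import subprocess

# Hypothetical markers -- the real list of stripped variables is
# defined by CodeGPT itself.
SENSITIVE_MARKERS = ("API_KEY", "TOKEN", "SECRET")

def sandboxed_env() -> dict:
    """Copy the environment, dropping variables that look like credentials."""
    return {
        k: v for k, v in os.environ.items()
        if not any(m in k.upper() for m in SENSITIVE_MARKERS)
    }

def run_tool(cmd: list[str], workdir: str) -> subprocess.CompletedProcess:
    """Run a non-coding tool in an isolated directory with secrets stripped."""
    return subprocess.run(
        cmd, cwd=workdir, env=sandboxed_env(),
        capture_output=True, text=True,
    )
```

Even if a tool is compromised, it starts without the keys it would need to exfiltrate.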