Only the main branch is supported. There are no versioned releases yet.
Email the maintainer via the GitHub profile, or use GitHub's private vulnerability reporting. Please do not open public issues for sandbox-escape or auth-related concerns.
openforge exposes a folder on disk to an LLM. The risks:
- Path traversal via the AI's tool arguments. Mitigation: `lib/workspace.ts::safe()` resolves every path against `WORKSPACE_DIR` and rejects anything that escapes. Tested in `lib/workspace.test.ts`. If you find a bypass, this is a high-priority security issue.
- Path traversal via the HTTP API. `/api/files?path=...` passes the user input through `safe()` before reading or writing. Same protection.
- Prompt injection through file contents. A file the AI reads can contain instructions that override the system prompt. The mitigation is the 8-iteration cap and the small, well-defined tool surface (only `list_files`, `read_file`, `write_file`).
- No authentication. The dev server is open. Production deployments must sit behind reverse-proxy auth (basic auth in nginx, Cloudflare Access, etc.). Documented in the README.
- The contents of files the AI writes. If you ask it to write malicious code, it will. That's not a security issue; that's the user pulling the trigger.
- The API key in `.env`. Standard file-system permissions are the right protection.
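For reference, the path-traversal check described above can be sketched roughly like this. This is an illustrative sketch, not the actual `lib/workspace.ts` source: the `WORKSPACE_DIR` value and the exact error handling here are assumptions, but the core idea (resolve, then verify the result stays under the workspace root) matches the mitigation described.

```typescript
import * as path from "node:path";

// Illustrative workspace root; the real value lives in lib/workspace.ts.
const WORKSPACE_DIR = path.resolve("/srv/openforge/workspace");

// Resolve a model- or user-supplied path against WORKSPACE_DIR and
// reject anything that escapes the workspace root.
function safe(p: string): string {
  const resolved = path.resolve(WORKSPACE_DIR, p);
  // path.relative yields a ".."-prefixed (or absolute) result exactly
  // when `resolved` lies outside WORKSPACE_DIR.
  const rel = path.relative(WORKSPACE_DIR, resolved);
  if (rel.startsWith("..") || path.isAbsolute(rel)) {
    throw new Error(`Path escapes workspace: ${p}`);
  }
  return resolved;
}
```

The key point is that the check runs on the *resolved* path, so `../` sequences and absolute paths in tool arguments or `?path=` query strings are caught after normalization, not by string matching on the raw input.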
If you find a vulnerability outside this list, please report it.