If you discover a security vulnerability in meGPT, please report it responsibly:
DO NOT open a public GitHub issue.
Instead, email: [Create an email or use GitHub Security Advisories]
We'll respond as quickly as possible and work with you to address the issue.
meGPT is a development/experimentation tool meant to run locally. Security considerations:
- Gateway API vulnerabilities
- Prompt injection attacks on the gateway
- Container security issues
- Dependency vulnerabilities
- Code execution risks
- Issues with upstream projects (Ollama, Open WebUI)
- General LLM safety (though we provide red-team tools)
- Physical security of the host machine
- Network security of your local environment
When using meGPT:
-
Change Default Secrets
- Update
WEBUI_SECRET_KEYin.env - Don't use default values in production
- Update
-
Network Isolation
- Don't expose ports to the internet without authentication
- Use firewalls and VPNs if remote access needed
- Consider running in an isolated network
-
Model Safety
- Use red-team tools to test model safety
- Be aware of prompt injection risks
- Review model outputs before sharing
-
Keep Updated
- Regularly update Docker images
- Pull latest versions of models
- Update Python dependencies
-
Monitor Access
- Review logs for suspicious activity
- Limit who can access the services
- Use authentication in Open WebUI
- Models can be prompted to generate harmful content
- Gateway doesn't include authentication by default
- No rate limiting on API endpoints
- Model outputs should not be blindly trusted
We follow responsible disclosure practices:
- You report the issue privately
- We acknowledge within 48 hours
- We work on a fix
- We release the fix
- We credit you (if desired)
- We publicly disclose after fix is available
Security fixes will be:
- Released as soon as possible
- Documented in release notes
- Communicated via GitHub releases
Thank you for helping keep meGPT secure!