The following versions are currently supported with security updates:

| Version | Supported |
| ------- | --------- |
| 0.1.x   | ✅        |
We take security seriously. If you discover a security vulnerability, please do not:

- Open a public GitHub issue for the vulnerability
- Discuss the vulnerability in public forums or on social media

Instead, please:

- Email the maintainers with details about the vulnerability
- Provide a clear description of the issue and its potential impact
- Include steps to reproduce if possible
- Wait for confirmation before disclosing publicly
In your report, please include:

- Type of vulnerability
- Affected framework(s) or component(s)
- Steps to reproduce
- Potential impact
- Suggested fix (if you have one)
Response timeline:

- Initial response: within 48 hours
- Status update: within 7 days
- Fix timeline: depends on severity
  - Critical: immediate (hours to days)
  - High: within 1 week
  - Medium: within 2 weeks
  - Low: next release cycle
API keys:

- Never commit API keys to the repository
- Use environment variables for all API keys (a sketch follows this list)
- See .env.example for proper configuration
- API keys are never logged or stored by this library
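
A minimal sketch of loading keys from the environment rather than hard-coding them; the variable name `OPENAI_API_KEY` and the helper below are illustrative assumptions, not part of this library's API:

```typescript
// Illustrative only: fail fast if a required key is missing from the
// environment. The variable name used below is an assumption.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Typically populated from an uncommitted .env file (see .env.example)
const apiKey = requireEnv("OPENAI_API_KEY");
```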
User input:

- This library passes user input to LLMs
- Always validate and sanitize user input before processing (see the sketch after this list)
- Framework implementations include basic validation
- For production use, implement additional security layers
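
A minimal sketch of application-level validation, independent of whatever basic checks the frameworks perform; the length limit and stripped character ranges are illustrative assumptions:

```typescript
// Illustrative pre-processing before user input reaches an LLM prompt.
// The limit and rejected patterns are assumptions for demonstration.
const MAX_INPUT_LENGTH = 4_000;

function sanitizeUserInput(raw: string): string {
  const input = raw.trim();
  if (input.length === 0) {
    throw new Error("Input must not be empty");
  }
  if (input.length > MAX_INPUT_LENGTH) {
    throw new Error(`Input exceeds ${MAX_INPUT_LENGTH} characters`);
  }
  // Strip control characters that have no place in a prompt.
  return input.replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/g, "");
}
```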
LLM provider data:

- LLM providers may log prompts and responses
- Never send sensitive data to LLM APIs (a redaction sketch follows this list)
- Review your LLM provider's data retention policies
- Consider using self-hosted models for sensitive use cases
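
One hedged approach is a redaction pass before anything leaves your process; the patterns below are illustrative, deliberately incomplete, and no substitute for not sending sensitive data in the first place:

```typescript
// Illustrative redaction pass. Pattern coverage is an assumption and
// intentionally minimal.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\bsk-[A-Za-z0-9]{20,}\b/g, "[REDACTED_API_KEY]"],
  [/\b\d{13,19}\b/g, "[REDACTED_CARD_NUMBER]"],
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[REDACTED_EMAIL]"],
];

function redact(prompt: string): string {
  return REDACTIONS.reduce(
    (text, [pattern, label]) => text.replace(pattern, label),
    prompt,
  );
}
```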
Dependencies:

- Dependabot automatically checks for vulnerable dependencies
- We aim to keep all dependencies up to date
- Review dependency updates before merging
Best practices for users:

- Keep your API keys secure
- Validate user input before passing it to frameworks
- Review LLM outputs before acting on them
- Use appropriate rate limiting in production (see the sketch below)
- Monitor API usage and costs
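
As one sketch of client-side rate limiting for outbound LLM calls (the window size and call budget are arbitrary assumptions; production systems may prefer a dedicated library or provider-side quotas):

```typescript
// Illustrative sliding-window rate limiter for outbound LLM calls.
class RateLimiter {
  private timestamps: number[] = [];

  constructor(
    private readonly maxCalls: number,
    private readonly windowMs: number,
  ) {}

  tryAcquire(): boolean {
    const now = Date.now();
    // Drop calls that have aged out of the window, then check the budget.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxCalls) {
      return false;
    }
    this.timestamps.push(now);
    return true;
  }
}

// e.g. at most 10 LLM calls per minute
const limiter = new RateLimiter(10, 60_000);
if (!limiter.tryAcquire()) {
  throw new Error("Rate limit exceeded; try again later");
}
```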
Best practices for contributors:

- Never commit secrets or API keys
- Use TypeScript strict mode
- Validate all user inputs
- Write tests for security-critical code
- Follow the principle of least privilege
Prompt injection:

- LLMs can be manipulated through prompt injection
- Outputs should not be blindly trusted
- This library does not prevent all prompt injection attacks
- Users must implement application-level security (a sketch follows this list)
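
A minimal sketch of treating model output as data rather than instructions, using a hypothetical tool-dispatch step; the allow-list, action names, and shapes are illustrative assumptions, not part of this library:

```typescript
// Illustrative guard: never let model output choose arbitrary actions.
// The action names and dispatch shape are hypothetical.
const ALLOWED_ACTIONS = new Set(["summarize", "classify", "translate"]);

interface ProposedAction {
  name: string;
  argument: string;
}

function dispatch(proposal: ProposedAction): string {
  if (!ALLOWED_ACTIONS.has(proposal.name)) {
    // Model output is untrusted: reject anything off the allow-list.
    throw new Error(`Rejected unapproved action: ${proposal.name}`);
  }
  // ...invoke the approved handler here...
  return `running ${proposal.name}`;
}
```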
Framework-specific considerations:

- Adversarial frameworks (courtroom, red-blue-team): may generate offensive arguments
- Consensus frameworks (jury, parliament): may amplify biases in training data
- Pre-mortem/post-mortem frameworks: may surface sensitive failure scenarios
Always review outputs before acting on them.
If you discover a security issue and would like to be credited, we will acknowledge your contribution in the fix release notes (with your permission).
This security policy may be updated over time. Check back periodically for changes.
Last updated: 2026-01-30