| Version | Supported |
|---|---|
| 0.2.x | Yes |
| < 0.2 | No |
If you discover a security vulnerability in Aura, please report it responsibly:
- Do not open a public issue.
- Report the vulnerability privately to the maintainers, preferably by opening a GitHub Security Advisory on the repository.
- Include steps to reproduce, affected versions, and potential impact.
We will acknowledge receipt within 48 hours and aim to provide a fix or mitigation plan within 7 days.
Aura runs inference entirely on-device. The primary security considerations are:
- Model file integrity: Downloaded models are verified with SHA-256 checksums (a verification sketch follows this list).
- Character card parsing: PNG steganography and JSON parsing handle untrusted input; vulnerabilities in these parsers are in scope (see the parsing sketch below).
- Local data storage: Conversation history, character cards, and preferences are stored locally on the device.
- No network after setup: After the initial model download, Aura does not make network requests during normal use.
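To illustrate the integrity check above, here is a minimal sketch of streaming SHA-256 verification. Aura itself is a Flutter app, so this Python version is illustrative only; the function names and the source of the pinned checksum are assumptions, not Aura's actual API.

```python
import hashlib
import hmac
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so multi-gigabyte model files
    never have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_sha256: str) -> None:
    """Compare the computed digest against a pinned checksum
    (hypothetical; Aura may obtain it differently). hmac.compare_digest
    gives a constant-time comparison; for a local file an ordinary ==
    would also be acceptable."""
    actual = sha256_of(path)
    if not hmac.compare_digest(actual, expected_sha256.lower()):
        raise ValueError(
            f"checksum mismatch for {path.name}: "
            f"expected {expected_sha256}, got {actual}"
        )
```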
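The character card parsing surface can be sketched the same way. The snippet below assumes the common community convention of a base64-encoded JSON payload stored in a PNG tEXt chunk keyed `chara`; Aura's actual embedding format and parser may differ. The point is the defensive posture: validate the signature, bound chunk sizes, and treat every decoded field as untrusted.

```python
import base64
import json
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"
MAX_CHUNK = 4 * 1024 * 1024  # reject absurdly large chunks up front

def extract_character_card(png_bytes: bytes) -> dict:
    """Walk PNG chunks looking for a tEXt chunk keyed 'chara' and
    decode its base64 JSON payload, treating all input as untrusted."""
    if not png_bytes.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    pos = len(PNG_SIGNATURE)
    while pos + 8 <= len(png_bytes):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        if length > MAX_CHUNK:
            raise ValueError("oversized chunk rejected")
        end = pos + 8 + length + 4
        if end > len(png_bytes):
            raise ValueError("truncated chunk")
        if ctype == b"tEXt":
            keyword, _, text = png_bytes[pos + 8:pos + 8 + length].partition(b"\x00")
            if keyword == b"chara":
                card = json.loads(base64.b64decode(text, validate=True))
                if not isinstance(card, dict):
                    raise ValueError("card payload is not a JSON object")
                return card
        if ctype == b"IEND":
            break
        pos = end
    raise ValueError("no character card found")
```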
The following are out of scope:
- Vulnerabilities in upstream dependencies (Flutter, LiteRT-LM) should be reported to their respective maintainers.
- Jailbreaking or prompt injection against the local LLM is not a security vulnerability; it is expected behavior for a local, user-controlled model.