👁️ Multimodal reasoning tracer: see what models see, and why they decide.
Miru (見る): to see, to observe.
Not just outputs, but perception and reasoning.
Miru is a multimodal explainability engine:

- Input: an image or document plus a question
- Output:
  - the answer
  - a reasoning trace
  - an attention visualization
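The input/output contract above could be served as a single JSON payload. Here is a minimal sketch of what such a trace might look like; the field names and values are illustrative assumptions, not Miru's actual schema:

```python
import json

# Hypothetical Miru-style trace payload (field names are assumptions
# for illustration, not the project's actual response schema).
raw = json.dumps({
    "answer": "The invoice total is $1,280.00",
    "reasoning": [
        "Located the table in the lower half of the document.",
        "Identified the row labeled 'Total'.",
        "Read the adjacent cell value.",
    ],
    "attention": {
        "shape": [2, 2],           # coarse grid over the image
        "weights": [[0.05, 0.10],  # higher weight = more model focus
                    [0.15, 0.70]],
    },
})

trace = json.loads(raw)
print(trace["answer"])
print(len(trace["reasoning"]), "reasoning steps")
```

Keeping the answer, the reasoning steps, and the attention weights in one payload is what makes the trace auditable after the fact.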
Multimodal models are black boxes:
- No visibility into reasoning
- No auditability
- No explainability
This is a critical problem for:
- compliance
- healthcare
- enterprise AI
Under the hood, Miru builds on:
- Vision-language models (VLMs)
- Cross-attention mechanisms
- Saliency & interpretability
- Multimodal reasoning
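Cross-attention weights only become a usable saliency overlay once they are normalized to a common scale. A minimal sketch of that step, with made-up attention values (min-max normalization is one common choice; it is an assumption here, not necessarily Miru's method):

```python
# Sketch: turn a cross-attention map over image patches into a
# normalized saliency grid suitable for overlaying on the input image.

def attention_to_saliency(weights):
    """Min-max normalize a 2D grid of attention weights to [0, 1]."""
    flat = [w for row in weights for w in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # avoid division by zero on uniform maps
    return [[(w - lo) / span for w in row] for row in weights]

# Illustrative 3x3 patch grid: the model focuses on the center patch.
attn = [
    [0.02, 0.05, 0.03],
    [0.04, 0.40, 0.08],
    [0.03, 0.20, 0.15],
]
saliency = attention_to_saliency(attn)
print(saliency[1][1])  # the most-attended patch normalizes to 1.0
```

Each normalized cell can then be mapped to an alpha value or colormap entry and blended over the corresponding image region.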
Stack:
- 🐍 Python (FastAPI backend)
- 🎨 Visualization layer (attention maps, overlays)
Run the API locally:

```shell
uvicorn miru.main:app --reload
```

Make AI reasoning visible.