Self-host MiroFish, run the seed-to-simulation workflow end to end, and integrate it through the Web UI and HTTP API.
Last verified against the official MiroFish GitHub repository and mirofish.ai on April 16, 2026.
This folder helps you understand how to run, evaluate, and integrate MiroFish from a developer's point of view.
It focuses on:
- how to run MiroFish locally with the fewest wrong turns,
- how the seed -> graph -> simulation -> report pipeline actually works,
- which files, env vars, ports, and APIs matter on day one,
- what tends to break in self-hosted setups.
Scope boundary:
- This folder contains documentation only.
- It does not contain the upstream MiroFish source code.
- If anything here conflicts with the official repo code, trust the official repo.
| File | Best for |
|---|---|
| guide/installation.md | Clean source install and Docker install. |
| guide/quickstart.md | The fastest path to your first useful simulation and report. |
| guide/configuration.md | Required env vars, defaults, ports, storage paths, and production-sensitive settings. |
| guide/deployment.md | Choosing source vs Docker and hardening a self-hosted deployment. |
| guide/troubleshooting.md | First-day failures and the shortest fixes. |
| features/workflow.md | The end-to-end pipeline and which artifact each step produces. |
| reference/http-api.md | The backend routes ordinary developers actually need. |
| reference/api-recipes.md | Copy-paste curl workflows for driving MiroFish without the UI. |
| reference/config.md | Quick-reference table for runtime settings. |
| developer/architecture.md | How the Vue frontend, Flask backend, and simulation services fit together. |
MiroFish is not a terminal-first coding agent. It is a web application plus HTTP API for multi-agent scenario simulation.
For ordinary developers, the important mental model is:
- upload or paste seed material,
- describe the scenario you want to predict,
- let MiroFish generate an ontology and graph,
- prepare a social simulation with generated agent profiles,
- run the simulation,
- generate a structured report,
- interrogate the report agent or simulated agents.
Use MiroFish when you need:
- reaction modeling across multiple personas or factions,
- narrative and behavioral analysis instead of one-shot chatbot output,
- a repeatable seed -> simulation -> report workflow,
- self-hosted control over model provider and runtime.
If you are evaluating MiroFish as part of a larger agent stack, these three product pages solve adjacent problems that are often confused with simulation systems:
| Product | Link | Why a MiroFish reader should care |
|---|---|---|
| OpenClaw Launch | https://www.aigeamy.com/ | OpenClaw Launch - One-click OpenClaw Deployment. Useful when you want a fast path to a messaging-first assistant runtime instead of a seed -> graph -> simulation workflow. |
| Hermes Agent | https://hermesagent.studio/ | Not a Chat. A Persistent Agent That Remembers, Acts, and Improves. Useful when you want a long-lived agent with memory, tools, and ongoing state rather than scenario modeling. |
| Multica | https://www.multica.uk/ | Multica - Multi-Agent Collaboration for Real Workflows. Useful when your main problem is coordinating coding agents and issue-driven delivery, not generating simulation reports. |
Clone the upstream project first:

```shell
git clone https://github.com/666ghj/MiroFish.git
cd MiroFish
cp .env.example .env
```

PowerShell note: replace `cp` with `Copy-Item` if you prefer native syntax.
Edit `.env` and fill in at least:

```shell
LLM_API_KEY=...
LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
LLM_MODEL_NAME=qwen-plus
ZEP_API_KEY=...
```

Install and start:

```shell
npm run setup:all
npm run dev
```

Then verify:
- frontend: http://localhost:3000
- backend: http://localhost:5001/health
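On slower machines the backend can take a few seconds to come up, so a small poll loop avoids testing against a half-started server. `wait_health` is a hypothetical helper, not part of the upstream repo; it only assumes the default port and `/health` route documented above.

```shell
# wait_health: poll the backend health endpoint until it answers.
# Hypothetical helper, not shipped with MiroFish; default URL from this guide.
wait_health() {
  url="${1:-http://localhost:5001/health}"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -f: treat HTTP errors as failures, -s: no progress output
    if curl -fs "$url" >/dev/null 2>&1; then
      echo "backend up"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "backend not responding at $url"
  return 1
}
```

Call `wait_health` after `npm run dev` and before driving the API from scripts; it exits nonzero if the backend never answers.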
Quick backend smoke test:

```shell
curl http://localhost:5001/health
```

If the backend does not start, the most common cause is a missing `LLM_API_KEY` or `ZEP_API_KEY`.
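Since missing keys are the most common startup failure, a pre-flight check catches them before `npm run dev`. This is a minimal sketch: `require_env` is a hypothetical helper, and the key names come from the `.env` section above.

```shell
# require_env: report which required .env keys are missing from the current
# environment (hypothetical helper; key names from this guide).
require_env() {
  missing=""
  for var in "$@"; do
    # POSIX-portable indirect expansion of the variable named in $var
    eval "val=\${$var:-}"
    [ -z "$val" ] && missing="$missing $var"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "env ok"
}
```

Run it as `require_env LLM_API_KEY ZEP_API_KEY`; it exits nonzero when anything is unset, so it can gate the dev servers in a wrapper script.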
| Step | What you do | What you get back |
|---|---|---|
| 1. Ontology generation | Upload pdf / md / txt seed files or paste text, then describe the prediction requirement. | project_id, extracted text, ontology draft, seed metadata |
| 2. Graph build | Confirm the ontology and build the graph. | task_id, then graph_id and graph data |
| 3. Simulation creation | Choose the project and enable Twitter-like and/or Reddit-like simulation. | simulation_id |
| 4. Preparation | Generate profiles and simulation config. | profiles, simulation_config.json, runnable scripts, ready state |
| 5. Run | Start the simulation with a conservative round count first. | run status, timeline, actions, posts/comments, agent stats |
| 6. Report | Generate a structured markdown report. | report_id, sections, logs, downloadable markdown |
| 7. Deep interaction | Ask the report agent follow-up questions or interview simulated agents. | richer explanations, counterfactuals, persona-level answers |
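Steps 2 and 6 return an id and complete asynchronously, so scripted use of the API is mostly a submit-then-poll loop. The sketch below is an assumption-laden illustration: the `/api/tasks/<id>` route and the JSON `status` field are placeholders, not confirmed routes; substitute the real endpoints from reference/http-api.md.

```shell
# poll_task: poll an async task until it reports a terminal status.
# The route /api/tasks/<id> and the "status" field are illustrative
# assumptions; check reference/http-api.md for the real endpoint.
BASE="${BASE:-http://localhost:5001}"
poll_task() {
  task_id="$1"
  while :; do
    # crude JSON field extraction to keep the sketch dependency-free
    status=$(curl -s "$BASE/api/tasks/$task_id" | grep -o '"status"[: ]*"[a-z_]*"')
    case "$status" in
      *done*|*completed*|*failed*) echo "$status"; return 0 ;;
    esac
    sleep 5
  done
}
```

In practice you would parse the response with `jq` and pull the resulting `graph_id` or `report_id` out of the final payload.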
- Node.js 18+
- Python 3.11 to 3.12
- `uv` installed and usable in `PATH`
- `LLM_API_KEY` configured
- `ZEP_API_KEY` configured
- ports `3000` and `5001` free
- enough budget and patience for your model choice
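The toolchain items in that checklist can be verified in one pass. `check_tool` is a hypothetical helper; the tool list mirrors the checklist, and the version requirements (Node 18+, Python 3.11 to 3.12) still need to be confirmed by eye from each tool's `--version` output.

```shell
# check_tool: report whether a required binary is on PATH
# (hypothetical helper; tool list mirrors the checklist above).
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "MISSING: $1"
  fi
}

for tool in node npm python3 uv; do
  check_tool "$tool"
done
```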
Practical tip: the official README recommends starting with simulations under 40 rounds until you understand cost and runtime behavior.
- There is no separate end-user CLI today; the main entrypoint is the web UI plus the Flask HTTP API.
- The backend loads `.env` from the project root, not from `backend/`.
- Frontend requests default to `http://localhost:5001` unless you override `VITE_API_BASE_URL`.
- Uploads are limited to 50 MB and the allowed extensions are `pdf`, `md`, `txt`, and `markdown`.
- Project data and simulation artifacts persist under `backend/uploads/`.
- Async task status is tracked in memory, so restarting the backend during long graph/report jobs can lose live progress state even if some artifacts were already written.
- The default Flask config is development-friendly, not internet-safe: debug defaults to on, CORS is open for `/api/*`, and the default secret key is not production grade.
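Those defaults suggest a short hardening pass before exposing a deployment. The fragment below is illustrative only: `FLASK_DEBUG` and `SECRET_KEY` are standard Flask settings, but confirm the exact keys MiroFish actually reads in guide/configuration.md and guide/deployment.md.

```shell
# Illustrative .env hardening overrides -- exact key names may differ;
# confirm against guide/configuration.md before relying on them.
FLASK_DEBUG=0                          # standard Flask switch for debug mode
SECRET_KEY="$(openssl rand -hex 32)"   # replace the development default
```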
| Resource | Link |
|---|---|
| Official site | https://mirofish.ai/ |
| GitHub repository | https://github.com/666ghj/MiroFish |
| Official English README | https://github.com/666ghj/MiroFish/blob/main/README.md |
| Official Chinese README | https://github.com/666ghj/MiroFish/blob/main/README-ZH.md |
| Live demo | https://666ghj.github.io/mirofish-demo/ |
| License | https://github.com/666ghj/MiroFish/blob/main/LICENSE |