Local-first multi-agent simulation and prediction engine.
This project is a derivative work of:

- Upstream: https://github.com/666ghj/MiroFish.git
- Target repository: https://github.com/oswarld/mirollama
This repository is tuned for the easiest onboarding path:
- Run a local Ollama model
- Clone the repo
- Install dependencies
- Start both services
No paid API key is required for the default local setup.
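At a glance, the default local path condenses to the commands below; each step is detailed later in this README, and `gpt-oss:20b` is just one of the model tags listed in `.env.example`:

```bash
# Pull a local model (Ollama serves it on http://localhost:11434 by default)
ollama pull gpt-oss:20b

# Clone, configure, install, and run
git clone https://github.com/oswarld/mirollama.git mirollama
cd mirollama
cp .env.example .env
npm run setup:all
npm run dev
```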
What's included:

- Frontend UI (Vite + Vue) for graph build, simulation, report, and interaction
- Flask backend APIs for simulation workflows
- Local-first LLM execution through Ollama's OpenAI-compatible endpoint (see the quick check after this list)
- Optional search providers (`none`, `searxng`, `zep`)
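Because the LLM runs through Ollama's OpenAI-compatible endpoint, it can be sanity-checked independently of the backend; a verification sketch, assuming `gpt-oss:20b` has been pulled:

```bash
# Minimal OpenAI-style chat completion against local Ollama
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-oss:20b",
        "messages": [{"role": "user", "content": "Reply with one short sentence."}]
      }'
```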
Repository layout:

- Frontend: `frontend` (Vite dev server, default port 3000)
- Backend: `backend` (Flask API, default port 5001)
- Root scripts: install and run frontend/backend together
- Shared env file: root `.env` (loaded by backend)
Install these first:
- Node.js >= 18
- Python >= 3.11
- uv (Python package/dependency runner)
- Ollama (running locally)
Quick checks:
```bash
node -v
python --version
uv --version
ollama --version
```

Clone the repository:

```bash
git clone https://github.com/oswarld/mirollama.git mirollama
cd mirollama
```

Use one model that exists in `.env.example`:

- `gpt-oss:120b`
- `gpt-oss:20b`
- `gemma4:31b`
- `gemma4:26b`
Example:
```bash
ollama pull gpt-oss:20b
```

Ollama endpoint expected by default: `http://localhost:11434/v1`
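If Ollama is running, the endpoint can be probed directly; a quick check (the model list in the response depends on what you pulled):

```bash
# Lists the locally available models via the OpenAI-compatible API
curl http://localhost:11434/v1/models
```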
Copy defaults:
```bash
cp .env.example .env
```

Default mode is fully local/offline-friendly:

- `LLM_BASE_URL=http://localhost:11434/v1`
- `SEARCH_PROVIDER=none`
- `LLM_API_KEY` can stay unset for local Ollama

Only change `LLM_MODEL_NAME` if you pulled a different model tag.
One command:
```bash
npm run setup:all
```

Equivalent step-by-step:

```bash
npm run setup
npm run setup:backend
```

Start both services:

```bash
npm run dev
```

Services:
- Frontend: http://localhost:3000
- Backend API: http://localhost:5001
- Health check: http://localhost:5001/health (see the curl example below)
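With `npm run dev` running, the backend can be probed from the command line; this assumes the health endpoint returns a plain HTTP 200:

```bash
# -i prints the status line so a 200 is easy to spot
curl -i http://localhost:5001/health
```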
To use SearXNG, set in `.env`:

```
SEARCH_PROVIDER=searxng
SEARXNG_BASE_URL=http://localhost:8080
WEB_SEARCH_LANGUAGE=ko-KR
WEB_SEARCH_LIMIT=10
```

To use Zep, set in `.env`:

```
SEARCH_PROVIDER=zep
ZEP_API_KEY=your_zep_api_key_here
```

`ZEP_API_KEY` is required only when `SEARCH_PROVIDER=zep`.
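If you enabled SearXNG above, a quick reachability check (this assumes your SearXNG instance has the JSON output format enabled in its settings; otherwise it returns an error):

```bash
# Should return JSON search results from the local SearXNG instance
curl "http://localhost:8080/search?q=test&format=json"
```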
Run services separately:

```bash
npm run backend
npm run frontend
```

The repository includes docker-compose.yml:

```bash
cp .env.example .env
docker compose up -d
```

Published ports:

- 3000 (frontend)
- 5001 (backend)
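After `docker compose up -d`, two generic checks (they make no assumptions about service names, so logs are tailed for the whole project):

```bash
# Container status for this compose project
docker compose ps

# Follow logs from all services (Ctrl+C to stop)
docker compose logs -f
```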
If `uv` is missing, install uv, then rerun:

```bash
npm run setup:backend
```

If you see LLM API key errors, you are likely using a non-local LLM endpoint.
- For Ollama: keep `LLM_BASE_URL` as `http://localhost:11434/v1`
- For a cloud endpoint: set `LLM_API_KEY` in `.env` (illustrative example below)
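For illustration only, a hypothetical cloud configuration; the URL and key below are placeholders, and only the `LLM_BASE_URL` / `LLM_API_KEY` key names come from `.env.example`:

```bash
# Hypothetical cloud endpoint (replace with your provider's OpenAI-compatible URL and key)
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-your-key-here
```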
If a model is not found, it is usually a model tag mismatch.

- Check your pulled models: `ollama list`
- Ensure `LLM_MODEL_NAME` in `.env` matches the pulled tag exactly
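A quick way to compare the pulled tags with the configured one (the `grep` line is just an illustration of reading the key from the shared `.env`):

```bash
# Tags installed locally
ollama list

# Tag the backend will request
grep LLM_MODEL_NAME .env
```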
If ports 3000 / 5001 are already taken, free them or override:

- Backend: `FLASK_PORT=<new_port>`
- Frontend API target: `VITE_API_BASE_URL=http://localhost:<backend_port>`
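For example, overriding both values so the backend runs on 5002 (5002 is an arbitrary example, shown as `.env`-style entries matching this repo's shared-env convention):

```bash
# Example override: backend on 5002, frontend API target pointed at it
FLASK_PORT=5002
VITE_API_BASE_URL=http://localhost:5002
```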
Tech stack:

- Frontend: Vue 3, Vite, Vue Router, Vue i18n, Axios, D3
- Backend: Flask, OpenAI SDK-compatible client, CAMEL/OASIS dependencies
- Runtime model provider: Ollama (default), or any OpenAI-compatible API
- This project is licensed under the MIT License (see `LICENSE`).
- This repository is a derivative work based on 666ghj/MiroFish.
- Upstream repository: https://github.com/666ghj/MiroFish.git
- Current repository: https://github.com/oswarld/mirollama
- Derivative notices and attribution details: `NOTICE`
- Keep root `.env` as the single source for runtime config
- Preserve local-first defaults unless explicitly changing product direction
- If you change setup scripts or env keys, update this README in the same PR