
# MiroFish

A swarm intelligence prediction engine. Upload documents describing any scenario, and MiroFish simulates thousands of AI agents reacting on social media to predict how events will unfold.

Live: synth.scty.org

Fork of 666ghj/MiroFish — fully translated to English, local KuzuDB graph storage, Claude/Codex CLI support added.

## What it does

1. **Upload reality seeds** — PDFs, markdown, or text files (news articles, policy drafts, financial reports, anything)
2. **Describe what to predict** — a natural-language prompt (e.g., "Predict public reaction to this policy over 60 days")
3. **MiroFish builds a world** — extracts entities and relationships into a knowledge graph, then generates AI agent personas with distinct personalities and opinions
4. **Agents simulate social media** — a dual-platform simulation (Twitter + Reddit) where agents post, reply, like, argue, and follow each other
5. **Get a prediction report** — AI analyzes all simulation data and produces findings. Chat with the report agent or interview individual simulated agents.

## Changes from upstream

| Area | Upstream | This fork |
|---|---|---|
| Language | Chinese UI + prompts | Full English (60+ files translated) |
| LLM providers | Alibaba Qwen only | OpenAI, Anthropic, Claude CLI, Codex CLI |
| Graph database | Hosted graph service | Local KuzuDB (embedded, free) |
| Entity extraction | Managed extraction pipeline | LLM-based extraction (uses your own model) |
| Auth | Requires API keys | Can use Claude Code or Codex CLI subscriptions (no separate API cost) |

## Quick start

### Prerequisites

- Node.js 18+
- Python 3.11-3.12
- uv (Python package manager)

### Setup

```sh
cp .env.example .env
# Edit .env — pick your LLM provider (see below)
npm run setup:all
npm run dev
```

### Docker

```sh
cp .env.example .env
docker compose up -d --build
```

Docker builds the Vue frontend, serves it from the Flask app, and exposes the combined app on port 5001 inside the container.

## LLM providers

Set `LLM_PROVIDER` in `.env`:

| Provider | Config | Cost |
|---|---|---|
| `claude-cli` | Just set `LLM_PROVIDER=claude-cli` | Uses your Claude Code subscription |
| `codex-cli` | Just set `LLM_PROVIDER=codex-cli` | Uses your Codex CLI subscription |
| `openai` | Set `LLM_API_KEY` + `LLM_MODEL_NAME` | Pay-per-token |
| `anthropic` | Set `LLM_API_KEY` + `LLM_MODEL_NAME` | Pay-per-token |

```sh
# Example: use Codex CLI (no API key needed)
LLM_PROVIDER=codex-cli

# Example: use OpenAI API
LLM_PROVIDER=openai
LLM_API_KEY=sk-...
LLM_MODEL_NAME=gpt-4o-mini
```
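The provider split can be sketched roughly like this — a minimal illustration of validating these `.env` settings, where `resolve_provider` and the returned field names are hypothetical, not the actual `llm_client.py` interface:

```python
# Hypothetical sketch: CLI providers need no key, API providers need key + model.
API_KEY_PROVIDERS = {"openai", "anthropic"}   # pay-per-token, need LLM_API_KEY
CLI_PROVIDERS = {"claude-cli", "codex-cli"}   # authenticate via the local CLI's session

def resolve_provider(env: dict) -> dict:
    """Validate .env-style settings and return a normalized config."""
    provider = env.get("LLM_PROVIDER", "").strip()
    if provider in CLI_PROVIDERS:
        # No API key or model name required; the CLI tool handles auth itself.
        return {"provider": provider, "api_key": None, "model": None}
    if provider in API_KEY_PROVIDERS:
        key = env.get("LLM_API_KEY")
        model = env.get("LLM_MODEL_NAME")
        if not key or not model:
            raise ValueError(f"{provider} requires LLM_API_KEY and LLM_MODEL_NAME")
        return {"provider": provider, "api_key": key, "model": model}
    raise ValueError(f"unknown LLM_PROVIDER: {provider!r}")

print(resolve_provider({"LLM_PROVIDER": "codex-cli"}))
print(resolve_provider({
    "LLM_PROVIDER": "openai",
    "LLM_API_KEY": "sk-test",
    "LLM_MODEL_NAME": "gpt-4o-mini",
}))
```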

## Architecture

```
frontend/          Vue 3 + Vite + D3.js (graph visualization)
backend/
  app/
    api/           Thin Flask REST endpoints (graph, simulation, report)
    core/          Workbench session, session registry, resource loader, tasks
    resources/     Adapters for projects, documents, Kuzu, simulations, reports
    tools/         Composable workbench operations (ingest, build, prepare, run, report)
    services/
      graph_db.py          KuzuDB-backed knowledge graph
      entity_extractor.py  LLM-based entity/relationship extraction
      graph_builder.py     Ontology → graph pipeline
      simulation_runner.py OASIS multi-agent simulation (subprocess)
      report_agent.py      ReACT agent with tool-calling for reports
      kuzu_tools.py        Search, interview, and analysis tools
    utils/
      llm_client.py        Multi-provider LLM client (OpenAI/Anthropic/CLI)
  scripts/         OASIS simulation runner scripts (Twitter + Reddit)
```

Workbench session metadata is persisted under `backend/uploads/workbench_sessions/`, and long-running task state is persisted under `backend/uploads/tasks/`.
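The persistence scheme amounts to one JSON file per task or session id. A toy sketch of that pattern, assuming a layout of one `<task_id>.json` per task (the real file layout under `backend/uploads/tasks/` may differ):

```python
import json
import tempfile
from pathlib import Path

def save_task(root: Path, task_id: str, state: dict) -> Path:
    """Write task state as JSON so it survives backend restarts."""
    root.mkdir(parents=True, exist_ok=True)
    path = root / f"{task_id}.json"
    path.write_text(json.dumps(state))
    return path

def load_task(root: Path, task_id: str) -> dict:
    """Reload a task's last persisted state."""
    return json.loads((root / f"{task_id}.json").read_text())

root = Path(tempfile.mkdtemp()) / "tasks"
save_task(root, "sim-001", {"status": "running", "step": 42})
print(load_task(root, "sim-001"))  # {'status': 'running', 'step': 42}
```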

The backend is being refactored toward a pi-style shape: one workbench session core, pluggable resource adapters, composable tools, and thin API shells.
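The "pluggable resource adapters" idea can be illustrated as a session core that looks up adapters by name behind a shared interface. Every class and method name below is invented for illustration; the actual `backend/app` code may be shaped quite differently:

```python
from typing import Protocol

class ResourceAdapter(Protocol):
    """Minimal interface every adapter is assumed to implement."""
    def describe(self) -> str: ...

class KuzuAdapter:
    def describe(self) -> str:
        return "kuzu: embedded knowledge graph"

class ReportAdapter:
    def describe(self) -> str:
        return "reports: generated prediction reports"

class WorkbenchSession:
    """One session core; adapters plug in by name."""
    def __init__(self) -> None:
        self.adapters: dict[str, ResourceAdapter] = {}

    def register(self, name: str, adapter: ResourceAdapter) -> None:
        self.adapters[name] = adapter

    def resource(self, name: str) -> ResourceAdapter:
        return self.adapters[name]

session = WorkbenchSession()
session.register("kuzu", KuzuAdapter())
session.register("reports", ReportAdapter())
print(session.resource("kuzu").describe())  # kuzu: embedded knowledge graph
```

The payoff of this shape is that tools and API endpoints depend only on the `ResourceAdapter` interface, so new resource types can be added without touching the session core.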

## How the pipeline works

```
Document upload → LLM ontology extraction → Knowledge graph (KuzuDB)
    → Entity filtering → Agent persona generation (LLM)
    → OASIS dual-platform simulation (Twitter + Reddit subprocess)
    → Graph memory updates → Report generation (ReACT agent)
    → Interactive chat with report agent or individual agents
```
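The diagram above is a straight-line composition of stages, each consuming the previous stage's output. A toy sketch of that shape, where the stage names mirror the diagram but the bodies are stand-ins rather than the real implementations:

```python
def extract_ontology(documents):
    # Stand-in for LLM ontology extraction into KuzuDB.
    return {"entities": ["PolicyX", "AgencyY"],
            "relations": [("AgencyY", "drafted", "PolicyX")]}

def generate_personas(graph):
    # Stand-in for LLM persona generation, one persona per entity per platform.
    return [{"name": e, "platform": p}
            for e in graph["entities"] for p in ("twitter", "reddit")]

def run_simulation(personas):
    # Stand-in for the OASIS dual-platform subprocess.
    return [{"author": a["name"], "platform": a["platform"], "action": "post"}
            for a in personas]

def write_report(events):
    # Stand-in for the ReACT report agent.
    return f"{len(events)} simulated actions analyzed"

def pipeline(documents):
    graph = extract_ontology(documents)   # LLM ontology extraction → KuzuDB
    personas = generate_personas(graph)   # agent persona generation
    events = run_simulation(personas)     # dual-platform simulation
    return write_report(events)           # report generation

print(pipeline(["policy_draft.pdf"]))  # 4 simulated actions analyzed
```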

## Acknowledgments

- [MiroFish](https://github.com/666ghj/MiroFish) by 666ghj — original project
- OASIS by CAMEL-AI — multi-agent social simulation framework
- KuzuDB — embedded graph database

## License

AGPL-3.0
