Professional AI-Driven Bioinformatics Orchestration for OpenClaw
ClawOmics transforms your OpenClaw instance into a bioinformatics agent framework. By combining a master orchestrator with a library of specialized scientific skills, it turns raw biological data into confirmable, executable analysis workflows.
If you only care about the easiest way to run ClawOmics, use this:
```bash
cd /Users/zhangyifan/clawomics
npm install
npm link
clawomics start
```

After `clawomics start`, keep that process running and do the rest in your MCP-enabled chat client.
If your MCP client already knows how to spawn `clawomics-mcp-server`, you usually do not need `clawomics start` at all. In production, the host should auto-launch the server on demand.
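If you go that route, the host spawns the server from its own configuration. A minimal sketch of such an entry (the exact file location and schema vary by host; the `mcpServers` key follows the common MCP host convention and is an assumption here, while `clawomics-mcp-server` is the command named above):

```json
{
  "mcpServers": {
    "clawomics": {
      "command": "clawomics-mcp-server"
    }
  }
}
```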
The intended end-user flow is:
- start ClawOmics once
- open OpenClaw / Codex / another MCP-capable client
- say where the data are
- say "确认执行" (confirm execution) when you want it to proceed
- 🧠 Automatic Planning: ClawOmics profiles your dataset (FASTQ, H5AD, BAM, VCF) and generates a structured first-pass analysis plan.
- 🧩 Mixed Dataset Triage: Mixed input folders are partitioned into analysis units so raw reads, VCFs, and processed tables can be handled separately.
- 🧪 Assay Routing: Raw sequencing inputs now produce assay candidates such as `bulk-rnaseq` or `dna-seq`, with follow-up questions when confidence is low.
- 🧭 Agent Framework: Outputs include explicit agent state, confirmation gates, and persistent run artifacts that OpenClaw can use across turns.
- 🛠️ Batteries Included: Pre-integrated with 200+ skills, including Scanpy, DeepTools, Biopython, and database connectors for Ensembl, ClinVar, and AlphaFold.
- 📦 Seamless Environment Control: Automated `Conda` and `Mamba` management to ensure reproducible, version-stable scientific workflows.
- 📖 AI-Driven Narrative: Technical results are translated into biological insights, providing context-aware summaries of complex multi-omics data.
v1.2
- 🧵 Context Isolation: chat hosts can now pass `context_key` so Feishu, Telegram, and other concurrent threads do not share the same remembered bridge state
- 🧹 Context Cleanup: new `clear-context` CLI and `clawomics_clear_context` MCP tool to drop stale lightweight bridge state safely
- 🔧 CLI Interface: new `clawomics.mjs` CLI for one-command operations
- 🧰 Global Command Entry: `npm link` now exposes a reusable `clawomics` command
- 🪄 One-Command Startup: `clawomics start` checks MCP readiness and starts the chat bridge
- 🔌 MCP Server: new local MCP server for chat-first integration with OpenClaw and other MCP-capable clients
- 🗂️ Dataset Profiling: `bio-expert` now emits structured dataset profiles for OpenClaw
- 🧭 Auto Planning: new `plan` command builds first-pass workflows from detected evidence
- 🪓 Dataset Partitioning: new `partition` command separates mixed directories into analysis units
- 💾 JSON Artifacts: `profile`, `partition`, and `plan` can now be written to disk for downstream automation
- ▶️ Confirmed Run Bootstrap: `run` creates a tracked workspace with a manifest and step scripts after the user confirms execution
- 🧪 Demo Data Generator: `generate_demo_data.mjs` creates test datasets instantly
- 🧠 Working Orchestrator: `bio-expert/scripts/orchestrator.mjs` profiles datasets and drafts workflow plans
- 📊 Resource Summary: auto-generated skill statistics table in `RESOURCES.md`
- 📖 Cookbook: new `docs/COOKBOOK.md` with prompt templates
ClawOmics now operates as a chat-first workflow layer with three surfaces:
- chat clients call the MCP bridge
- the bridge calls the `bio-expert` orchestrator
- the orchestrator emits durable artifacts and run workspaces
This is the intended boundary:
- `bio-expert` skill: domain policy and workflow semantics
- orchestrator: real profiling, planning, session, and run logic
- MCP bridge: thin transport layer for chat clients
If you only need a local OpenClaw skill, you can think mostly in terms of `bio-expert`.
If you need Feishu, Telegram, or other chat channels to auto-route and resume state, the MCP bridge becomes necessary.
```mermaid
graph TD
    User["User"] --> Client["OpenClaw / Codex / Gemini Host"]
    Client --> MCP["clawomics-mcp-server"]
    MCP --> Turn["clawomics_agent_turn"]
    Turn --> Orchestrator["bio-expert orchestrator"]
    Orchestrator --> Profile["dataset_profile.json"]
    Orchestrator --> Partitions["dataset_partitions.json"]
    Orchestrator --> Plan["analysis_plan.json"]
    Orchestrator --> Session["agent_session.json"]
    Orchestrator --> Bridge[".clawomics/openclaw_context.json or .clawomics/contexts/<context-key>.json"]
    Orchestrator --> Skills["skills/ registry"]
    Skills --> S1["scanpy / scvi-tools"]
    Skills --> S2["deeptools / pysam"]
    Skills --> S3["database connectors"]
    Orchestrator --> Run["clawomics_runs/<run-id>/"]
    Run --> Manifest["run_manifest.json"]
    Run --> Commands["commands/*.sh"]
    classDef client fill:#f7f1e3,stroke:#8c6d1f,color:#222;
    classDef core fill:#e6f4ea,stroke:#2f855a,color:#222;
    classDef artifact fill:#e8f0fe,stroke:#356ac3,color:#222;
    classDef run fill:#fce8e6,stroke:#c53929,color:#222;
    class Client,MCP,Turn client;
    class Orchestrator,Skills,S1,S2,S3 core;
    class Profile,Partitions,Plan,Session,Bridge artifact;
    class Run,Manifest,Commands run;
```
```mermaid
sequenceDiagram
    participant U as User
    participant C as Chat Client
    participant M as MCP Server
    participant O as bio-expert
    U->>C: "There is data in /data/project1, please analyze it"
    C->>M: clawomics_agent_turn(message)
    M->>O: handleAgentMessage()
    O-->>M: profile + plan + confirmation prompt
    M-->>C: assistantReply
    C-->>U: Show the plan and ask for confirmation
    U->>C: "确认执行" (confirm execution)
    C->>M: clawomics_agent_turn(message)
    M->>O: resume latest bridge state
    O-->>M: run workspace + manifest
    M-->>C: assistantReply + run paths
    C-->>U: Report that the run workspace was created
```
Clone ClawOmics into your OpenClaw workspace skills directory:
```bash
cd ~/.openclaw/workspace/skills
git clone https://github.com/yf8578/clawomics.git
```

For normal use, you usually only need:

```bash
cd /Users/zhangyifan/clawomics
npm install
npm link
clawomics start
```

Then switch to your chat client and talk to it directly.
Advanced CLI and Debug Setup
If you want the lower-level commands for debugging or local testing:
```bash
cd clawomics
chmod +x scripts/*.mjs scripts/*.sh

# Initialize environment
node scripts/clawomics.mjs setup

# Simplest daily entrypoint: start the chat bridge once
node scripts/clawomics.mjs start

# Generate demo data for testing
node scripts/clawomics.mjs demo

# Natural-language entrypoint
# ("demo_data 里有数据,帮我分析一下" = "There is data in demo_data, please analyze it")
node scripts/clawomics.mjs agent "demo_data 里有数据,帮我分析一下"

# OpenClaw-friendly compact payload
node scripts/clawomics.mjs agent "demo_data 里有数据,帮我分析一下" --compact

# Confirm and create a run workspace ("确认执行" = "confirm execution")
node scripts/clawomics.mjs agent "确认执行"

# MCP helper commands
node scripts/clawomics.mjs mcp-doctor
node scripts/clawomics.mjs mcp-config
node scripts/clawomics.mjs mcp

# Build profile + partitions + plan in one step
node scripts/clawomics.mjs analyze demo_data --write

# Build a structured dataset profile
node scripts/clawomics.mjs profile demo_data --write

# Generate an automatic analysis plan
node scripts/clawomics.mjs plan demo_data --write

# Split mixed inputs into analysis units
node scripts/clawomics.mjs partition demo_data --write

# After the user confirms, bootstrap a runnable workspace
node scripts/clawomics.mjs run demo_data --approve
```

Update the skill inventory to register all 200+ skills:

```bash
node scripts/inventory_skills.mjs
```

This generates `docs/RESOURCES.md` with a summary table of all available tools.
Refer to our 📖 Cookbook for detailed prompt examples and scenarios.
User: "./data 里有一批测序数据,帮我看看该怎么分析。" ("There is a batch of sequencing data in ./data; help me figure out how to analyze it.")
ClawOmics: "I detected a mixed directory containing FASTQ, VCF, and tabular outputs. I split this into raw-sequencing and variant-analysis units. For the FASTQ unit, assay routing is still low-confidence, so I recommend confirming whether the reads are DNA-seq or RNA-seq before alignment."
When you add `--write`, ClawOmics writes machine-readable artifacts next to the input dataset:

- `analysis_bundle.json`
- `dataset_profile.json`
- `dataset_partitions.json`
- `analysis_plan.json`
- `agent_session.json`
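These artifacts are plain JSON, so downstream automation can consume them with standard tooling. A minimal sketch (the file content below is a mocked stand-in; the real schema is whatever `analyze <dataset> --write` emits):

```shell
# Sketch: consuming a --write artifact downstream. We mock the file here;
# a real one is produced next to the dataset by `analyze demo_data --write`.
mkdir -p demo_data
printf '{"steps": []}\n' > demo_data/analysis_plan.json   # mocked content
python3 -m json.tool demo_data/analysis_plan.json         # pretty-print it
```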
After `run --approve`, ClawOmics also creates a run workspace:

- `clawomics_runs/<run-id>/run_manifest.json`
- `clawomics_runs/<run-id>/commands/*.sh`
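A minimal sketch of how an operator might inspect and execute such a workspace (the run id, manifest field, and step script below are hypothetical mocks, not real ClawOmics output):

```shell
# Sketch: reviewing and running a confirmed run workspace. Everything
# created here is a mocked stand-in for what `run <dataset> --approve`
# would generate.
run_dir="clawomics_runs/demo-run"
mkdir -p "$run_dir/commands"
printf '{"runId": "demo-run"}\n' > "$run_dir/run_manifest.json"
printf '#!/bin/sh\necho step-01-qc\n' > "$run_dir/commands/01_qc.sh"

cat "$run_dir/run_manifest.json"          # review the manifest first
for step in "$run_dir"/commands/*.sh; do  # then execute each step script in order
  sh "$step"
done
```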
ClawOmics is designed to stay simple inside OpenClaw:
- OpenClaw provides the model layer for planning and explanation.
- ClawOmics provides the dataset profiler and workflow scaffolding.
- No separate LLM configuration is required for the first-pass planning flow in this repository.
- The recommended production integration is MCP, so users only interact through the chat box.
For manual local testing, the intended operator flow is:
```bash
clawomics start
```

After that, the rest should happen inside the chat client rather than through more ClawOmics commands.
For a real OpenClaw / Feishu / Telegram deployment, prefer letting the MCP host auto-spawn the server instead of keeping a separate terminal alive.
- User tells OpenClaw where the data live.
- OpenClaw calls `agent "<user-message>"` or `agent "<user-message>" --compact`.
- ClawOmics returns profile, partitions, and a first-pass plan.
- User confirms execution.
- OpenClaw calls `agent "确认执行"` and ClawOmics resumes from the persisted bridge state automatically.
- ClawOmics creates a tracked run workspace and command templates.
`agent_session.json` is the durable per-dataset state. The lightweight conversation bridge is stored in `.clawomics/openclaw_context.json` by default, or `.clawomics/contexts/<context-key>.json` when the host passes a stable chat-specific `context_key`.
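The path selection described above can be sketched as follows (paths are the ones documented here; the context key value is a hypothetical example):

```shell
# Sketch: resolving where the lightweight bridge state lives.
context_key="feishu-thread-42"   # hypothetical key a chat host might pass
if [ -n "$context_key" ]; then
  bridge_file=".clawomics/contexts/${context_key}.json"
else
  bridge_file=".clawomics/openclaw_context.json"
fi
echo "$bridge_file"
```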
- `docs/PRODUCT_FRAMEWORK.md`: product positioning, scope, boundaries, and principles
- `docs/AGENT_PROTOCOL.md`: state machine, OpenClaw flow, and artifact contract
- `docs/OPENCLAW_SYSTEM_PROMPT.md`: ready-to-use system prompt template for the OpenClaw conversation layer
- `docs/OPENCLAW_MCP_SETUP.md`: MCP-based integration guide for the cleanest chat-first experience
- `docs/CHAT_CHANNEL_ROUTING.md`: automatic routing rules for Feishu, Telegram, and other mixed-purpose chat channels
- `skills/bio-expert`: The core orchestration logic.
- `skills/`: Library of 200+ integrated scientific skills.
- `docs/RESOURCES.md`: Full inventory of available tools and categories.
- `docs/PRODUCT_FRAMEWORK.md`: Product definition for the ClawOmics agent.
- `docs/AGENT_PROTOCOL.md`: Runtime contract between OpenClaw and ClawOmics.
- `docs/OPENCLAW_INTEGRATION.md`: Practical routing guide for OpenClaw conversation logic.
- `docs/OPENCLAW_SYSTEM_PROMPT.md`: Prompt template for the OpenClaw integration layer.
- `docs/OPENCLAW_MCP_SETUP.md`: MCP server setup for direct OpenClaw tool integration.
- `docs/CHAT_CHANNEL_ROUTING.md`: Auto-routing strategy for Feishu, Telegram, and similar chat channels.
- `docs/INTEGRATION_PLAN.md`: Future capability expansion roadmap.
- yf8578
- puppy-0000
ClawOmics stands on the shoulders of giants. We gratefully acknowledge:
- Claude Scientific Skills by K-Dense-AI (170+ core research skills).
- BioClaw by Runchuan-BU (Specialized bio-logic and inspirations).
- The OpenClaw Community for the underlying agent gateway infrastructure.
Distributed under the MIT License. See LICENSE for details.
Built with 🧬 by yf8578


