Multi-runtime personal AI assistant with container isolation

mistakeknot/Intercom


NanoClaw

A personal Claude assistant that runs securely in containers. Lightweight and built to be understood and customized for your own needs.

nanoclaw.dev  •  中文  •  Discord

New: First AI assistant to support Agent Swarms. Spin up teams of agents that collaborate in your chat.

Why NanoClaw exists

OpenClaw is an impressive project with a great vision. But running software you don't understand, with access to your life, is a hard sell. OpenClaw has 52+ modules, 8 config management files, 45+ dependencies, and abstractions for 15 channel providers. Security is application-level (allowlists, pairing codes) rather than OS-level isolation. Everything runs in one Node process with shared memory.

NanoClaw gives you the same core functionality in a codebase you can understand in 8 minutes. One process. A handful of files. Agents run in actual Linux containers with filesystem isolation, not behind permission checks.

Quick start

git clone https://github.com/qwibitai/nanoclaw.git
cd nanoclaw
claude

Then run /setup. Claude Code handles everything: dependencies, authentication, container setup, service configuration.

Philosophy

Small enough to understand. One process, a few source files. No microservices, no message queues, no abstraction layers. Have Claude Code walk you through it.

Secure by isolation. Agents run in Linux containers (Apple Container on macOS, or Docker). They can only see what's explicitly mounted. Bash access is safe because commands run inside the container, not on your host.

Built for one user. This isn't a framework. It's working software optimized for a single person's needs. Fork it and have Claude Code make it match yours.

Customization = code changes. No configuration sprawl. Want different behavior? Modify the code. The codebase is small enough that this is safe.

AI-native. No installation wizard; Claude Code guides setup. No monitoring dashboard; ask Claude what's happening. No debugging tools; describe the problem, Claude fixes it.

Skills over features. Contributors shouldn't add features (e.g. support for Telegram) to the codebase. Instead, they contribute Claude Code skills like /add-telegram that transform your fork. You end up with clean code that does exactly what you need.

Best harness, best model. NanoClaw runs on the Claude Agent SDK, which means you're running Claude Code directly. The harness matters. A bad harness makes even smart models seem dumb; a good harness gives them superpowers. Claude Code is the best harness available.

What it supports

  • WhatsApp I/O - Message Claude from your phone
  • Isolated group context - Each group has its own CLAUDE.md memory, isolated filesystem, and runs in its own container sandbox with only that filesystem mounted
  • Main channel - Your private channel (self-chat) for admin control; every other group is completely isolated
  • Scheduled tasks - Recurring jobs that run Claude and can message you back
  • Web access - Search and fetch content
  • Container isolation - Agents sandboxed in Apple Container (macOS) or Docker (macOS/Linux)
  • Agent Swarms - Spin up teams of specialized agents that collaborate on complex tasks (first personal AI assistant to support this)
  • Optional integrations - Add Gmail (/add-gmail) and more via skills
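In practice, "sandboxed with only that filesystem mounted" means the agent container is launched with a single bind mount for the group's directory. A minimal TypeScript sketch of the idea; the image name, mount point, and prompt are illustrative assumptions, not the project's actual invocation:

```typescript
// Build the argv for launching an isolated agent container.
// Only the given group's directory is made visible inside it.
import { spawn } from "node:child_process";

function containerArgs(groupDir: string): string[] {
  return [
    "run", "--rm",
    "-v", `${groupDir}:/workspace`, // the group's files, nothing else
    "nanoclaw-agent",               // hypothetical image name
    "claude", "-p", "process the latest messages",
  ];
}

// e.g. spawn("docker", containerArgs("/path/to/groups/family-chat"));
```

Bash run by the agent executes inside this container, so it can only touch what the `-v` mount exposes.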

Usage

Talk to your assistant with the trigger word (default: @Amtiskaw):

@Amtiskaw send an overview of the sales pipeline every weekday morning at 9am (has access to my Obsidian vault folder)
@Amtiskaw review the git history for the past week each Friday and update the README if there's drift
@Amtiskaw every Monday at 8am, compile news on AI developments from Hacker News and TechCrunch and message me a briefing

From the main channel (your self-chat), you can manage groups and tasks:

@Amtiskaw list all scheduled tasks across groups
@Amtiskaw pause the Monday briefing task
@Amtiskaw join the Family Chat group

Customizing

There are no configuration files to learn. Just tell Claude Code what you want:

  • "Change the trigger word to @Bob"
  • "Remember in the future to make responses shorter and more direct"
  • "Add a custom greeting when I say good morning"
  • "Store conversation summaries weekly"

Or run /customize for guided changes.

The codebase is small enough that Claude can safely modify it.

Updating

Pull the latest NanoClaw changes into your fork:

claude

Then run /update. Claude Code fetches upstream, previews changes, merges with your customizations, runs migrations, and verifies the result.

Contributing

Don't add features. Add skills.

If you want to add Telegram support, don't create a PR that adds Telegram alongside WhatsApp. Instead, contribute a skill file (.claude/skills/add-telegram/SKILL.md) that teaches Claude Code how to transform a NanoClaw installation to use Telegram.

Users then run /add-telegram on their fork and get clean code that does exactly what they need, not a bloated system trying to support every use case.
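For illustration, a skill file is just markdown instructions that Claude Code follows when the user invokes the slash command. A hypothetical shape for .claude/skills/add-telegram/SKILL.md (the steps and layout here are assumptions, not a prescribed format):

```markdown
# add-telegram

Transform this NanoClaw fork to use Telegram instead of WhatsApp.

## Steps

1. Replace src/channels/whatsapp.ts with a Telegram channel module.
2. Update src/router.ts to route outbound messages through the new channel.
3. Migrate trigger-word handling and group mapping.
4. Verify a round-trip message before finishing.
```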

RFS (Request for skills)

Skills the project would benefit from:

Communication Channels

  • /add-slack - Add Slack

Platform Support

  • /setup-windows - Windows via WSL2 + Docker

Session Management

  • /add-clear - Add a /clear command that compacts the conversation (summarizes context while preserving critical information in the same session). Requires figuring out how to trigger compaction programmatically via the Claude Agent SDK.

Requirements

Architecture

WhatsApp (baileys) --> SQLite --> Polling loop --> Container (Claude Agent SDK) --> Response

Single Node.js process. Agents execute in isolated Linux containers with mounted directories. Per-group message queue with concurrency control. IPC via filesystem.
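The per-group queue with a global concurrency limit can be pictured in a few lines of TypeScript. This is a sketch of the concurrency model described above (FIFO within a group, at most one in-flight task per group, a global cap across groups), not the actual src/group-queue.ts:

```typescript
type Task = () => Promise<void>;

class GroupQueue {
  private queues = new Map<string, Task[]>();
  private running = new Set<string>(); // groups with an in-flight task
  private active = 0;                  // tasks in flight across all groups

  constructor(private maxConcurrent: number) {}

  enqueue(groupId: string, task: Task): void {
    const q = this.queues.get(groupId) ?? [];
    q.push(task);
    this.queues.set(groupId, q);
    this.drain();
  }

  // Start queued tasks while respecting both limits.
  private drain(): void {
    for (const [groupId, q] of this.queues) {
      if (this.active >= this.maxConcurrent) break;
      if (this.running.has(groupId) || q.length === 0) continue;
      const task = q.shift()!;
      this.running.add(groupId);
      this.active++;
      task().finally(() => {
        this.running.delete(groupId);
        this.active--;
        this.drain(); // a slot freed up; pull the next task
      });
    }
  }
}
```

One instance of this per process is enough: message handling stays ordered per chat while unrelated groups proceed in parallel.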

Key files:

  • src/index.ts - Orchestrator: state, message loop, agent invocation
  • src/channels/whatsapp.ts - WhatsApp connection, auth, send/receive
  • src/ipc.ts - IPC watcher and task processing
  • src/router.ts - Message formatting and outbound routing
  • src/group-queue.ts - Per-group queue with global concurrency limit
  • src/container-runner.ts - Spawns streaming agent containers
  • src/task-scheduler.ts - Runs scheduled tasks
  • src/db.ts - SQLite operations (messages, groups, sessions, state)
  • groups/*/CLAUDE.md - Per-group memory
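The filesystem IPC listed above can be pictured like this: an agent inside the container writes a JSON task file into a shared mounted directory, and the host watches that directory and consumes each file once. A hedged TypeScript sketch; the path and field names are assumptions, not the actual src/ipc.ts protocol:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

const IPC_DIR = "/tmp/nanoclaw-ipc"; // illustrative shared mount point

// Parse and delete one task file; null if it can't be read yet.
function consumeTask(file: string): { type: string } | null {
  let task: { type: string };
  try {
    task = JSON.parse(fs.readFileSync(file, "utf8"));
  } catch {
    return null; // missing or partially written; a later event retries
  }
  fs.unlinkSync(file); // consume exactly once
  return task;
}

function startWatcher(onTask: (t: { type: string }) => void): fs.FSWatcher {
  fs.mkdirSync(IPC_DIR, { recursive: true });
  return fs.watch(IPC_DIR, (_event, filename) => {
    if (!filename || !filename.endsWith(".json")) return;
    const task = consumeTask(path.join(IPC_DIR, filename));
    if (task) onTask(task);
  });
}
```

Because the channel is just files on a mounted directory, the sandboxed agent needs no network or socket access to the host to request work.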

IronClaw migration foundation (experimental)

Intercom now includes a Rust workspace under rust/ that bootstraps the IronClaw replatform effort while keeping the current Node runtime as default.

Available scripts:

npm run rust:check
npm run rust:build
npm run rust:build:release
npm run rust:test

The Rust daemon entrypoint is rust/intercomd. See docs/migrations/rust-foundation.md.

Intercomd handles orchestration by default (orchestrator.enabled=true in config/intercom.toml). Telegram ingress/egress always routes through intercomd, with automatic fallback to the Node runtime if the bridge fails.
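For orientation, the orchestrator toggle in config/intercom.toml would look something like this; only orchestrator.enabled is documented here, so treat the fragment as a sketch rather than the full file:

```toml
[orchestrator]
enabled = true  # route orchestration through the Rust intercomd daemon
```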

FAQ

Why WhatsApp and not Telegram/Signal/etc?

Because the author uses WhatsApp. Fork it and run a skill to change it. That's the whole point.

Why Docker?

Docker provides cross-platform support (macOS and Linux) and a mature ecosystem. On macOS, you can optionally switch to Apple Container via /convert-to-apple-container for a lighter-weight native runtime.

Can I run this on Linux?

Yes. Docker is the default runtime and works on both macOS and Linux. Just run /setup.

Is this secure?

Agents run in containers, not behind application-level permission checks. They can only access explicitly mounted directories. You should still review what you're running, but the codebase is small enough that you actually can. See docs/SECURITY.md for the full security model.

Why no configuration files?

Configuration sprawl is the enemy. Every user should customize the code so that it matches exactly what they want, rather than configuring a generic system. If you like having config files, tell Claude to add them.

How do I debug issues?

Ask Claude Code. "Why isn't the scheduler running?" "What's in the recent logs?" "Why did this message not get a response?" That's the AI-native approach.

Why isn't the setup working for me?

Hard to say without seeing the error. Run claude, then run /debug. If Claude finds an issue that is likely affecting other users, open a PR to modify the setup SKILL.md.

What changes will be accepted into the codebase?

Security fixes, bug fixes, and clear improvements to the base configuration. That's it.

Everything else (new capabilities, OS compatibility, hardware support, enhancements) should be contributed as skills.

This keeps the base system minimal and lets every user customize their installation without inheriting features they don't want.

Community

Questions? Ideas? Join the Discord.

License

MIT
