Mente Agent ☤

English · 中文

Mente is a unified AI agent for coding, automation, gateway workflows, and long-term memory. It creates skills from experience, improves them during use, nudges itself to persist knowledge, searches its own past conversations, and builds a deepening model of who you are across sessions. Run it on a $5 VPS, a GPU cluster, or serverless infrastructure that costs nearly nothing when idle. It's not tied to your laptop — talk to it from Telegram while it works on a cloud VM.

This branch also completes a product-surface consolidation pass:

  • External surface is uniformly Mente across CLI, gateway progress, messaging, and user-facing replies.
  • Internal execution still runs on the Codex-backed executor — the rename is presentation-layer cleanup, not a runtime downgrade.
  • Gateway progress is visible again with Mente-facing step names while preserving detailed command/tool activity.
  • Config/admin operations are now explicit through a dedicated Mente skill for API keys, provider auth, .env, config.yaml, and gateway restart rules.

[Diagram: Mente product surface with Codex-backed core and npm bootstrap installer]

Use any model you want — Nous Portal, OpenRouter (200+ models), NVIDIA NIM (Nemotron), Xiaomi MiMo, z.ai/GLM, Kimi/Moonshot, MiniMax, Hugging Face, OpenAI, or your own endpoint. Switch with mente model — no code changes, no lock-in.

  • A real terminal interface: full TUI with multiline editing, slash-command autocomplete, conversation history, interrupt-and-redirect, and streaming tool output.
  • Lives where you do: Telegram, Discord, Slack, WhatsApp, Signal, and CLI — all from a single gateway process. Voice memo transcription, cross-platform conversation continuity.
  • A closed learning loop: agent-curated memory with periodic nudges. Autonomous skill creation after complex tasks. Skills self-improve during use. FTS5 session search with LLM summarization for cross-session recall. Honcho dialectic user modeling. Compatible with the agentskills.io open standard.
  • Scheduled automations: built-in cron scheduler with delivery to any platform. Daily reports, nightly backups, weekly audits — all in natural language, running unattended.
  • Delegates and parallelizes: spawn isolated subagents for parallel workstreams. Write Python scripts that call tools via RPC, collapsing multi-step pipelines into zero-context-cost turns.
  • Runs anywhere, not just your laptop: six terminal backends — local, Docker, SSH, Daytona, Singularity, and Modal. Daytona and Modal offer serverless persistence — your agent's environment hibernates when idle and wakes on demand, costing nearly nothing between sessions. Run it on a $5 VPS or a GPU cluster.
  • Research-ready: batch trajectory generation, Atropos RL environments, trajectory compression for training the next generation of tool-calling models.
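To make the "FTS5 session search" item above concrete, here is a minimal sketch of how full-text search over past conversations can work with SQLite's FTS5 extension. The table and column names are hypothetical for illustration — Mente's actual session store may be organized differently:

```python
import sqlite3

# Hypothetical schema for illustration; Mente's real session store may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE sessions USING fts5(session_id, content)")
conn.executemany(
    "INSERT INTO sessions VALUES (?, ?)",
    [
        ("s1", "debugged the telegram gateway restart loop"),
        ("s2", "wrote a cron job for nightly backups"),
    ],
)
# Full-text search across past conversations, best matches first
# (FTS5's hidden `rank` column orders by BM25 relevance).
rows = conn.execute(
    "SELECT session_id FROM sessions WHERE sessions MATCH ? ORDER BY rank",
    ("backups",),
).fetchall()
print(rows)  # [('s2',)]
```

An agent can run queries like this over its own history, then feed the matching sessions to an LLM for summarization — the "cross-session recall" half of the loop.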

Quick Install

Option 1: direct installer

curl -fsSL https://raw.githubusercontent.com/chemany/Mente/main/scripts/install.sh | bash

Works on Linux, macOS, WSL2, and Android via Termux. The one-click installer is release-pinned by default and can also bootstrap a matching vendored Codex runtime from local/offline assets via --runtime-artifact-manifest and --runtime-wheel.

Option 2: npm bootstrapper

npm install -g mente-agent
mente

The npm package is intentionally thin. It publishes only the launcher and installer scripts, then bootstraps the full Mente runtime on first run. By default the bootstrapper installs from the repo's main branch, and you can force a tagged release with MENTE_BOOTSTRAP_RELEASE=<tag> mente. It does not publish your local .env, auth.json, ~/.mente, ~/.hermes, sessions, logs, or other machine-local state.

At the moment, the repository is ready for npm publish but not yet live on the npm registry. Until the first public npm release is published, use Option 1 above. Once the package is published, the npm install -g mente-agent flow becomes the primary one-line install path.

Release operators can use the short npm runbook here: docs/releasing/npm.md.

Android / Termux: The tested manual path is documented in the Termux guide. On Termux, Mente installs a curated .[termux] extra because the full .[all] extra currently pulls Android-incompatible voice dependencies.
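Extras like .[termux] and .[all] are normally declared as optional-dependency groups in pyproject.toml. A hypothetical sketch of how such a split might look (package names are illustrative, not Mente's actual dependency list):

```toml
# Hypothetical pyproject.toml fragment — dependency names are illustrative.
[project.optional-dependencies]
all = ["voice-deps", "gateway-deps"]  # full set; voice deps break on Android
termux = ["gateway-deps"]             # curated, Android-compatible subset
```

The installer then chooses `pip install ".[termux]"` on Termux instead of `".[all]"`.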

Windows: Native Windows is not supported. Please install WSL2 and run the command above.

Developers / source checkouts: use ./setup-hermes.sh after cloning the repo manually. That path is for editable development, not the frozen end-user release install.

After installation:

source ~/.bashrc    # reload shell (or: source ~/.zshrc)
mente               # start chatting!

Getting Started

mente               # Interactive CLI — start a conversation
mente model         # Choose your LLM provider and model
mente tools         # Configure which tools are enabled
mente config set    # Set individual config values
mente gateway       # Start the messaging gateway (Telegram, Discord, etc.)
mente setup         # Run the full setup wizard (configures everything at once)
mente claw migrate  # Migrate settings and data from OpenClaw
mente update        # Update to the latest version
mente doctor        # Diagnose any issues

📖 Full documentation →

What This Refresh Changes

This README tracks the current Mente packaging and runtime direction:

  • One current install command for GitHub visitors: curl -fsSL https://raw.githubusercontent.com/chemany/Mente/main/scripts/install.sh | bash
  • One intended npm install command after publish: npm install -g mente-agent
  • One visible agent identity: user-facing replies and progress now present as Mente
  • Same execution depth under the hood: complex coding and tool work still route through the Codex-backed executor
  • Safer operations surface: packaging is whitelist-based, and config/admin actions now have explicit handling for API keys, provider auth, and restart boundaries

If you are evaluating Mente from GitHub, the practical model is:

  1. Install Mente with the direct installer.
  2. Launch mente.
  3. Let the bootstrap flow finish preparing the full runtime.
  4. Use Mente normally from CLI or gateway surfaces.

CLI vs Messaging Quick Reference

Mente has two entry points: start the terminal UI with mente, or run the gateway and talk to it from Telegram, Discord, Slack, WhatsApp, Signal, or Email. Once you're in a conversation, many slash commands are shared across both interfaces.

| Action | CLI | Messaging platforms |
|---|---|---|
| Start chatting | mente | Run mente gateway setup + mente gateway start, then send the bot a message |
| Start fresh conversation | /new or /reset | /new or /reset |
| Change model | /model [provider:model] | /model [provider:model] |
| Set a personality | /personality [name] | /personality [name] |
| Retry or undo the last turn | /retry, /undo | /retry, /undo |
| Compress context / check usage | /compress, /usage, /insights [--days N] | /compress, /usage, /insights [days] |
| Browse skills | /skills or /<skill-name> | /<skill-name> |
| Interrupt current work | Ctrl+C or send a new message | /stop or send a new message |
| Platform-specific status | /platforms | /status, /sethome |

For the full command lists, see the CLI guide and the Messaging Gateway guide.


Documentation

All documentation lives at chemany.github.io/Mente/docs:

| Section | What's Covered |
|---|---|
| Quickstart | Install → setup → first conversation in 2 minutes |
| CLI Usage | Commands, keybindings, personalities, sessions |
| Configuration | Config file, providers, models, all options |
| Messaging Gateway | Telegram, Discord, Slack, WhatsApp, Signal, Home Assistant |
| Security | Command approval, DM pairing, container isolation |
| Tools & Toolsets | 40+ tools, toolset system, terminal backends |
| Skills System | Procedural memory, Skills Hub, creating skills |
| Memory | Persistent memory, user profiles, best practices |
| MCP Integration | Connect any MCP server for extended capabilities |
| Cron Scheduling | Scheduled tasks with platform delivery |
| Context Files | Project context that shapes every conversation |
| Architecture | Project structure, agent loop, key classes |
| Contributing | Development setup, PR process, code style |
| CLI Reference | All commands and flags |
| Environment Variables | Complete env var reference |

Migrating from OpenClaw

If you're coming from OpenClaw, Mente can automatically import your settings, memories, skills, and API keys.

During first-time setup: The setup wizard (mente setup) automatically detects ~/.openclaw and offers to migrate before configuration begins.

Anytime after install:

mente claw migrate              # Interactive migration (full preset)
mente claw migrate --dry-run    # Preview what would be migrated
mente claw migrate --preset user-data   # Migrate without secrets
mente claw migrate --overwrite  # Overwrite existing conflicts

What gets imported:

  • SOUL.md — persona file
  • Memories — MEMORY.md and USER.md entries
  • Skills — user-created skills → ~/.hermes/skills/openclaw-imports/
  • Command allowlist — approval patterns
  • Messaging settings — platform configs, allowed users, working directory
  • API keys — allowlisted secrets (Telegram, OpenRouter, OpenAI, Anthropic, ElevenLabs)
  • TTS assets — workspace audio files
  • Workspace instructions — AGENTS.md (with --workspace-target)

See mente claw migrate --help for all options, or use the openclaw-migration skill for an interactive agent-guided migration with dry-run previews.


Contributing

We welcome contributions! See the Contributing Guide for development setup, code style, and PR process.

Quick start for contributors — clone and go with setup-hermes.sh:

git clone https://github.com/chemany/Mente.git
cd Mente
./setup-hermes.sh     # installs uv, creates venv, installs .[all], symlinks ~/.local/bin/mente
./mente               # auto-detects the venv, no need to `source` first

Manual path (equivalent to the above):

curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv venv --python 3.11
source venv/bin/activate
uv pip install -e ".[all,dev]"
scripts/run_tests.sh

RL Training (optional): The RL/Atropos integration (environments/) ships via the atroposlib and tinker dependencies pulled in by .[all,dev] — no submodule setup required.


Community


License

MIT — see LICENSE.

Built for the Mente project.
