Status (v2.0.0): All v2 roadmap items delivered — tool execution layer, browser automation, Pack & Go USB deployment, TUI, vector memory, and more. Changelog: see CHANGELOG.md. Roadmap: see docs/ROADMAP.md.
HASHI is a privacy-first, compliant alternative to OpenClaw designed for a more secure agentic experience. It prioritizes your security by never requiring or storing your Claude, Codex, or Gemini OAuth authentication tokens, ensuring your setup remains fully compliant with current Terms of Service.
Beyond safety, HASHI introduces practical features built for real-world workflows:
- Context Recovery: Use the /handoff command to instantly restore project context when work is lost during conversation compression.
- Multi-Agent Connectivity: Connect and switch between multiple specialized agents through a single WhatsApp account.
HASHI is built to evolve. We are committed to adding the tools and functions the community needs to make AI collaboration safer and more productive.
HASHI means "bridge" in Japanese (橋).
The kanji 橋 combines:
- 木 (tree/wood) - the natural foundation
- 喬 (tall) - reaching upward, connecting heights
Project Philosophy:
「橋」は「知」を繋ぎ、「知」は未来を拓く。 The Bridge connects Intellect; Intellect opens the future.
Just as bridges connect distant shores, HASHI connects:
- Human creativity ↔ AI capabilities
- Multiple AI systems ↔ Unified interface
- Present workflows ↔ Future possibilities
HASHI was conceived and designed from scratch by Barry Li (https://barryli.phd), a PhD candidate at the University of Newcastle, Australia.
Coming from a non-technical background with no prior IT experience, Barry built this project through "Vibe-Coding" — every line of code was generated by AI (Claude, Gemini, and Codex) and cross-reviewed by AI. Barry's role was that of System Architect and Director, providing the vision, operational judgment, and iterative direction. This marks Barry's first publishable AI project.
This project would not exist without OpenClaw by Peter Steinberg and the OpenClaw contributors. OpenClaw provided both a cutting-edge AI agent framework and the inspirational ideas that shaped this system. Deep thanks to Peter and all OpenClaw contributors.
Throughout the codebase, you'll see references to bridge-u-f - this was the internal development codename used during the project's evolution from OpenClaw.
Why "bridge"? The core metaphor: HASHI connects human intent with AI intelligence, serving as a bridge between natural language requests and computational power.
Why "u-f"?
- u = universal (multi-backend, multi-agent)
- f = flexible (adaptive, modular, extensible)
HASHI is a universal multi-agent orchestration platform that runs entirely locally. It routes user requests to AI backends (Claude CLI, Codex CLI, Gemini CLI, or OpenRouter API) through a flexible adapter system, eliminating the need to store sensitive OAuth tokens.
Core Components:
- Onboarding - Multi-language guided setup to create your first agent
- Workbench - Local web UI (React + Vite) for multi-agent conversations
- Orchestrator - Central runtime managing agents, memory, skills, and scheduling
- Transports - Connect via Telegram, WhatsApp, or Workbench
- Skills - Modular capabilities (prompts, toggles, actions) that extend agents
- Jobs - Automated scheduling (heartbeats + cron) for periodic agent tasks
What makes HASHI different:
- No Token Storage - Uses CLI backends (gemini, claude, codex) with local authentication, not stored tokens
- Multi-Agent, Single Interface - Chat with multiple specialized agents through one WhatsApp or Telegram account
- Context Recovery - /handoff command instantly restores project context after compression
- Tool Execution Layer - OpenRouter agents can take real local actions: run bash commands, read/write files, search the web, call external APIs, and more
- Flex/Fixed Mode Switching - Agents can switch between CLI backends and OpenRouter mid-conversation via /backend
- TUI Interface - tui.py provides a split-panel terminal UI for log monitoring and agent chat without a browser
- Pack & Go - Build a self-contained USB for Windows or macOS; recipients just plug in and double-click, no setup required
- Vibe-Coded - Every line written by AI, reviewed by AI, directed by human vision
- v2.0.0 — All v2 roadmap outcomes delivered. Tool execution layer (11 tools), browser automation (Playwright), Pack & Go USB deployment (Windows + macOS), TUI, vector memory, /dream skill, /memory command. See CHANGELOG.md.
See INSTALL.md for detailed installation instructions.
Run HASHI on any Windows or macOS machine straight from a USB drive — no Python installation, no pip install, nothing to set up on the target machine.
Windows:

```
# On your machine (with internet):
windows\prepare_usb.bat       # builds D:\HASHI9 with embedded Python + all deps

# On any Windows PC:
HASHI9\windows\start_tui.bat  # double-click to launch
```
macOS:

```
# On your Mac (with internet):
bash mac/prepare_usb.sh       # builds USB with portable Python + all deps

# On any Mac:
# Double-click HASHI9/mac/start_tui.command in Finder
```

Note (macOS): First run may trigger Gatekeeper. Right-click the .command file → Open → Open anyway.
```
# Clone the repository
git clone https://github.com/Bazza1982/HASHI.git
cd HASHI

# Install Python dependencies
pip install -r requirements.txt

# Run onboarding (creates your first agent)
python onboarding/onboarding_main.py

# Start HASHI
./bin/bridge-u.sh   # macOS / Linux
# or
bin\bridge-u.bat    # Windows
# or
python main.py      # Any platform
```

- Python 3.10+ (not required for Pack & Go USB deployments)
- At least one AI backend:
  - [Gemini CLI] (gemini)
  - [Claude Code] (claude)
  - [Codex CLI] (codex)
  - Or an OpenRouter API key
- Optional: Node.js 18+ (for Workbench UI)
HASHI uses a Universal Orchestrator pattern where a single Python process manages multiple concurrent agent runtimes:
```
┌─────────────────────────────────────────────────────────────┐
│                   Universal Orchestrator                    │
│                                                             │
│  ┌────────────────┐  ┌────────────────┐  ┌────────────────┐ │
│  │ Agent Runtime  │  │ Agent Runtime  │  │ Agent Runtime  │ │
│  │   (Hashiko)    │  │  (Assistant)   │  │    (Coder)     │ │
│  └────────────────┘  └────────────────┘  └────────────────┘ │
│          ▲                   ▲                   ▲          │
│          └───────────────────┴───────────────────┘          │
│                              │                              │
│  ┌───────────────────────────────────────────────────────┐  │
│  │               Flexible Backend Manager                │  │
│  │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐   │  │
│  │ │ Gemini   │ │ Claude   │ │  Codex   │ │OpenRouter│   │  │
│  │ │ Adapter  │ │ Adapter  │ │ Adapter  │ │ Adapter  │   │  │
│  │ └──────────┘ └──────────┘ └──────────┘ └──────────┘   │  │
│  └───────────────────────────────────────────────────────┘  │
│                              ▲                              │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                    Transport Layer                    │  │
│  │         [Telegram] [WhatsApp] [Workbench API]         │  │
│  └───────────────────────────────────────────────────────┘  │
│                                                             │
│   ┌────────────┐  ┌────────────┐  ┌────────────────────┐    │
│   │   Skill    │  │ Scheduler  │  │   Memory System    │    │
│   │  Manager   │  │(Jobs/Cron) │  │ (Vector + Recall)  │    │
│   └────────────┘  └────────────┘  └────────────────────┘    │
└─────────────────────────────────────────────────────────────┘
```
Key Design Principles:
- Backend Agnostic - Agents work with any supported backend; you can switch mid-conversation
- Shared Sessions - Telegram and Workbench share the same agent queues and memory
- Explicit over Automatic - Skills, jobs, and features are user-activated, never magic
- Single Instance - File-based locking prevents multiple HASHI processes from conflicting
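The single-instance principle above can be illustrated with a file-based lock. This is a minimal sketch, not HASHI's actual locking code — the lock path and the stale-lock recovery policy are assumptions:

```python
import os

LOCK_PATH = "state/bridge.lock"  # hypothetical path; HASHI's real lock file may differ

def _pid_alive(pid: int) -> bool:
    """Check whether a PID refers to a live process (signal 0 probe)."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # exists, owned by another user
    return True

def acquire_single_instance_lock(lock_path: str = LOCK_PATH) -> bool:
    """Try to become the only running instance.

    Creates the lock file atomically (O_CREAT | O_EXCL) and records our PID.
    If the file already exists, the lock is honored only while the recorded
    PID is still alive; otherwise it is treated as stale and reclaimed.
    """
    if os.path.dirname(lock_path):
        os.makedirs(os.path.dirname(lock_path), exist_ok=True)
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        try:
            old_pid = int(open(lock_path).read().strip())
        except ValueError:
            old_pid = None  # corrupt lock file → treat as stale
        if old_pid is not None and _pid_alive(old_pid):
            return False  # a live instance holds the lock
        os.remove(lock_path)
        return acquire_single_instance_lock(lock_path)
    with os.fdopen(fd, "w") as f:
        f.write(str(os.getpid()))
    return True
```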
hashi/
├── main.py # Orchestrator entry point
├── agents.json # Agent definitions (name, backend, system prompt)
├── secrets.json # API keys (OpenRouter, etc.)
├── tasks.json # Heartbeat + cron job definitions
├── onboarding/ # Multi-language guided setup
│ ├── onboarding_main.py
│ └── languages/ # 9 languages (en, ja, zh-Hans, zh-Hant, ko, de, fr, ru, ar)
├── orchestrator/ # Core orchestration logic
│ ├── agent_runtime.py # Individual agent runtime (fixed backend)
│ ├── flexible_agent_runtime.py # Flex agent (switchable backend)
│ ├── scheduler.py # Heartbeat + cron job runner
│ ├── skill_manager.py # Skills system
│ ├── bridge_memory.py # Context assembly + memory retrieval
│ ├── memory_index.py # Vector similarity search
│ ├── workbench_api.py # Workbench REST API server
│ └── api_gateway.py # External API gateway (optional)
├── adapters/ # Backend adapters
│ ├── base.py # Abstract base adapter
│ ├── gemini_cli.py
│ ├── claude_cli.py
│ ├── codex_cli.py
│ └── openrouter_api.py
├── transports/ # Communication channels
│ ├── whatsapp.py # WhatsApp transport (whatsapp-web.js)
│ └── chat_router.py # Message routing logic
├── skills/ # Skill library
│ ├── README.md
│ └── [skill_name]/
│ ├── skill.md # Skill definition
│ └── run.py # Action script (optional)
├── workbench/ # Local web UI
│ ├── server/ # Node.js API server
│ └── src/ # React frontend
├── memory/ # Agent memory files
├── state/ # Runtime state
├── logs/ # Log files
└── workspaces/ # Agent working directories
The onboarding program (onboarding/onboarding_main.py) provides a guided, multi-language setup experience:
Features:
- 9 Languages - English, Japanese, Simplified Chinese, Traditional Chinese, Korean, German, French, Russian, Arabic
- Environment Detection - Automatically detects installed CLI backends (Gemini, Claude, Codex)
- Fallback to OpenRouter - If no CLI is detected, prompts for OpenRouter API key
- Workbench Auto-Launch - Optionally opens Workbench UI after setup
Onboarding Flow:
- Language selection
- Environment audit (detect Gemini/Claude/Codex CLI)
- If no CLI found → prompt for OpenRouter API key
- Display AI Ethics & Human Well-being Statement
- Create first agent in agents.json
- Create secrets.json with API keys (if needed)
- Launch Workbench

Files Created:

- agents.json - First agent definition
- secrets.json - API keys (OpenRouter, Telegram, etc.)
- .hashi_onboarding_complete - Flag file to prevent re-onboarding
The Workbench is a local web interface for multi-agent conversations:
Architecture:
- Frontend - React + Vite, runs on http://localhost:5173
- Backend - Node.js Express server, runs on http://localhost:3003
- Bridge API - Connects to orchestrator at http://127.0.0.1:18800
Features:
- Multi-agent chat interface with agent switching
- Real-time transcript polling
- File and media upload support
- System status display
- Shared sessions with Telegram/WhatsApp
Start/Stop:
```
./workbench.bat          # Start workbench (Windows)
./workbench-ctl.sh start # Start workbench (Linux)
./stop_workbench.bat     # Stop workbench
```

How It Works:

- Workbench frontend polls orchestrator /api/agents for agent list
- User sends message through Workbench → POST /api/agents/{name}/send
- Orchestrator queues message in agent runtime (same queue as Telegram)
- Backend processes message, streams response
- Workbench polls /api/agents/{name}/transcript for updates
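The transcript-polling step above can be sketched as follows. The endpoint path comes from the list above; the response shape (a JSON list of turns) and the pluggable `fetch` function are assumptions made so the sketch runs without a live orchestrator:

```python
def poll_transcript(fetch, agent: str, seen: int) -> tuple[list, int]:
    """One Workbench polling step against /api/agents/{name}/transcript.

    `fetch(path)` returns the decoded JSON list of transcript turns;
    injecting it keeps the sketch testable without a running orchestrator.
    Returns (new turns to render, new high-water mark).
    """
    turns = fetch(f"/api/agents/{agent}/transcript")
    new = turns[seen:]  # only the turns we have not rendered yet
    return new, len(turns)
```

A real client would implement `fetch` with an HTTP GET to `http://127.0.0.1:18800` and call `poll_transcript` on a short timer.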
HASHI supports multiple communication channels through a transport layer:
- Default transport, enabled by default
- Requires telegram_bot_token in secrets.json
- Commands: /new, /stop, /reboot, /handoff, /skill, etc.
- Supports inline keyboards, file uploads, voice messages
Setup:
- Create bot via @BotFather
- Add telegram_bot_token to secrets.json
- Add your Telegram user ID to agent's authorized_id in agents.json
- Uses whatsapp-web.js library
- Requires QR code scan on first launch
- Multi-Agent Support - Route messages to different agents using /agent <name> prefix
Setup:
⚠️ Important: Do NOT run link_whatsapp.py directly in a terminal and wait for it — it opens an interactive pairing session that never exits on its own when run that way. Use the relay method below.
Recommended method (agent-relay via Telegram):
- Ask your HASHI agent (e.g. Hashiko) to set up WhatsApp. She will handle it.
- The agent runs scripts/run_whatsapp_link.sh in the background, which starts link_whatsapp.py with --qr-image-file /tmp/wa_link_qr.png --completion-file /tmp/wa_link_result.json.
- When the QR PNG appears, the agent automatically sends it to you as a Telegram image.
- Scan the QR code from Telegram using your WhatsApp mobile app.
- The agent polls /tmp/wa_link_result.json and confirms when linking is complete.
- Session saved in wa_session/ — subsequent starts do not require a new QR scan.
Manual method (if running without an agent):
- Open a terminal and run: bash scripts/run_whatsapp_link.sh
- Wait a few seconds, then open /tmp/wa_link_qr.png in an image viewer.
- Scan the QR code with WhatsApp mobile app.
- Check /tmp/wa_link_result.json for {"status": "linked"} to confirm success.
- Configure routing in agent's whatsapp_routing settings.
WhatsApp Commands:
- /agent hashiko - Switch to "hashiko" agent
- /agents - List available agents
- Normal messages → routed to current active agent
- Local web UI (see Workbench section above)
- No authentication required (localhost only)
- Shared sessions with Telegram/WhatsApp
HASHI agents respond to both natural language and structured commands:
| Command | Description |
|---|---|
| /new | Start a fresh session (clears continuity) |
| /stop | Cancel current processing |
| /reboot [min\|max\|#] | Hot restart agents — shows button menu when called without args |
| /status [full] | Show agent status, backend info |
| /handoff | Restore continuity from recent transcript |
| /skill | Browse and run skills (inline keyboard) |
| /help | Show available commands |
| Command | Description |
|---|---|
| /mode [fixed\|flex] | Switch between fixed CLI session and flex multi-backend mode (button menu) |
| /backend [engine] | Switch backend — shows inline button picker (flex mode only) |
| /model | View/change active model (inline keyboard) |
| /effort | View/change effort level — Claude/Codex only (inline keyboard) |
| /fyi [prompt] | Refresh bridge environment awareness |
| /retry | Resend last response or re-run last prompt (button menu) |
| /park [chat\|delete <n>] | Save or list parked topics |
| /load <n> | Restore a parked topic |
| /sys <n> [on\|off\|save\|output] | Manage system prompt slots; /sys output <n> returns raw text |
| /clear | Clear media directory and reset session state |
| Command | Description |
|---|---|
| /verbose [on\|off] | Toggle detailed long-task status (button menu) |
| /think [on\|off] | Toggle thinking trace display (button menu) |
| /active [on\|off\|minutes] | Toggle proactive heartbeat (button menu) |
| /whisper [small\|medium\|large] | Set local voice transcription model (button menu) |
| /voice [on\|off\|menu\|use <alias>] | Control bridge voice replies (inline keyboard) |
| /credit | Show API credit/usage (OpenRouter only) |
| Command | Description |
|---|---|
| /start [all\|<name>] | Start a stopped agent — button menu or all to start all |
| /terminate | Shut down this agent gracefully |
| Command | Description |
|---|---|
| /wa_on | Start WhatsApp transport |
| /wa_off | Stop WhatsApp transport |
| /wa_send <+number> <msg> | Send a WhatsApp message |
| Command | Description |
|---|---|
| /jobs | Show cron and heartbeat jobs with action buttons |
| /skill cron | Full cron job management |
| /skill heartbeat | Full heartbeat job management |
Skills are modular capabilities that extend agent functionality. Every skill is defined by a skill.md file with frontmatter + instructions.
| Type | Behavior | Example |
|---|---|---|
| Action | One-shot execution, runs a script | restart_pc, system_status |
| Prompt | Routes user input to a backend/tool | codex, gemini, claude |
| Toggle | Injects instructions while active | TTS, carbon-accounting, academic-writing |
Each skill lives in skills/<skill_id>/:
```
skills/
  carbon-accounting/
    skill.md                 # Frontmatter + instructions
    standards/
      ghg-protocol-summary.md
      iso14064-notes.md
```
skill.md Example:

```
---
id: carbon-accounting
name: Carbon Accounting Expert
type: toggle
description: Activate deep carbon accounting expertise (GHG Protocol, ISO 14064)
---
You now have deep expertise in carbon accounting and GHG reporting.

## Standards
- GHG Protocol Corporate Standard
- ISO 14064-1:2018
- TCFD for climate-related financial disclosure

## Reference files in this skill folder
- `standards/ghg-protocol-summary.md`
- `standards/iso14064-notes.md`
```

```
/skill                  → Show skill grid (Telegram inline keyboard)
/skill help             → List all skills
/skill <name>           → Show skill info
/skill <name> <prompt>  → Run prompt skill with input
/skill <name> on        → Enable toggle skill
/skill <name> off       → Disable toggle skill
```
Toggle Skills in Action:
When a toggle skill is on, its skill.md content is injected into the prompt under the --- ACTIVE SKILLS --- section. This persists across messages until the skill is explicitly turned off.
Action Skills:
Action skills execute a script (run.py or run.sh) and return the output.
Prompt Skills:
Prompt skills route user input to a specific backend or workflow (e.g., codex routes to Codex CLI).
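The skill.md format above (frontmatter + instructions) can be parsed with a few lines of Python. This is a minimal sketch assuming flat `key: value` frontmatter — HASHI's real loader (orchestrator/skill_manager.py) may be richer:

```python
def parse_skill_md(text: str) -> tuple[dict, str]:
    """Split a skill.md file into (frontmatter dict, instruction body).

    Frontmatter is the block between the first two `---` markers; the
    remainder is the instruction text injected for toggle skills.
    """
    meta: dict = {}
    body = text
    if text.startswith("---"):
        _, fm, body = text.split("---", 2)
        for line in fm.strip().splitlines():
            key, _, value = line.partition(":")  # split on first colon only
            meta[key.strip()] = value.strip()
    return meta, body.strip()
```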
HASHI includes a built-in task scheduler for automated agent actions:
Heartbeats are periodic checks that run at fixed intervals:
```
{
  "id": "email-check",
  "enabled": true,
  "agent": "hashiko",
  "interval_seconds": 1800,
  "prompt": "Check my email for urgent messages and summarize",
  "action": "enqueue_prompt"
}
```

Common Use Cases:
- Email monitoring
- Calendar reminders
- System health checks
- Market/news updates
Cron jobs run at specific times (HH:MM format):
```
{
  "id": "morning-briefing",
  "enabled": true,
  "agent": "hashiko",
  "time": "08:00",
  "prompt": "Provide morning briefing: weather, calendar, top news",
  "action": "enqueue_prompt"
}
```

Common Use Cases:
- Daily reports
- Scheduled backups
- Time-sensitive reminders
Jobs can invoke skills instead of prompts:
```
{
  "id": "daily-backup",
  "enabled": true,
  "agent": "coder",
  "time": "03:00",
  "action": "skill:backup_workspace",
  "args": ""
}
```

Via Telegram:
```
/jobs            → Show all jobs with action buttons
/skill cron      → Cron job management (list, enable/disable, run now)
/skill heartbeat → Heartbeat job management (list, enable/disable, run now)
```
Via tasks.json:
```
{
  "heartbeats": [
    { "id": "...", "enabled": true, "agent": "...", ... }
  ],
  "crons": [
    { "id": "...", "enabled": true, "agent": "...", ... }
  ]
}
```

HASHI's adapter system provides a unified interface to multiple AI backends:
| Backend | Engine ID | Requirements |
|---|---|---|
| Gemini CLI | gemini-cli | gemini CLI installed and authenticated |
| Claude CLI | claude-cli | claude CLI installed and authenticated |
| Codex CLI | codex-cli | codex CLI installed and authenticated |
| OpenRouter API | openrouter-api | API key in secrets.json |
All adapters inherit from BaseBackendAdapter (adapters/base.py):
```
class BaseBackendAdapter:
    async def send_request(self, messages, tools, thinking, stream_callback):
        """Send request to backend, stream response"""

    async def cancel_request(self):
        """Cancel in-flight request"""
```

Key Features:
- Streaming support (token-by-token)
- Tool use (file operations, web search, etc.)
- Thinking mode (extended reasoning)
- Graceful cancellation
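The streaming and cancellation contract can be illustrated with a toy adapter. The base-class signature follows the doc; `EchoAdapter` itself is purely illustrative — real adapters spawn a CLI subprocess or call the OpenRouter HTTP API:

```python
import asyncio

class BaseBackendAdapter:
    """Abstract interface restated from adapters/base.py (per the doc)."""
    async def send_request(self, messages, tools, thinking, stream_callback): ...
    async def cancel_request(self): ...

class EchoAdapter(BaseBackendAdapter):
    """Toy adapter: emits tokens through stream_callback, returns full text,
    and honors cancellation between tokens."""
    def __init__(self):
        self._cancelled = False

    async def send_request(self, messages, tools=None, thinking=False,
                           stream_callback=None):
        self._cancelled = False
        reply = "echo: " + messages[-1]["content"]
        out = []
        for token in reply.split(" "):
            if self._cancelled:
                break  # graceful cancellation mid-stream
            if stream_callback:
                stream_callback(token)
            out.append(token)
            await asyncio.sleep(0)  # yield to the event loop between tokens
        return " ".join(out)

    async def cancel_request(self):
        self._cancelled = True
```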
CLI backends spawn subprocess and communicate via stdin/stdout:
- No OAuth tokens stored
- Uses local CLI authentication (Google account, Anthropic API, OpenAI API)
- Full tool support
- Conversation memory managed by CLI
OpenRouter adapter uses HTTP API:

- Requires openrouter_key (or <agent_name>_openrouter_key) in secrets.json
- Supports multiple models via model parameter
- Stateless (HASHI manages conversation history)
- Tool execution layer — enable in agents.json with a tools key:
```
{
  "engine": "openrouter-api",
  "model": "anthropic/claude-sonnet-4.6",
  "tools": {
    "allowed": ["bash", "file_read", "file_write", "file_list", "apply_patch",
                "web_search", "web_fetch", "http_request",
                "process_list", "process_kill", "telegram_send"],
    "max_loops": 15
  }
}
```

Available tools:
| Tool | Description |
|---|---|
| bash | Run shell commands (sandboxed to workspace, timeout + blocklist controls) |
| file_read | Read files with offset/limit pagination |
| file_write | Write/create files (size-capped, parent dirs auto-created) |
| file_list | List directories with optional glob filter and recursive mode |
| apply_patch | Apply unified diff patches to files (dry-run validated before apply) |
| process_list | List running processes filtered by name (requires psutil) |
| process_kill | Send SIGTERM or SIGKILL to a process by PID |
| telegram_send | Send Telegram messages by chat_id or HASHI agent_id |
| http_request | Arbitrary HTTP requests (GET/POST/PUT/DELETE/PATCH) for external APIs |
| web_search | Brave Search API (requires brave_api_key in secrets.json) |
| web_fetch | Fetch any URL and return content as Markdown |
If the tools key is absent from an agent's config, tools stay disabled and behavior is fully backward compatible.
HASHI includes a vector-based memory system for long-term context retrieval:
| Type | Storage | Lifetime |
|---|---|---|
| Short-term | In-process (agent runtime) | Current session |
| Transcript | memory/<agent>_transcript.json | Permanent, daily rollover |
| Long-term | memory/<agent>_memory.json | User-controlled |
| Vector Index | memory/<agent>_memory_index.json | Auto-synced with long-term |
- Bridge stores memory automatically — relevant turns are embedded and saved to bridge_memory.sqlite in the agent's workspace.
- Memory is vectorized:
  - Text embedded using BGE-M3 (local ONNX) when available, falling back to hash-based similarity.
  - Vector + text stored in bridge_memory.sqlite (memory_vec / turns_vec tables).
- Context assembly retrieves relevant memories:
  - Current user message is vectorized at request time.
  - Top-K similar memories retrieved via cosine similarity (sqlite-vec).
  - Injected into prompt under --- RELEVANT LONG-TERM MEMORY ---.
- Memory skill (/skill recall) — toggle bridge auto-recall: if ON, recent continuity is restored once after an unexpected restart (not after /new).

Memory is bridge-managed. There are no /remember, /recall, or /forget slash commands — memory is automatic.
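The top-K cosine-similarity retrieval step can be illustrated in pure Python. In HASHI this work is delegated to sqlite-vec over real BGE-M3 embeddings; the tiny 2-D vectors below are only for illustration:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: list[float], memories: list[tuple[str, list[float]]], k: int = 3):
    """Return the k memory texts most similar to the query embedding."""
    ranked = sorted(memories, key=lambda m: cosine(query, m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```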
Every agent request includes:
```
--- SYSTEM IDENTITY ---
{agent.md contents}

--- ACTIVE SKILLS ---
{active toggle skills}

--- RELEVANT LONG-TERM MEMORY ---
{top 3 retrieved memories}

--- RECENT CONTEXT ---
{last 10 conversation turns}

--- NEW REQUEST ---
{user message}
```
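The layered layout above can be sketched as a small assembly function. This is a minimal illustration; HASHI's real assembly lives in orchestrator/bridge_memory.py and the empty-section handling here is an assumption:

```python
def assemble_prompt(identity: str, skills: list[str], memories: list[str],
                    recent: list[str], request: str) -> str:
    """Build the layered prompt in the section order shown above.

    Caps memories at 3 and recent turns at 10, matching the layout;
    empty sections are dropped rather than emitted as bare headers.
    """
    sections = [
        ("--- SYSTEM IDENTITY ---", identity),
        ("--- ACTIVE SKILLS ---", "\n".join(skills)),
        ("--- RELEVANT LONG-TERM MEMORY ---", "\n".join(memories[:3])),
        ("--- RECENT CONTEXT ---", "\n".join(recent[-10:])),
        ("--- NEW REQUEST ---", request),
    ]
    return "\n\n".join(f"{header}\n{body}" for header, body in sections if body)
```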
The /handoff command generates a context restoration prompt for recovering work after conversation compression or session loss:
Use Cases:
- Agent conversation hit token limit and compressed
- Switching to a new agent mid-project
- Resuming work after system restart
How It Works:
```
User: /handoff
Agent: [Generates comprehensive project summary]

--- HANDOFF CONTEXT ---
Project: Building a web scraper for research papers
Status: Parser module complete, need to add citation extraction
Files: src/parser.py (500 lines), tests/ (3 files)
Next: Implement citation regex patterns
Dependencies: beautifulsoup4, requests
---
```

User copies this output and sends to a new agent:

```
User: [Paste handoff context]
Continue building the citation extractor...
```
New agent picks up exactly where the previous left off.
Defines your agents:
```
{
  "global": {
    "authorized_id": 123456789,
    "whatsapp": {
      "enabled": false,
      "allowed_numbers": [],
      "default_agent": "hashiko"
    }
  },
  "agents": [
    {
      "name": "hashiko",
      "display_name": "Hashiko",
      "engine": "gemini-cli",
      "model": "gemini-3-flash",
      "system_md": "workspaces/hashiko/agent.md",
      "workspace_dir": "workspaces/hashiko",
      "is_active": true
    }
  ]
}
```

Stores API keys and tokens:
```
{
  "hashiko": "your_telegram_bot_token",
  "openrouter_key": "sk-or-v1-...",
  "authorized_telegram_id": 123456789
}
```

Defines scheduled jobs:
```
{
  "heartbeats": [
    {
      "id": "check-email",
      "enabled": true,
      "agent": "hashiko",
      "interval_seconds": 1800,
      "prompt": "Check email for urgent messages"
    }
  ],
  "crons": [
    {
      "id": "morning-brief",
      "enabled": true,
      "agent": "hashiko",
      "time": "08:00",
      "prompt": "Morning briefing: weather, calendar, news"
    }
  ]
}
```

Connect multiple agents to one WhatsApp account:
- Configure WhatsApp routing in each agent's config:
```
{
  "name": "coder",
  "whatsapp_enabled": true,
  "whatsapp_routing": {
    "keywords": ["code", "debug", "fix"],
    "priority": 10
  }
}
```

- Use /agent <name> to manually switch agents
- Messages auto-route based on keywords and priority
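The keyword/priority auto-routing can be sketched as follows. The real router lives in transports/chat_router.py and may differ; the tie-breaking and default-agent fallback here are assumptions:

```python
def route_message(text: str, agents: list[dict], default: str = "hashiko") -> str:
    """Pick the agent whose routing keywords match; highest priority wins.

    Falls back to the configured default agent when no keywords match.
    """
    lowered = text.lower()
    best_name, best_priority = default, -1
    for agent in agents:
        routing = agent.get("whatsapp_routing", {})
        if any(kw in lowered for kw in routing.get("keywords", [])):
            prio = routing.get("priority", 0)
            if prio > best_priority:
                best_name, best_priority = agent["name"], prio
    return best_name
```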
Agents can switch backends mid-conversation:
```
User: Switch to Codex for the next task
Agent: [Switches to codex-cli backend]
```
Configured in agent as:

```
{
  "name": "flex-agent",
  "engine": "flexible",
  "default_backend": "gemini-cli",
  "fallback_backends": ["claude-cli", "codex-cli"]
}
```

Enable external API access:
```
./bridge-u.sh --api-gateway
```

Exposes REST API on http://localhost:18801:
```
POST /api/chat
{
  "agent": "hashiko",
  "message": "Hello",
  "user_id": "external_user_123"
}
```
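A client can build that request with the standard library. The endpoint, port, and payload come from the doc; whether extra fields are accepted is unknown, so this sketch sends exactly what is shown. Only call this against localhost — the gateway has no authentication:

```python
import json
import urllib.request

GATEWAY = "http://localhost:18801"  # API Gateway port from the doc

def build_chat_request(agent: str, message: str, user_id: str) -> urllib.request.Request:
    """Build the POST /api/chat request shown above.

    Pass the result to urllib.request.urlopen(...) to actually send it.
    """
    body = json.dumps({"agent": agent, "message": message, "user_id": user_id})
    return urllib.request.Request(
        f"{GATEWAY}/api/chat",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```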
| Log | Location | Contents |
|---|---|---|
| Main orchestrator | logs/bridge_launch.log | Orchestrator startup, agent launches, errors |
| Workbench | state/workbench/logs/ | Workbench server logs |
| Onboarding | onboarding_crash.log | Onboarding errors |
Enable verbose logging:

```
export BRIDGE_DEBUG=1
./bridge-u.sh
```

"bridge-u-f is already running"

- Another instance is active
- Kill it: ./kill-sessions.sh (Linux) or kill_bridge_u_f_sessions.bat (Windows)
"No CLI backends detected"
- Install Gemini/Claude/Codex CLI
- Or provide OpenRouter API key during onboarding
Telegram bot not responding

- Check telegram_bot_token in secrets.json
- Verify bot token with @BotFather
- Check authorized_id matches your Telegram user ID
WhatsApp QR code not showing / process exits immediately

- Do NOT run link_whatsapp.py directly — it opens an interactive session that hangs or exits without showing the QR
- Use bash scripts/run_whatsapp_link.sh instead, then open /tmp/wa_link_qr.png
- Or ask your HASHI agent to set up WhatsApp — it will send the QR image to you via Telegram
tui.py provides a terminal-first interface built with Textual:

```
python tui.py
```

Panels:
- Log panel (upper ~80%) — real-time stdout/stderr from the bridge process, auto-scroll
- Chat input bar (lower ~20%) — send messages to any active agent via HTTP API Gateway
- Status bar — current agent, backend, bridge uptime, gateway reachability
- Agent selector — hotkey to switch which agent receives your chat input
main.py remains unchanged. Graceful degradation: if API Gateway is unavailable, chat is disabled and logs still stream.
HASHI v2.0 is a working prototype built entirely through AI-assisted development ("Vibe-Coding"). While functional and field-tested by the author, it is not production-ready.
Known Limitations:
- Bugs - Expect edge cases and unexpected behavior
- Error Handling - Some error messages may be cryptic
- Performance - Not optimized for high-volume usage
- Security - Local-only deployment recommended; API Gateway has no auth
- Platform Support - Tested on Windows and Linux only; macOS untested
Use with Caution:
- Keep backups of agents.json, secrets.json, and memory/ files
- Do not expose API Gateway to public internet without proper authentication
- Test thoroughly before relying on scheduled jobs for critical tasks
- Review agent outputs for sensitive information before sharing
Reporting Issues: If you encounter bugs or unexpected behavior, please report them on the GitHub Issues page with:
- Your OS and Python version
- Backend(s) you're using (Gemini/Claude/Codex/OpenRouter)
- Relevant log excerpts from logs/bridge_launch.log
- Steps to reproduce
Release Date: March 23, 2026
This marks the v2.0.0 release of HASHI — a major milestone delivering the complete v2 roadmap.
What's Included (v2.0.0):
- ✅ Multi-language onboarding (9 languages)
- ✅ Support for 4 backends (Gemini CLI, Claude CLI, Codex CLI, OpenRouter)
- ✅ Telegram + WhatsApp + Workbench transports
- ✅ Skills system (action, prompt, toggle)
- ✅ Job scheduler (heartbeats + cron)
- ✅ Memory system (vector-based retrieval)
- ✅ Handoff context recovery
- ✅ Multi-agent workspace management
- ✅ Flex/fixed backend switching — /backend switches CLI ↔ OpenRouter mid-conversation
- ✅ TUI interface — tui.py split-panel terminal UI (log stream + chat input)
- ✅ Tool execution layer — 11 built-in tools for OpenRouter agents (bash, file ops, web, APIs)
- ✅ /dream skill — nightly AI memory consolidation with undo snapshot
- ✅ Process-tree stop — /stop kills entire subprocess tree, no zombie processes
- ✅ /retry persistence — resends last prompt or reruns last response
Coming in Future Versions:
- Enhanced security (API Gateway authentication)
- Mobile app (iOS/Android)
- Cloud deployment options
- Expanded skill library
- Performance optimizations
- Voice-first interfaces
HASHI is released under the MIT License.
You are free to use, modify, and distribute this software. See LICENSE file for full terms.
- GitHub Issues: Report bugs and request features
- Discussions: Ask questions and share tips
- Author: HASHI Team
Built with Vision. Written by AI. Directed by Human. HASHI - The Bridge to the Future of AI Collaboration.