
HASHI

Status (v2.0.0): All v2 roadmap delivered. Tool execution layer, browser automation, Pack & Go USB deployment, TUI, vector memory, and more. Changelog: see CHANGELOG.md · Roadmap: see docs/ROADMAP.md.

About

HASHI is a privacy-first, compliant alternative to OpenClaw designed for a more secure agentic experience. It prioritizes your security by never requiring or storing your Claude, Codex, or Gemini OAuth authentication tokens, ensuring your setup remains fully compliant with current Terms of Service.

Beyond safety, HASHI introduces practical features built for real-world workflows:

  • Context Recovery: Use the /handoff command to instantly restore project context when work is lost during conversation compression.
  • Multi-Agent Connectivity: Connect and switch between multiple specialized agents through a single WhatsApp account.

HASHI is built to evolve. We are committed to adding the tools and functions the community needs to make AI collaboration safer and more productive.

Project History & Name Origin

The Name: HASHI (ハシ / 橋)

HASHI means "bridge" in Japanese (橋).

The kanji 橋 combines:

  • 木 (tree/wood) - the natural foundation
  • 喬 (tall) - reaching upward, connecting heights

Project Philosophy:

「橋」は「知」を繋ぎ、「知」は未来を拓く。 The Bridge connects Intellect; Intellect opens the future.

Just as bridges connect distant shores, HASHI connects:

  • Human creativity ↔ AI capabilities
  • Multiple AI systems ↔ Unified interface
  • Present workflows ↔ Future possibilities

Authorship & Credits

HASHI was conceived and designed from scratch by Barry Li (https://barryli.phd), a PhD candidate at the University of Newcastle, Australia.

Coming from a non-technical background with no prior IT experience, Barry built this project through "Vibe-Coding" — every line of code was generated by AI (Claude, Gemini, and Codex) and cross-reviewed by AI. Barry's role was that of System Architect and Director, providing the vision, operational judgment, and iterative direction. This marks Barry's first publishable AI project.

This project would not exist without OpenClaw by Peter Steinberg and the OpenClaw contributors. OpenClaw provided both a cutting-edge AI agent framework and the inspirational ideas that shaped this system. Deep thanks to Peter and all OpenClaw contributors.

Development Codename: bridge-u-f

Throughout the codebase, you'll see references to bridge-u-f - this was the internal development codename used during the project's evolution from OpenClaw.

Why "bridge"? The core metaphor: HASHI connects human intent with AI intelligence, serving as a bridge between natural language requests and computational power.

Why "u-f"?

  • u = universal (multi-backend, multi-agent)
  • f = flexible (adaptive, modular, extensible)

Quick Technical Overview

HASHI is a universal multi-agent orchestration platform that runs entirely locally. It routes user requests to AI backends (Claude CLI, Codex CLI, Gemini CLI, or OpenRouter API) through a flexible adapter system, eliminating the need to store sensitive OAuth tokens.

Core Components:

  • Onboarding - Multi-language guided setup to create your first agent
  • Workbench - Local web UI (React + Vite) for multi-agent conversations
  • Orchestrator - Central runtime managing agents, memory, skills, and scheduling
  • Transports - Connect via Telegram, WhatsApp, or Workbench
  • Skills - Modular capabilities (prompts, toggles, actions) that extend agents
  • Jobs - Automated scheduling (heartbeats + cron) for periodic agent tasks

What makes HASHI different:

  1. No Token Storage - Uses CLI backends (gemini, claude, codex) with local authentication, not stored tokens
  2. Multi-Agent, Single Interface - Chat with multiple specialized agents through one WhatsApp or Telegram account
  3. Context Recovery - /handoff command instantly restores project context after compression
  4. Tool Execution Layer - OpenRouter agents can take real local actions: run bash commands, read/write files, search the web, call external APIs, and more
  5. Flex/Fixed Mode Switching - Agents can switch between CLI backends and OpenRouter mid-conversation via /backend
  6. TUI Interface - tui.py provides a split-panel terminal UI for log monitoring and agent chat without a browser
  7. Pack & Go - Build a self-contained USB for Windows or macOS; recipients just plug in and double-click, no setup required
  8. Vibe-Coded - Every line written by AI, reviewed by AI, directed by human vision

Project Status

  • v2.0.0 — All v2 roadmap outcomes delivered. Tool execution layer (11 tools), browser automation (Playwright), Pack & Go USB deployment (Windows + macOS), TUI, vector memory, /dream skill, /memory command. See CHANGELOG.md.

Installation

See INSTALL.md for detailed installation instructions.

🚀 Pack & Go — USB Zero-Install (Recommended for sharing)

Run HASHI on any Windows or macOS machine straight from a USB drive — no Python installation, no pip install, nothing to set up on the target machine.

Windows:

# On your machine (with internet):
windows\prepare_usb.bat        # builds D:\HASHI9 with embedded Python + all deps

# On any Windows PC:
HASHI9\windows\start_tui.bat  # double-click to launch

macOS:

# On your Mac (with internet):
bash mac/prepare_usb.sh        # builds USB with portable Python + all deps

# On any Mac:
# Double-click HASHI9/mac/start_tui.command in Finder

Note (macOS): First run may trigger Gatekeeper. Right-click the .command file → Open → Open anyway.


Quick Start (Developer / Standard Install)

# Clone the repository
git clone https://github.com/Bazza1982/HASHI.git
cd HASHI

# Install Python dependencies
pip install -r requirements.txt

# Run onboarding (creates your first agent)
python onboarding/onboarding_main.py

# Start HASHI
./bin/bridge-u.sh         # macOS / Linux
# or
bin\bridge-u.bat          # Windows
# or
python main.py            # Any platform

Prerequisites

  • Python 3.10+ (not required for Pack & Go USB deployments)
  • At least one AI backend:
    • [Gemini CLI] (gemini)
    • [Claude Code] (claude)
    • [Codex CLI] (codex)
    • Or an OpenRouter API key
  • Optional: Node.js 18+ (for Workbench UI)

Comprehensive Technical Details

Architecture

HASHI uses a Universal Orchestrator pattern where a single Python process manages multiple concurrent agent runtimes:

┌─────────────────────────────────────────────────────────────┐
│                   Universal Orchestrator                     │
│                                                              │
│  ┌────────────────┐  ┌────────────────┐  ┌────────────────┐│
│  │ Agent Runtime  │  │ Agent Runtime  │  │ Agent Runtime  ││
│  │   (Hashiko)    │  │   (Assistant)  │  │   (Coder)      ││
│  └────────────────┘  └────────────────┘  └────────────────┘│
│          ▲                  ▲                  ▲            │
│          └──────────────────┴──────────────────┘            │
│                          │                                  │
│  ┌───────────────────────────────────────────────────────┐ │
│  │        Flexible Backend Manager                       │ │
│  │  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐│ │
│  │  │ Gemini   │ │ Claude   │ │ Codex    │ │OpenRouter││ │
│  │  │ Adapter  │ │ Adapter  │ │ Adapter  │ │ Adapter  ││ │
│  │  └──────────┘ └──────────┘ └──────────┘ └──────────┘│ │
│  └───────────────────────────────────────────────────────┘ │
│                          ▲                                  │
│  ┌───────────────────────────────────────────────────────┐ │
│  │              Transport Layer                          │ │
│  │    [Telegram] [WhatsApp] [Workbench API]             │ │
│  └───────────────────────────────────────────────────────┘ │
│                                                              │
│  ┌────────────┐  ┌────────────┐  ┌────────────────────┐   │
│  │   Skill    │  │  Scheduler │  │   Memory System    │   │
│  │  Manager   │  │ (Jobs/Cron)│  │ (Vector + Recall)  │   │
│  └────────────┘  └────────────┘  └────────────────────┘   │
└─────────────────────────────────────────────────────────────┘

Key Design Principles:

  • Backend Agnostic - Agents work with any supported backend; you can switch mid-conversation
  • Shared Sessions - Telegram and Workbench share the same agent queues and memory
  • Explicit over Automatic - Skills, jobs, and features are user-activated, never magic
  • Single Instance - File-based locking prevents multiple HASHI processes from conflicting

File Structure

hashi/
├── main.py                    # Orchestrator entry point
├── agents.json                # Agent definitions (name, backend, system prompt)
├── secrets.json               # API keys (OpenRouter, etc.)
├── tasks.json                 # Heartbeat + cron job definitions
├── onboarding/                # Multi-language guided setup
│   ├── onboarding_main.py
│   └── languages/             # 9 languages (en, ja, zh-Hans, zh-Hant, ko, de, fr, ru, ar)
├── orchestrator/              # Core orchestration logic
│   ├── agent_runtime.py       # Individual agent runtime (fixed backend)
│   ├── flexible_agent_runtime.py  # Flex agent (switchable backend)
│   ├── scheduler.py           # Heartbeat + cron job runner
│   ├── skill_manager.py       # Skills system
│   ├── bridge_memory.py       # Context assembly + memory retrieval
│   ├── memory_index.py        # Vector similarity search
│   ├── workbench_api.py       # Workbench REST API server
│   └── api_gateway.py         # External API gateway (optional)
├── adapters/                  # Backend adapters
│   ├── base.py                # Abstract base adapter
│   ├── gemini_cli.py
│   ├── claude_cli.py
│   ├── codex_cli.py
│   └── openrouter_api.py
├── transports/                # Communication channels
│   ├── whatsapp.py            # WhatsApp transport (whatsapp-web.js)
│   └── chat_router.py         # Message routing logic
├── skills/                    # Skill library
│   ├── README.md
│   └── [skill_name]/
│       ├── skill.md           # Skill definition
│       └── run.py             # Action script (optional)
├── workbench/                 # Local web UI
│   ├── server/                # Node.js API server
│   └── src/                   # React frontend
├── memory/                    # Agent memory files
├── state/                     # Runtime state
├── logs/                      # Log files
└── workspaces/                # Agent working directories

Onboarding System

The onboarding program (onboarding/onboarding_main.py) provides a guided, multi-language setup experience:

Features:

  • 9 Languages - English, Japanese, Simplified Chinese, Traditional Chinese, Korean, German, French, Russian, Arabic
  • Environment Detection - Automatically detects installed CLI backends (Gemini, Claude, Codex)
  • Fallback to OpenRouter - If no CLI is detected, prompts for OpenRouter API key
  • Workbench Auto-Launch - Optionally opens Workbench UI after setup

Onboarding Flow:

  1. Language selection
  2. Environment audit (detect Gemini/Claude/Codex CLI)
  3. If no CLI found → prompt for OpenRouter API key
  4. Display AI Ethics & Human Well-being Statement
  5. Create first agent in agents.json
  6. Create secrets.json with API keys (if needed)
  7. Launch Workbench

Files Created:

  • agents.json - First agent definition
  • secrets.json - API keys (OpenRouter, Telegram, etc.)
  • .hashi_onboarding_complete - Flag file to prevent re-onboarding
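The re-onboarding guard that the flag file provides amounts to a simple existence check. A minimal sketch, assuming the flag lives in the project root (the real onboarding_main.py may implement this differently):

```python
from pathlib import Path

def onboarding_needed(root: str = ".") -> bool:
    """Onboarding runs only when the completion flag file is absent."""
    return not (Path(root) / ".hashi_onboarding_complete").exists()
```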

Workbench

The Workbench is a local web interface for multi-agent conversations:

Architecture:

  • Frontend - React + Vite, runs on http://localhost:5173
  • Backend - Node.js Express server, runs on http://localhost:3003
  • Bridge API - Connects to orchestrator at http://127.0.0.1:18800

Features:

  • Multi-agent chat interface with agent switching
  • Real-time transcript polling
  • File and media upload support
  • System status display
  • Shared sessions with Telegram/WhatsApp

Start/Stop:

./workbench.bat              # Start workbench (Windows)
./workbench-ctl.sh start     # Start workbench (Linux)
./stop_workbench.bat         # Stop workbench

How It Works:

  1. Workbench frontend polls orchestrator /api/agents for agent list
  2. User sends message through Workbench → POST /api/agents/{name}/send
  3. Orchestrator queues message in agent runtime (same queue as Telegram)
  4. Backend processes message, streams response
  5. Workbench polls /api/agents/{name}/transcript for updates
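The send-and-poll flow above can be sketched as a minimal HTTP client. The endpoint paths come from the list above; the "message" payload field name is an assumption for illustration, not the documented Workbench wire format:

```python
import json
import urllib.request

ORCHESTRATOR = "http://127.0.0.1:18800"  # Bridge API address from the docs above

def send_request(agent: str, text: str) -> urllib.request.Request:
    """Build the POST for step 2 of the flow.
    The 'message' field name is an assumption for illustration."""
    payload = json.dumps({"message": text}).encode()
    return urllib.request.Request(
        f"{ORCHESTRATOR}/api/agents/{agent}/send",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def transcript_url(agent: str) -> str:
    """URL the frontend polls for updates (step 5)."""
    return f"{ORCHESTRATOR}/api/agents/{agent}/transcript"
```

To actually send, pass the built request to urllib.request.urlopen while HASHI is running.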

Connections (Transports)

HASHI supports multiple communication channels through a transport layer:

Telegram

  • Default transport, enabled by default
  • Requires telegram_bot_token in secrets.json
  • Commands: /new, /stop, /reboot, /handoff, /skill, etc.
  • Supports inline keyboards, file uploads, voice messages

Setup:

  1. Create bot via @BotFather
  2. Add telegram_bot_token to secrets.json
  3. Add your Telegram user ID to agent's authorized_id in agents.json

WhatsApp

  • Uses whatsapp-web.js library
  • Requires QR code scan on first launch
  • Multi-Agent Support - Route messages to different agents using /agent <name> prefix

Setup:

⚠️ Important: Do NOT run link_whatsapp.py directly in a terminal and wait for it — it opens an interactive pairing session that never exits on its own when run that way. Use the relay method below.

Recommended method (agent-relay via Telegram):

  1. Ask your HASHI agent (e.g. Hashiko) to set up WhatsApp. She will handle it.
  2. The agent runs scripts/run_whatsapp_link.sh in the background, which starts link_whatsapp.py with --qr-image-file /tmp/wa_link_qr.png --completion-file /tmp/wa_link_result.json.
  3. When the QR PNG appears, the agent automatically sends it to you as a Telegram image.
  4. Scan the QR code from Telegram using your WhatsApp mobile app.
  5. The agent polls /tmp/wa_link_result.json and confirms when linking is complete.
  6. Session saved in wa_session/ — subsequent starts do not require a new QR scan.

Manual method (if running without an agent):

  1. Open a terminal and run: bash scripts/run_whatsapp_link.sh

  2. Wait a few seconds, then open /tmp/wa_link_qr.png in an image viewer.

  3. Scan the QR code with WhatsApp mobile app.

  4. Check /tmp/wa_link_result.json for {"status": "linked"} to confirm success.

  5. Configure routing in agent's whatsapp_routing settings.
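Step 4's completion check can be scripted rather than done by hand. A minimal sketch, assuming the result file eventually contains the {"status": "linked"} JSON shown above:

```python
import json
import time
from pathlib import Path

def wait_for_link(result_file: str, timeout_s: float = 120.0, poll_s: float = 1.0) -> bool:
    """Poll the completion file until {"status": "linked"} appears or timeout elapses."""
    deadline = time.monotonic() + timeout_s
    path = Path(result_file)
    while time.monotonic() < deadline:
        if path.exists():
            try:
                data = json.loads(path.read_text())
            except json.JSONDecodeError:
                data = {}  # file may be mid-write; retry on next poll
            if data.get("status") == "linked":
                return True
        time.sleep(poll_s)
    return False
```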

WhatsApp Commands:

  • /agent hashiko - Switch to "hashiko" agent
  • /agents - List available agents
  • Normal messages → routed to current active agent

Workbench

  • Local web UI (see Workbench section above)
  • No authentication required (localhost only)
  • Shared sessions with Telegram/WhatsApp

Commands

HASHI agents respond to both natural language and structured commands:

Universal Commands (All Agents)

Command Description
/new Start a fresh session (clears continuity)
/stop Cancel current processing
/reboot [min|max|#] Hot restart agents — shows button menu when called without args
/status [full] Show agent status, backend info
/handoff Restore continuity from recent transcript
/skill Browse and run skills (inline keyboard)
/help Show available commands

Session & Mode Commands

Command Description
/mode [fixed|flex] Switch between fixed CLI session and flex multi-backend mode (button menu)
/backend [engine] Switch backend — shows inline button picker (flex mode only)
/model View/change active model (inline keyboard)
/effort View/change effort level — Claude/Codex only (inline keyboard)
/fyi [prompt] Refresh bridge environment awareness
/retry Resend last response or re-run last prompt (button menu)
/park [chat|delete <n>] Save or list parked topics
/load <n> Restore a parked topic
/sys <n> [on|off|save|output] Manage system prompt slots; /sys output <n> returns raw text
/clear Clear media directory and reset session state

Toggles & Settings

Command Description
/verbose [on|off] Toggle detailed long-task status (button menu)
/think [on|off] Toggle thinking trace display (button menu)
/active [on|off|minutes] Toggle proactive heartbeat (button menu)
/whisper [small|medium|large] Set local voice transcription model (button menu)
/voice [on|off|menu|use <alias>] Control bridge voice replies (inline keyboard)
/credit Show API credit/usage (OpenRouter only)

Lifecycle Commands

Command Description
/start [all|<name>] Start a stopped agent — button menu or all to start all
/terminate Shut down this agent gracefully

WhatsApp Commands

Command Description
/wa_on Start WhatsApp transport
/wa_off Stop WhatsApp transport
/wa_send <+number> <msg> Send a WhatsApp message

Job Commands

Command Description
/jobs Show cron and heartbeat jobs with action buttons
/skill cron Full cron job management
/skill heartbeat Full heartbeat job management

Skills System

Skills are modular capabilities that extend agent functionality. Every skill is defined by a skill.md file with frontmatter + instructions.

Skill Types

Type Behavior Example
Action One-shot execution, runs a script restart_pc, system_status
Prompt Routes user input to a backend/tool codex, gemini, claude
Toggle Injects instructions while active TTS, carbon-accounting, academic-writing

Skill Structure

Each skill lives in skills/<skill_id>/:

skills/
  carbon-accounting/
    skill.md              # Frontmatter + instructions
    standards/
      ghg-protocol-summary.md
      iso14064-notes.md

skill.md Example:

---
id: carbon-accounting
name: Carbon Accounting Expert
type: toggle
description: Activate deep carbon accounting expertise (GHG Protocol, ISO 14064)
---

You now have deep expertise in carbon accounting and GHG reporting.

## Standards
- GHG Protocol Corporate Standard
- ISO 14064-1:2018
- TCFD for climate-related financial disclosure

## Reference files in this skill folder
- `standards/ghg-protocol-summary.md`
- `standards/iso14064-notes.md`
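Since the frontmatter above is flat key: value pairs between --- delimiters, it can be parsed without a YAML dependency. A sketch of that idea (the real skill loader may work differently):

```python
def parse_skill_md(text: str) -> tuple[dict, str]:
    """Split a skill.md into (frontmatter dict, instruction body).
    Assumes flat 'key: value' frontmatter delimited by '---' lines."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text  # no frontmatter block
    meta = {}
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            body = "\n".join(lines[i + 1:]).lstrip("\n")
            return meta, body
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, ""  # unterminated frontmatter

SAMPLE = """---
id: carbon-accounting
type: toggle
---

You now have deep expertise in carbon accounting.
"""
```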

Using Skills

/skill                          → Show skill grid (Telegram inline keyboard)
/skill help                     → List all skills
/skill <name>                   → Show skill info
/skill <name> <prompt>          → Run prompt skill with input
/skill <name> on                → Enable toggle skill
/skill <name> off               → Disable toggle skill

Toggle Skills in Action: When a toggle skill is on, its skill.md content is injected into the prompt under the --- ACTIVE SKILLS --- section. This persists across messages until the skill is explicitly turned off.

Action Skills: Action skills execute a script (run.py or run.sh) and return the output.

Prompt Skills: Prompt skills route user input to a specific backend or workflow (e.g., codex routes to Codex CLI).


Job System (Scheduler)

HASHI includes a built-in task scheduler for automated agent actions:

Heartbeats

Heartbeats are periodic checks that run at fixed intervals:

{
  "id": "email-check",
  "enabled": true,
  "agent": "hashiko",
  "interval_seconds": 1800,
  "prompt": "Check my email for urgent messages and summarize",
  "action": "enqueue_prompt"
}
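Deciding whether a heartbeat like the one above is due is an interval comparison against the last run. A sketch of the idea (the real scheduler may track state differently):

```python
def heartbeat_due(job: dict, last_run_ts: float, now_ts: float) -> bool:
    """True when an enabled heartbeat's interval has elapsed since its last run.
    Timestamps are seconds (e.g. from time.monotonic())."""
    if not job.get("enabled"):
        return False
    return now_ts - last_run_ts >= job["interval_seconds"]
```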

Common Use Cases:

  • Email monitoring
  • Calendar reminders
  • System health checks
  • Market/news updates

Cron Jobs

Cron jobs run at specific times (HH:MM format):

{
  "id": "morning-briefing",
  "enabled": true,
  "agent": "hashiko",
  "time": "08:00",
  "prompt": "Provide morning briefing: weather, calendar, top news",
  "action": "enqueue_prompt"
}
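Computing the next fire time for an HH:MM entry like "08:00" is simple date arithmetic. A sketch (the actual scheduler implementation may differ):

```python
from datetime import datetime, timedelta

def next_cron_run(time_hhmm: str, now: datetime) -> datetime:
    """Next datetime at which an 'HH:MM' cron job fires, strictly after `now`."""
    hour, minute = map(int, time_hhmm.split(":"))
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)  # already passed today, fire tomorrow
    return candidate
```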

Common Use Cases:

  • Daily reports
  • Scheduled backups
  • Time-sensitive reminders

Skill-Based Jobs

Jobs can invoke skills instead of prompts:

{
  "id": "daily-backup",
  "enabled": true,
  "agent": "coder",
  "time": "03:00",
  "action": "skill:backup_workspace",
  "args": ""
}

Managing Jobs

Via Telegram:

/jobs                       → Show all jobs with action buttons
/skill cron                 → Cron job management (list, enable/disable, run now)
/skill heartbeat            → Heartbeat job management (list, enable/disable, run now)

Via tasks.json:

{
  "heartbeats": [
    { "id": "...", "enabled": true, "agent": "...", ... }
  ],
  "crons": [
    { "id": "...", "enabled": true, "agent": "...", ... }
  ]
}

Backend Adapters

HASHI's adapter system provides a unified interface to multiple AI backends:

Supported Backends

Backend Engine ID Requirements
Gemini CLI gemini-cli gemini CLI installed and authenticated
Claude CLI claude-cli claude CLI installed and authenticated
Codex CLI codex-cli codex CLI installed and authenticated
OpenRouter API openrouter-api API key in secrets.json

Adapter Architecture

All adapters inherit from BaseBackendAdapter (adapters/base.py):

from abc import ABC, abstractmethod

class BaseBackendAdapter(ABC):
    @abstractmethod
    async def send_request(self, messages, tools, thinking, stream_callback):
        """Send a request to the backend and stream the response."""

    @abstractmethod
    async def cancel_request(self):
        """Cancel the in-flight request."""

Key Features:

  • Streaming support (token-by-token)
  • Tool use (file operations, web search, etc.)
  • Thinking mode (extended reasoning)
  • Graceful cancellation

CLI Backends (Gemini, Claude, Codex)

CLI backends spawn a subprocess and communicate via stdin/stdout:

  • No OAuth tokens stored
  • Uses local CLI authentication (Google account, Anthropic API, OpenAI API)
  • Full tool support
  • Conversation memory managed by CLI

OpenRouter Backend

OpenRouter adapter uses HTTP API:

  • Requires openrouter_key (or <agent_name>_openrouter_key) in secrets.json
  • Supports multiple models via model parameter
  • Stateless (HASHI manages conversation history)
  • Tool execution layer — enable in agents.json with a tools key:
{
  "engine": "openrouter-api",
  "model": "anthropic/claude-sonnet-4.6",
  "tools": {
    "allowed": ["bash", "file_read", "file_write", "file_list", "apply_patch",
                "web_search", "web_fetch", "http_request",
                "process_list", "process_kill", "telegram_send"],
    "max_loops": 15
  }
}

Available tools:

Tool Description
bash Run shell commands (sandboxed to workspace, timeout + blocklist controls)
file_read Read files with offset/limit pagination
file_write Write/create files (size-capped, parent dirs auto-created)
file_list List directories with optional glob filter and recursive mode
apply_patch Apply unified diff patches to files (dry-run validated before apply)
process_list List running processes filtered by name (requires psutil)
process_kill Send SIGTERM or SIGKILL to a process by PID
telegram_send Send Telegram messages by chat_id or HASHI agent_id
http_request Arbitrary HTTP requests (GET/POST/PUT/DELETE/PATCH) for external APIs
web_search Brave Search API (requires brave_api_key in secrets.json)
web_fetch Fetch any URL and return content as Markdown

No tools key in config = fully backward compatible, tools disabled.
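That backward-compatibility rule can be expressed as a small config check. A sketch, not the actual loader; the fallback of 15 for max_loops is taken from the example above, not a documented default:

```python
def tool_config(agent_cfg: dict) -> tuple[list, int]:
    """Return (allowed tools, max loops) for an agent config dict.
    An absent 'tools' key means the tool layer is disabled (empty allowlist)."""
    tools = agent_cfg.get("tools")
    if not tools:
        return [], 0
    # max_loops default of 15 mirrors the example config; illustrative only
    return list(tools.get("allowed", [])), int(tools.get("max_loops", 15))
```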


Memory System

HASHI includes a vector-based memory system for long-term context retrieval:

Memory Types

Type Storage Lifetime
Short-term In-process (agent runtime) Current session
Transcript memory/<agent>_transcript.json Permanent, daily rollover
Long-term memory/<agent>_memory.json User-controlled
Vector Index memory/<agent>_memory_index.json Auto-synced with long-term

How It Works

  1. Bridge stores memory automatically — relevant turns are embedded and saved to bridge_memory.sqlite in the agent's workspace.

  2. Memory is vectorized:

    • Text embedded using BGE-M3 (local ONNX) when available, falling back to hash-based similarity.
    • Vector + text stored in bridge_memory.sqlite (memory_vec / turns_vec tables).
  3. Context assembly retrieves relevant memories:

    • Current user message is vectorized at request time.
    • Top-K similar memories retrieved via cosine similarity (sqlite-vec).
    • Injected into prompt under --- RELEVANT LONG-TERM MEMORY ---.
  4. Memory skill (/skill recall) — toggle bridge auto-recall: if ON, recent continuity is restored once after an unexpected restart (not after /new).

Memory is bridge-managed. There are no /remember, /recall, or /forget slash commands — memory is automatic.
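The cosine-similarity retrieval in step 3 can be sketched in pure Python. The real system uses sqlite-vec with BGE-M3 embeddings; the vectors below are toy data for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query, memories, k=3):
    """Return the k memory texts most similar to the query vector.
    `memories` is a list of (vector, text) pairs."""
    scored = sorted(memories, key=lambda m: cosine(query, m[0]), reverse=True)
    return [text for _, text in scored[:k]]
```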

Memory in Prompts

Every agent request includes:

--- SYSTEM IDENTITY ---
{agent.md contents}

--- ACTIVE SKILLS ---
{active toggle skills}

--- RELEVANT LONG-TERM MEMORY ---
{top 3 retrieved memories}

--- RECENT CONTEXT ---
{last 10 conversation turns}

--- NEW REQUEST ---
{user message}
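Assembling those sections is straightforward string composition. A sketch of the layout shown above (function and parameter names are illustrative, not HASHI's actual internals):

```python
def assemble_prompt(identity, skills, memories, recent, request):
    """Compose the sectioned prompt shown above. Empty sections are kept
    for clarity here; the real assembler may omit them."""
    sections = [
        ("SYSTEM IDENTITY", identity),
        ("ACTIVE SKILLS", "\n".join(skills)),
        ("RELEVANT LONG-TERM MEMORY", "\n".join(memories)),
        ("RECENT CONTEXT", "\n".join(recent)),
        ("NEW REQUEST", request),
    ]
    return "\n\n".join(f"--- {name} ---\n{body}" for name, body in sections)
```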

Handoff System

The /handoff command generates a context restoration prompt for recovering work after conversation compression or session loss:

Use Cases:

  • Agent conversation hit token limit and compressed
  • Switching to a new agent mid-project
  • Resuming work after system restart

How It Works:

User: /handoff
Agent: [Generates comprehensive project summary]

--- HANDOFF CONTEXT ---
Project: Building a web scraper for research papers
Status: Parser module complete, need to add citation extraction
Files: src/parser.py (500 lines), tests/ (3 files)
Next: Implement citation regex patterns
Dependencies: beautifulsoup4, requests
---

User copies this output and sends to a new agent:

User: [Paste handoff context]
       Continue building the citation extractor...

New agent picks up exactly where the previous left off.


Configuration Files

agents.json

Defines your agents:

{
  "global": {
    "authorized_id": 123456789,
    "whatsapp": {
      "enabled": false,
      "allowed_numbers": [],
      "default_agent": "hashiko"
    }
  },
  "agents": [
    {
      "name": "hashiko",
      "display_name": "Hashiko",
      "engine": "gemini-cli",
      "model": "gemini-3-flash",
      "system_md": "workspaces/hashiko/agent.md",
      "workspace_dir": "workspaces/hashiko",
      "is_active": true
    }
  ]
}

secrets.json

Stores API keys and tokens:

{
  "hashiko": "your_telegram_bot_token",
  "openrouter_key": "sk-or-v1-...",
  "authorized_telegram_id": 123456789
}

tasks.json

Defines scheduled jobs:

{
  "heartbeats": [
    {
      "id": "check-email",
      "enabled": true,
      "agent": "hashiko",
      "interval_seconds": 1800,
      "prompt": "Check email for urgent messages"
    }
  ],
  "crons": [
    {
      "id": "morning-brief",
      "enabled": true,
      "agent": "hashiko",
      "time": "08:00",
      "prompt": "Morning briefing: weather, calendar, news"
    }
  ]
}

Advanced Features

Multi-Agent WhatsApp Routing

Connect multiple agents to one WhatsApp account:

  1. Configure WhatsApp routing in each agent's config:
{
  "name": "coder",
  "whatsapp_enabled": true,
  "whatsapp_routing": {
    "keywords": ["code", "debug", "fix"],
    "priority": 10
  }
}
  2. Use /agent <name> to manually switch agents
  3. Messages auto-route based on keywords and priority
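Keyword-plus-priority routing might look like the sketch below. The field names come from the config above; the naive substring match is an assumption for illustration:

```python
def route_message(text, agents):
    """Pick the WhatsApp-enabled agent whose routing keywords match `text`,
    preferring higher 'priority'; return None when nothing matches.
    Uses simple substring matching (illustrative only)."""
    words = text.lower()
    candidates = [
        a for a in agents
        if a.get("whatsapp_enabled")
        and any(k in words for k in a.get("whatsapp_routing", {}).get("keywords", []))
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda a: a["whatsapp_routing"].get("priority", 0))["name"]
```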

Flexible Backend Switching

Agents can switch backends mid-conversation:

User: Switch to Codex for the next task
Agent: [Switches to codex-cli backend]

Configured in agent as:

{
  "name": "flex-agent",
  "engine": "flexible",
  "default_backend": "gemini-cli",
  "fallback_backends": ["claude-cli", "codex-cli"]
}

API Gateway (Optional)

Enable external API access:

./bin/bridge-u.sh --api-gateway

Exposes REST API on http://localhost:18801:

POST /api/chat
{
  "agent": "hashiko",
  "message": "Hello",
  "user_id": "external_user_123"
}
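A client call against that endpoint, sketched with the standard library. The payload fields are exactly those shown above; since the response shape is not documented here, the sketch only builds the request:

```python
import json
import urllib.request

def chat_request(agent, message, user_id, base="http://localhost:18801"):
    """Build the POST /api/chat request shown above."""
    payload = json.dumps({"agent": agent, "message": message, "user_id": user_id})
    return urllib.request.Request(
        f"{base}/api/chat",
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

To send it while the gateway is running: urllib.request.urlopen(chat_request("hashiko", "Hello", "external_user_123")).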

⚠️ Security Warning: API Gateway has no authentication. Use firewall rules or reverse proxy for production.


Debugging and Logs

Log Files

Log Location Contents
Main orchestrator logs/bridge_launch.log Orchestrator startup, agent launches, errors
Workbench state/workbench/logs/ Workbench server logs
Onboarding onboarding_crash.log Onboarding errors

Debug Mode

Enable verbose logging:

export BRIDGE_DEBUG=1
./bin/bridge-u.sh

Common Issues

"bridge-u-f is already running"

  • Another instance is active
  • Kill it: ./kill-sessions.sh (Linux) or kill_bridge_u_f_sessions.bat (Windows)

"No CLI backends detected"

  • Install Gemini/Claude/Codex CLI
  • Or provide OpenRouter API key during onboarding

Telegram bot not responding

  • Check telegram_bot_token in secrets.json
  • Verify bot token with @BotFather
  • Check authorized_id matches your Telegram user ID

WhatsApp QR code not showing / process exits immediately

  • Do NOT run link_whatsapp.py directly — it opens an interactive session that hangs or exits without showing the QR
  • Use bash scripts/run_whatsapp_link.sh instead, then open /tmp/wa_link_qr.png
  • Or ask your HASHI agent to set up WhatsApp — it will send the QR image to you via Telegram

TUI Interface

tui.py provides a terminal-first interface built with Textual:

python tui.py

Panels:

  • Log panel (upper ~80%) — real-time stdout/stderr from the bridge process, auto-scroll
  • Chat input bar (lower ~20%) — send messages to any active agent via HTTP API Gateway
  • Status bar — current agent, backend, bridge uptime, gateway reachability
  • Agent selector — hotkey to switch which agent receives your chat input

main.py remains unchanged. Graceful degradation: if API Gateway is unavailable, chat is disabled and logs still stream.


⚠️ Important Warnings

This is Version 2.0.0

HASHI v2.0 is a working prototype built entirely through AI-assisted development ("Vibe-Coding"). While functional and field-tested by the author, it is not production-ready.

Known Limitations:

  • Bugs - Expect edge cases and unexpected behavior
  • Error Handling - Some error messages may be cryptic
  • Performance - Not optimized for high-volume usage
  • Security - Local-only deployment recommended; API Gateway has no auth
  • Platform Support - Tested on Windows and Linux only; macOS untested

Use with Caution:

  • Keep backups of agents.json, secrets.json, and memory/ files
  • Do not expose API Gateway to public internet without proper authentication
  • Test thoroughly before relying on scheduled jobs for critical tasks
  • Review agent outputs for sensitive information before sharing

Reporting Issues: If you encounter bugs or unexpected behavior, please report them on the GitHub Issues page with:

  • Your OS and Python version
  • Backend(s) you're using (Gemini/Claude/Codex/OpenRouter)
  • Relevant log excerpts from logs/bridge_launch.log
  • Steps to reproduce

Version 2.0.0 Release

Release Date: March 23, 2026

This marks the v2.0.0 release of HASHI — a major milestone delivering the complete v2 roadmap.

What's Included (v2.0.0):

  • ✅ Multi-language onboarding (9 languages)
  • ✅ Support for 4 backends (Gemini CLI, Claude CLI, Codex CLI, OpenRouter)
  • ✅ Telegram + WhatsApp + Workbench transports
  • ✅ Skills system (action, prompt, toggle)
  • ✅ Job scheduler (heartbeats + cron)
  • ✅ Memory system (vector-based retrieval)
  • ✅ Handoff context recovery
  • ✅ Multi-agent workspace management
  • ✅ Flex/fixed backend switching — /backend switches CLI ↔ OpenRouter mid-conversation
  • ✅ TUI interface — tui.py split-panel terminal UI (log stream + chat input)
  • ✅ Tool execution layer — 11 built-in tools for OpenRouter agents (bash, file ops, web, APIs)
  • ✅ /dream skill — nightly AI memory consolidation with undo snapshot
  • ✅ Process-tree stop — /stop kills the entire subprocess tree, no zombie processes
  • ✅ /retry persistence — resends the last prompt or reruns the last response

Coming in Future Versions:

  • Enhanced security (API Gateway authentication)
  • Mobile app (iOS/Android)
  • Cloud deployment options
  • Expanded skill library
  • Performance optimizations
  • Voice-first interfaces

License

HASHI is released under the MIT License.

You are free to use, modify, and distribute this software. See LICENSE file for full terms.


Support & Community


Built with Vision. Written by AI. Directed by Human. HASHI - The Bridge to the Future of AI Collaboration.
