JacobeZhao/SayHi


         ██████
         ██████
         ██████
████████████████████████
████████████████████████
         ██████
         ██████
         ██████
         ██████
         ██████

SayHi

An AI agent that grows through every conversation.


Quick Start · Score System · Commands · Models · Platforms


SayHi is an open-source AI agent for your terminal. It searches the web, reads and writes files in your project, remembers what it learns, and tracks its own capability growth across sessions — getting measurably better the more you use it.

Activity 38  ❯ Search recent AI developments for me and write a summary page

  searching web...  →  reading page...

╭───────────────────────── SayHi ──────────────────────────╮
│ Done! Saved to ai-news.html, covering 12 recent reports. │
╰──────────────────────────────────────────────────────────╯

Activity 38  ❯

Quick Start

pip install sayhi-agent
sayhi onboard        # interactive setup: pick model + API key
sayhi start          # start talking

macOS · Linux · Windows — Python 3.10+


What makes it different

Grows over time. SayHi tracks its own capability across 7 dimensions (tool use, memory, conversation quality, domain expertise, reliability, activity, growth rate). Run /score any time to see a detailed breakdown. Run /path to see how the scores change across sessions.
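
The seven dimensions roll up into the single composite number that /score reports. A minimal sketch of how such a weighted roll-up could work — the dimension keys mirror the list above, but the weights are hypothetical, not SayHi's actual formula:

```python
# Hypothetical weighted roll-up of the seven capability dimensions.
# The dimension names mirror /score; the weights are illustrative only.
WEIGHTS = {
    "tool_use": 0.20,
    "memory": 0.15,
    "conversation_quality": 0.15,
    "domain_expertise": 0.15,
    "reliability": 0.15,
    "activity": 0.10,
    "growth_rate": 0.10,
}

def composite(scores: dict[str, float]) -> float:
    """Weighted mean of per-dimension scores (each 0-100)."""
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 1)

# Equal scores collapse to that score regardless of the weights:
print(composite({k: 50.0 for k in WEIGHTS}))  # 50.0
```

Because the weights sum to 1.0, the composite stays on the same 0-100 scale as the individual dimensions.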

Workspace-aware. Launch sayhi start from any project folder — that directory becomes SayHi's workspace. Files it creates land there. Drop a SAYHI.md in the folder to give it project-specific context, like Claude Code's CLAUDE.md.

my-project/
├── SAYHI.md        ← SayHi reads this on startup
├── src/
└── ...
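
The startup behavior described above can be sketched in a few lines — the function name here is hypothetical, and the real loader inside SayHi may differ:

```python
from pathlib import Path

def load_workspace_context(workspace: Path) -> str:
    """Return project-specific context from SAYHI.md, if present.

    A sketch of the described startup behavior: read SAYHI.md from
    the workspace root, or fall back to no extra context.
    """
    context_file = workspace / "SAYHI.md"
    if context_file.is_file():
        return context_file.read_text(encoding="utf-8")
    return ""
```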

Any model, one config line. Powered by LiteLLM — switch between Claude, GPT-4o, DeepSeek, Qwen, Gemini, or a local Ollama model instantly.

Multi-session. /session work, /session personal — each session has its own conversation context. /compact summarizes history to save tokens.
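
Conceptually, /compact replaces everything except the most recent messages with a single LLM-written summary. A sketch under that assumption, with `summarize` standing in for the actual LLM call:

```python
from typing import Callable

def compact(history: list[str], keep_recent: int,
            summarize: Callable[[list[str]], str]) -> list[str]:
    """Collapse all but the newest messages into one summary entry.

    A conceptual sketch of /compact; SayHi's real implementation
    may keep more structure than plain strings.
    """
    if len(history) <= keep_recent:
        return history  # nothing to compress
    older, recent = history[:-keep_recent], history[-keep_recent:]
    return [f"[summary] {summarize(older)}"] + recent
```

The token savings come from the summary being much shorter than the messages it replaces, while recent turns stay verbatim for continuity.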


Score System

╭──────────────────── Score Card · SayHi ──────────────────────────╮
│  2026-05-04 12:46  ·  Session #13  +0.7 vs last                  │
│                                                                  │
│  Capability                                                      │
│  Tool mastery      ██████████░░  62.7  →                         │
│  Memory buildup    █████░░░░░░░  50.0  →                         │
│  Conversation      █████░░░░░░░  50.0  →  needs more interactions│
│  Domain expertise  █████░░░░░░░  50.0  →                         │
│  Reliability       ████████░░░░  76.7  ↑3                        │
│                                                                  │
│  Growth                                                          │
│  Activity          ██░░░░░░░░░░  21.9  →  (13 sessions / 47 msgs)│
│  Growth rate       █████░░░░░░░  50.0  →                         │
│                                                                  │
│  Overall  53.1 / 100   📖 Advancing Scholar                      │
╰──────────────────────────────────────────────────────────────────╯

8 rank tiers: 🌱 Novice (新生) → 📖 Advancing Scholar (进阶学者) → 🎓 Senior Expert (资深专家) → 🌟 Legendary Sage (传奇智者)
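
The composite score maps onto a rank tier. A minimal lookup sketch: the thresholds are assumptions (the README only shows that 53.1 lands in the second named tier), only four of the eight tier names appear above, and the English labels are translations of the Chinese originals:

```python
import bisect

# Hypothetical score cut-offs for four of the eight rank tiers;
# the real thresholds and remaining tier names are not shown here.
THRESHOLDS = [40.0, 70.0, 90.0]
TIERS = ["🌱 Novice", "📖 Advancing Scholar", "🎓 Senior Expert", "🌟 Legendary Sage"]

def tier(score: float) -> str:
    """Map a composite score (0-100) to a rank tier."""
    return TIERS[bisect.bisect_right(THRESHOLDS, score)]

print(tier(53.1))  # 📖 Advancing Scholar — consistent with the score card
```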


Commands

| Command | Description |
| --- | --- |
| `/score` | Run capability self-assessment |
| `/path` | View growth score timeline |
| `/status` | Profile overview: sessions, tool calls, last score |
| `/session [name]` | List or switch conversation sessions |
| `/compact` | Compress session history via LLM |
| `/model [name\|add]` | View, switch, or add a model |
| `/voice [secs]` | Voice input via microphone (STT) |
| `/clear` | Clear current session |
| `/help` | Show all commands |
| `/quit` | Exit |

Supported Models

Set model in ~/.sayhi/config.yaml, or switch live with /model:

| Provider | Model string | Env var |
| --- | --- | --- |
| Anthropic (Claude) | `anthropic/claude-sonnet-4-6` | `ANTHROPIC_API_KEY` |
| OpenAI | `openai/gpt-4o` | `OPENAI_API_KEY` |
| DeepSeek 🇨🇳 | `deepseek/deepseek-chat` | `DEEPSEEK_API_KEY` |
| Tongyi Qianwen (Qwen) 🇨🇳 | `dashscope/qwen-max` | `DASHSCOPE_API_KEY` |
| Zhipu AI (GLM) 🇨🇳 | `zhipuai/glm-4` | `ZHIPUAI_API_KEY` |
| Kimi (Moonshot) 🇨🇳 | `moonshot/moonshot-v1-8k` | `MOONSHOT_API_KEY` |
| Google Gemini | `gemini/gemini-1.5-pro` | `GEMINI_API_KEY` |
| Groq (free, fast) | `groq/llama-3.1-70b-versatile` | `GROQ_API_KEY` |
| Ollama (local) | `ollama/qwen2.5` | (none needed) |

# ~/.sayhi/config.yaml
llm:
  model: "deepseek/deepseek-chat"
  api_key: "sk-..."

Or via environment variable (takes priority):

ANTHROPIC_API_KEY=sk-ant-...  sayhi start

Platforms

SayHi runs on CLI by default. Enable other platforms in ~/.sayhi/config.yaml:

| Platform | Install | Config key |
| --- | --- | --- |
| CLI | built-in | |
| HTTP API | built-in | `http.enabled: true` |
| Telegram | `pip install "sayhi-agent[telegram]"` | `telegram.bot_token` |
| Discord | `pip install "sayhi-agent[discord]"` | `discord.bot_token` |
| Slack | `pip install "sayhi-agent[slack]"` | `slack.bot_token` |
| Feishu / Lark | `pip install "sayhi-agent[feishu]"` | `feishu.app_id` |
| DingTalk (钉钉) | `pip install "sayhi-agent[dingtalk]"` | `dingtalk.app_key` |
| WeCom (企业微信) | `pip install "sayhi-agent[wecom]"` | `wecom.corp_id` |
| QQ Channels | `pip install "sayhi-agent[qq]"` | `qq.app_id` |

Install Options

pip install sayhi-agent                # CLI only
pip install "sayhi-agent[web]"         # + web search & page fetch
pip install "sayhi-agent[telegram]"    # + Telegram bot
pip install "sayhi-agent[voice]"       # + voice input (STT) & output (TTS)
pip install "sayhi-agent[keyring]"     # + encrypted API key storage
pip install "sayhi-agent[all]"         # everything

Configuration

sayhi onboard creates ~/.sayhi/config.yaml with all defaults. Key fields:

llm:
  model: "anthropic/claude-sonnet-4-6"
  api_key: ""        # leave empty to use env vars
  max_tokens: 4096

http:
  enabled: true
  host: "0.0.0.0"
  port: 8080
  api_key: ""        # optional auth for the REST API
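
With `http.enabled: true`, a client can talk to the agent over REST. A sketch of building such a request — note that the `/chat` path, the JSON payload shape, and the `X-API-Key` header are all assumptions for illustration, not SayHi's documented API contract:

```python
import json
import urllib.request

def build_chat_request(host: str, port: int, message: str,
                       api_key: str = "") -> urllib.request.Request:
    """Build (but don't send) a POST to a hypothetical /chat endpoint.

    Endpoint path, payload shape, and auth header are assumptions —
    check the project's HTTP API docs for the real contract.
    """
    body = json.dumps({"message": message}).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["X-API-Key"] = api_key  # hypothetical auth header
    return urllib.request.Request(
        f"http://{host}:{port}/chat", data=body, headers=headers, method="POST"
    )
```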

Architecture

sayhi/
├── agent/
│   ├── core.py        ← LLM loop, tool dispatch
│   ├── profile.py     ← persistent session state
│   └── scorer.py      ← capability scoring engine
├── tools/
│   └── registry.py    ← bash, read/write file, web search, ...
├── memory/
│   └── provider.py    ← markdown-based persistent memory
├── platform/
│   └── adapters/      ← cli, telegram, discord, slack, ...
└── ui/
    └── banner.py      ← startup screen
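
The core.py loop described above (LLM loop plus tool dispatch) can be sketched conceptually. Every name here is hypothetical: `llm` stands in for the model call, returning either a tool request or a final answer, and `tools` stands in for `registry.py`:

```python
from typing import Callable

def run_agent(llm: Callable, tools: dict[str, Callable], prompt: str,
              max_steps: int = 8) -> str:
    """Conceptual agent loop: call the LLM, dispatch requested tools,
    feed results back, stop on a final answer. Not SayHi's real code.

    `llm` returns ("tool", name, arg) to request a tool call,
    or ("answer", text) to finish.
    """
    transcript = [("user", prompt)]
    for _ in range(max_steps):
        reply = llm(transcript)
        if reply[0] == "tool":
            _, name, arg = reply
            result = tools[name](arg)            # dispatch to the tool registry
            transcript.append(("tool", name, result))
        else:
            return reply[1]                      # final answer
    return "max steps reached"
```

The step cap guards against a model that keeps requesting tools without ever producing an answer.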

License

MIT
