Semantic grep for your codebases. Indexes a repository into a local LanceDB store using an OpenAI-compatible embeddings endpoint, then lets you query via the CLI, an HTTP server, or as an MCP server.
- Go 1.25.4+ (see `go.mod`), with CGO enabled (requires a C toolchain such as `gcc`).
- Bundled LanceDB native library for Linux amd64 (`lib/linux_amd64`) and headers (`include/`).
- An embeddings endpoint that implements the OpenAI `/v1/embeddings` API, configured via `~/.config/semchan/config.json`.
- Git (recommended; used for repo root detection and `.gitignore` support).
- Semantic search with grep-style output, JSON output, or one-line summaries.
- Incremental indexing: tracks file hashes and only re-embeds changed files.
- Respects `.gitignore` and skips common build/dependency folders.
- Local storage in XDG cache (typically `~/.cache/semchan/lancedb/<store-id>`).
- Config file in `~/.config/semchan/config.json` (created by `semchan setup`).
- Multiple interfaces: CLI (`search`), HTTP (`serve`), and MCP (`mcp`).
Output of `semchan --help`:

```
Semantic grep for your codebases

Usage:
  semchan [flags]
  semchan [command]

Available Commands:
  completion  Generate the autocompletion script for the specified shell
  doctor      Run diagnostics
  help        Help about any command
  index       Index or reindex the repository
  list        List stores in cache
  mcp         Run semantic-chan as an MCP server
  search      Semantic search
  serve       Start background HTTP server
  setup       Initialize cache and verify dependencies

Flags:
      --batch-size int   embedding batch size (override; lower to avoid llama-server batch limits)
  -h, --help             help for semchan
      --model string     override embedding model name
      --store string     override store id

Use "semchan [command] --help" for more information about a command.
```
Build the CLI (uses the bundled LanceDB library):

```sh
make semchan
./semchan --help
```

Optional: install to `~/.local/bin/semchan`:

```sh
make install
```

Initialize your local setup (creates a default config file if missing):

```sh
./semchan setup
```

Edit `~/.config/semchan/config.json` to point at your embeddings endpoint:

```json
{
  "embeddings": {
    "base_url": "http://localhost:11434/v1",
    "model": "jina-embeddings-v4-text-code",
    "api_key": "",
    "batch_size": 32,
    "timeout_seconds": 60
  }
}
```
Verify dependencies and your embedding configuration:

```sh
./semchan doctor
```

Index and search your repo:

```sh
./semchan index
./semchan search "dead letter queue"
# shorthand: running semchan without a subcommand performs a search
./semchan "dead letter queue"
```

Run tests:
```sh
make test
```

Optional: run the HTTP API:
```sh
./semchan serve --port 8095
curl -sS -X POST http://localhost:8095/search \
  -H 'Content-Type: application/json' \
  -d '{"query":"structured logging","max_results":20,"path":""}'
```

Optional: run as an MCP server (stdio transport):

```sh
./semchan mcp --transport stdio
```
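In stdio mode, an MCP client speaks JSON-RPC 2.0 over the process's stdin/stdout. The first message a client sends looks roughly like this (shape per the Model Context Protocol spec; shown for orientation only, the client name and version are placeholders):

```json
{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"example-client","version":"0.1"}}}
```

MCP-aware tools (editors, agents) handle this handshake for you; you normally just point them at the `semchan mcp --transport stdio` command.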