An MCP (Model Context Protocol) server that exposes Memtech's memory-ASIC engineering knowledge base to AI assistants. Connects an MCP-capable client (Claude Code, Claude Desktop, Cursor, etc.) to the Memtech database holding RTL, lab logs, DFM rules, and silicon design knowledge that has been ingested through Memtech's platform.
Status: v0.1.0 — early access. This release ships one tool (`search_memtech_kb`). Four additional tools are coming in v0.2 (see Roadmap).
This server lets an AI assistant search Memtech's engineering corpus using natural language. Ask Claude "show me the HBM3 controller code that handles read latency timing" and the assistant calls this server, which embeds the query, searches the configured collection, and returns the top-matching chunks with their metadata. The assistant then grounds its answer in the retrieved chunks instead of hallucinating from generic training data.
The server is intentionally a thin shim: no LLM logic, no business rules, just retrieval. This is the layer of the Memtech platform that customers can audit, fork, and self-host. The reasoning, eval harness, and corpus management live in Memtech's platform; this MCP server connects to them.
This repository is for developers integrating Memtech into an AI workflow. There are two main audiences:
- Application developers building memory-IC engineering tools that should call Memtech as part of an LLM-driven workflow. You will run this server locally (or as a Docker container) and register it with your MCP-capable client. The Apache-2.0 license lets you fork, modify, and redistribute.
- Platform teams at organizations evaluating Memtech for a self-hosted deployment. You want to read the source, validate the architecture, and run integration tests against your own database deployment before recommending Memtech to your engineering teams.
If you are an end user — a memory-ASIC engineer who wants to ask Memtech questions through your AI chat — you do not run this server directly. Your organization's platform team will configure it on your behalf, or you will use Memtech's hosted endpoint when it goes GA. This repository is for the developers who set up the connection.
Before you can run this server, you need:
- Memtech-issued credentials. Memtech provisions everything you need to run the server: a hosted vector store (with your tenant collection populated and ready) and a matching AI embedding API key. Both are delivered together as part of your evaluation or self-hosted deployment package — contact support@memtech.ai to request access. Do not substitute a key from another account: the embedding model must match the one used to build the collection, or searches will return silently incorrect results.
- Python 3.11 or newer, or Docker if you prefer the containerized deployment.
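A mismatched embedding key often shows up as a vector whose dimensionality differs from the collection's. A minimal sanity check you could run yourself (a sketch; `check_embedding_dim` and its arguments are illustrative, not part of this server):

```python
def check_embedding_dim(vector: list[float], expected_dim: int) -> None:
    """Fail fast if the embedding doesn't match the collection's dimension.

    A wrong-but-same-dimension model would still pass this check, so treat
    it as a first-line guard, not proof the key is correct.
    """
    if len(vector) != expected_dim:
        raise ValueError(
            f"Embedding has {len(vector)} dimensions, but the collection "
            f"expects {expected_dim}. Check MEMTECH_EMBED_API_KEY."
        )

# Example: a collection built with 1536-dim embeddings (the dim shown in
# the startup log below) accepts a 1536-dim query vector.
check_embedding_dim([0.0] * 1536, expected_dim=1536)  # passes silently
```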
A note on credential scope. The database API key Memtech issues you is a JWT scoped to read-only access on your collection only. It cannot list other collections, modify data, or see other tenants' chunks. This is enforced at the database layer, so the protection holds even if a customer extracts the credentials from `.env`. You don't need to do anything to opt in — it's the default for all customer credentials.
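If you want to confirm the scope claims yourself, a JWT's payload segment is just base64url-encoded JSON. A sketch using a made-up token: the claim names (`access`, `collection`) are hypothetical, and your actual token's claims may differ.

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a fake header.payload.signature token with hypothetical claims.
fake_payload = base64.urlsafe_b64encode(
    json.dumps({"access": "r", "collection": "customerA__HBM3"}).encode()
).decode().rstrip("=")
token = f"eyJhbGciOiJIUzI1NiJ9.{fake_payload}.sig"

print(jwt_claims(token)["access"])  # → r
```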
These five steps take you from a fresh clone to a registered, working MCP server. Each step says what to type and what to expect to see.
```
git clone https://github.com/California-Memtech/mcp.git memtech-mcp
cd memtech-mcp
```

uv is recommended — it creates the virtual environment and installs everything in one command:

```
uv sync
```

If you don't have uv, the equivalent with stock Python:

```
python -m venv .venv
# Windows (PowerShell): .\.venv\Scripts\Activate.ps1
# macOS/Linux: source .venv/bin/activate
pip install -e .
```

Then activate the venv. Windows (PowerShell):

```
.\.venv\Scripts\Activate.ps1
```

macOS/Linux:

```
source .venv/bin/activate
```

Heads-up — wrong-venv gotcha. If your shell already had a different venv activated (from another project), the prompt will still say `(.venv)` after `cd`-ing here, but `python` resolves to that other venv. Always activate this project's venv explicitly. To verify you're in the right one: `python -c "import sys; print(sys.prefix)"` should print a path that ends in this project's `.venv`.
Copy the example file:

```
# Windows (PowerShell):
Copy-Item .env.example .env
# macOS/Linux:
cp .env.example .env
```

Open `.env` in your editor and paste the four values Memtech sent you:

```
MEMTECH_DB_URL=<from-memtech>
MEMTECH_DB_API_KEY=<from-memtech>
MEMTECH_EMBED_API_KEY=<from-memtech>
MEMTECH_COLLECTION=<from-memtech>
# Collection name follows the pattern <tenant>__<mem_type>
```

Then start the server:

```
python -m memtech_mcp.server
```

You should see (in stderr):
```
[INFO] memtech-mcp.bootstrap: Starting Memtech MCP server v0.1.0
[INFO] memtech-mcp.bootstrap: Target collection: <your-collection>
[INFO] memtech_mcp.qdrant_client: Connected to database: collection=<your-collection>, points=N, dim=1536
[INFO] memtech-mcp.bootstrap: Server ready, waiting for MCP traffic on stdio
```
Then it hangs, waiting for an MCP client to connect over stdio. That's correct. Press Ctrl+C to exit. If you see anything else, jump to Troubleshooting.
The repo ships a one-shot setup script. From the project root:
Windows (PowerShell):

```
.\setup-mcp.ps1
```

macOS/Linux:

```
bash setup-mcp.sh
```

That's it. The script:
- Auto-detects the absolute path to your venv's Python.
- Removes any prior `memtech-kb` registration (so it's safe to re-run).
- Runs `claude mcp add memtech-kb -s user -- <python> -m memtech_mcp.server`.
- Verifies with `claude mcp list`.
The registration is persistent — `-s user` writes it to `~/.claude/mcp.json`, so the server is available in every Claude Code session, in every directory, until you explicitly remove it. You only run this script once per machine.
To unregister later:

```
claude mcp remove memtech-kb -s user
```

To re-register (e.g. after moving the project): just re-run `setup-mcp.ps1` / `setup-mcp.sh`.
Run the command the script would have run, replacing <PROJECT_PATH> with the absolute path to this project:
```
# Windows (PowerShell):
claude mcp add memtech-kb -s user -- "<PROJECT_PATH>\.venv\Scripts\python.exe" -m memtech_mcp.server

# macOS/Linux:
claude mcp add memtech-kb -s user -- "<PROJECT_PATH>/.venv/bin/python" -m memtech_mcp.server
```

Why the absolute path: it pins the interpreter to this project's venv, so the registration works regardless of which directory or venv your shell is in when Claude Code starts the server. The `.env` file is found automatically — it's resolved relative to the project root, not the caller's cwd.
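One common way to implement project-root `.env` resolution is to anchor on the module's own location rather than the process cwd. A sketch, not necessarily this server's exact code:

```python
from pathlib import Path

def project_env_path(module_file: str) -> Path:
    """Resolve <project>/.env from a module at <project>/memtech_mcp/server.py,
    independent of the caller's current working directory."""
    return Path(module_file).resolve().parent.parent / ".env"

# Inside memtech_mcp/server.py one would pass __file__; the path below is
# purely illustrative.
print(project_env_path("/opt/memtech-mcp/memtech_mcp/server.py"))
# on POSIX: /opt/memtech-mcp/.env
```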
For Claude Desktop, Cursor, GitHub Copilot, Gemini CLI, and other clients, see docs/CLIENT_SETUP.md. Each client has its own config file format — the underlying command is the same as what setup-mcp.ps1 / setup-mcp.sh build.
Open your MCP client's chat (the Claude Code side panel, Claude Desktop, etc.) and ask a memory-engineering question:
"Show me the controller code that handles read latency timing."
Claude will recognize the query as something search_memtech_kb can answer and call the tool automatically. You'll see:
- A tool-call banner showing the query Claude generated.
- A first-time approval prompt — approve it (or use `/permissions` to grant standing approval).
- A JSON response listing ranked chunks (`score`, `file_path`, `symbol`, `chunk_text`).
- A natural-language answer grounded in those chunks.
To confirm the tool is registered before chatting, ask: "What tools do you have available?" — search_memtech_kb should appear.
The server uses FastMCP for the MCP protocol layer. On a tool call, it embeds the query with the Memtech-provisioned embedding service, searches the configured collection in the Memtech database, and returns the top-K matching chunks with their metadata (file path, symbol, source type, classification). Credentials and the target collection name come from environment variables — no secrets in code, no defaults that might leak data across deployments. Full details in docs/ARCHITECTURE.md.
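The retrieval flow described above can be sketched in a few lines of Python. The embedding and vector-search calls are stubbed out with plain callables; the function names, file paths, and result shape are illustrative, not this server's actual API:

```python
from typing import Callable

def search_kb(
    query: str,
    embed: Callable[[str], list[float]],  # query text -> embedding vector
    vector_search: Callable[[list[float], int], list[dict]],  # vector, k -> hits
    top_k: int = 5,
) -> list[dict]:
    """Embed the query, search the collection, return ranked chunks."""
    vector = embed(query)
    hits = vector_search(vector, top_k)
    # Each hit carries the metadata the README describes:
    # score, file_path, symbol, chunk_text
    return sorted(hits, key=lambda h: h["score"], reverse=True)

# Stub backends standing in for the embedding service and the database:
fake_embed = lambda text: [float(len(text))]
fake_store = [
    {"score": 0.91, "file_path": "rtl/rd_latency.v", "symbol": "rd_timing", "chunk_text": "..."},
    {"score": 0.73, "file_path": "rtl/refresh.v", "symbol": "ref_ctrl", "chunk_text": "..."},
]
fake_search = lambda vec, k: fake_store[:k]

results = search_kb("read latency timing", fake_embed, fake_search, top_k=2)
print(results[0]["file_path"])  # → rtl/rd_latency.v
```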
| Tool | Description | Status |
|---|---|---|
| `search_memtech_kb` | Semantic search over the configured Memtech collection. Returns ranked chunks with text, file path, symbol, and similarity score. | ✅ Available |
| `predict_yield` | Yield-rate prediction with failure-mechanism analysis. | 🚧 v0.2 |
| `analyze_root_cause` | Root-cause hypothesis ranking from a symptom description. | 🚧 v0.2 |
| `suggest_rtl_patch` | Patch suggestions for RTL bugs based on lab evidence. | 🚧 v0.2 |
| `generate_ate_plan` | ATE (automated test equipment) plan generation. | 🚧 v0.2 |
The v0.2 tools call reasoning endpoints rather than raw retrieval and require the Memtech platform's gateway to be in place. They are stubbed in `memtech_mcp/tools/` with `NotImplementedError` for forward compatibility.
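A forward-compatibility stub in that style might look like the following sketch (the parameter names are made up, not the repo's literal signatures):

```python
def predict_yield(process_node: str, lot_data: dict) -> dict:
    """Stubbed until the v0.2 platform gateway lands.

    Kept in the tool surface so clients see a stable set of names;
    calling it before v0.2 is an error by design.
    """
    raise NotImplementedError(
        "predict_yield requires the Memtech platform gateway (v0.2)."
    )

try:
    predict_yield("1a-nm DRAM", {})
except NotImplementedError as exc:
    print(exc)  # → predict_yield requires the Memtech platform gateway (v0.2).
```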
This is v0.1.0 — minimal, working, suitable for evaluation deployments and as the foundation for v0.2.
- ✅ Working `search_memtech_kb` over a hosted vector-store + embedding-service stack
- ✅ Unit tests with mocked dependencies; integration smoke test gated on credentials
- ✅ Apache-2.0 licensed
- ✅ Multi-stage Dockerfile for containerized deployments
- ⏳ Memtech platform gateway integration (v0.2)
- ⏳ The remaining four tools (v0.2)
- ⏳ Pre-built Docker images on a public registry (v0.2)
See docs/ROADMAP.md for the detailed plan.
The five most common failures, in roughly the order beginners hit them:
The dependencies aren't installed in the Python that's running. Two likely causes:
- You skipped `uv sync` — re-run it from the project directory.
- A different venv is activated — even though the prompt may say `(.venv)`, it points at another project's interpreter. Run `python -c "import sys; print(sys.prefix)"`; the printed path should end in this project's `.venv`. If not, `deactivate`, then activate this project's venv: `.\.venv\Scripts\Activate.ps1` on Windows, `source .venv/bin/activate` on macOS/Linux.
Your `.env` is missing a value (or doesn't exist). The error names the exact variables. Copy `.env.example` to `.env` and fill in what Memtech sent you. Note: the server resolves `.env` from the project root, so the file must literally be at `<project>/.env` — not at your shell's current directory.
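A fail-fast validator that names the exact missing variables could look like this sketch; the variable list matches `.env.example`, while the helper name is made up:

```python
import os

REQUIRED = (
    "MEMTECH_DB_URL",
    "MEMTECH_DB_API_KEY",
    "MEMTECH_EMBED_API_KEY",
    "MEMTECH_COLLECTION",
)

def validate_env(env=os.environ) -> None:
    """Raise one error listing every missing or empty required variable."""
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing required settings: {', '.join(missing)}")

validate_env({k: "x" for k in REQUIRED})  # passes silently
```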
The collection value is missing the double-underscore separator. Memtech's collection names look like `customerA__HBM3` or `internal_test001__DDR2` — the tenant and the memory type joined by `__`. Check the deployment package Memtech sent you for the exact name and copy it verbatim.
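The `<tenant>__<mem_type>` check is a simple split on the first double underscore. A sketch of what such a validator might do (the function name is illustrative):

```python
def split_collection(name: str) -> tuple[str, str]:
    """Split '<tenant>__<mem_type>' on the first double underscore."""
    tenant, sep, mem_type = name.partition("__")
    if not sep or not tenant or not mem_type:
        raise ValueError(
            f"Collection {name!r} must look like '<tenant>__<mem_type>'"
        )
    return tenant, mem_type

# Single underscores inside the tenant are fine; only '__' separates.
print(split_collection("internal_test001__DDR2"))  # → ('internal_test001', 'DDR2')
```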
The validator passed but the database doesn't have a collection by that name. Most often a typo (e.g. you wrote `_test` instead of `_test001`) or you stripped the `__<mem_type>` suffix. The error log line above the traceback shows the literal name the server tried to look up — compare it character-for-character to what Memtech sent. If you're sure they match and you still get 404, contact support.
Diagnose in this order:
- Run the server manually first: `python -m memtech_mcp.server` from the project root with the venv active. If it errors, fix that first — the MCP client can't run something that doesn't run standalone.
- Check the registered command: `claude mcp list` should show the absolute path to your venv's Python. If it shows just `python`, re-register using the absolute-path form from step 6.
- Restart your MCP client (Claude Code, Claude Desktop, etc.) so it re-reads the registration.
- Check the client's MCP log file for the exact launch error. Path is client-specific — see `docs/CLIENT_SETUP.md`.
You used `claude mcp add-json` from PowerShell — PowerShell strips the inner double quotes from the JSON arg before passing it to `claude.exe`, so the CLI receives malformed JSON. Use the simpler `claude mcp add ... --` form shown in step 6 of the Quick Start; it has no JSON to mangle.
- `docs/ARCHITECTURE.md` — Why the code is shaped this way and how it relates to the rest of the Memtech platform
- `docs/CLIENT_SETUP.md` — Per-client setup (Claude Code, Claude Desktop, Cursor, Copilot, Gemini)
- `docs/ROADMAP.md` — v0.2, v1.0, and beyond
- `docs/CONTRIBUTING.md` — Development workflow, testing, code style
Apache License 2.0 — see LICENSE.
Copyright © 2026 California Memtech and Contributors. All rights reserved.
The Apache-2.0 license is a deliberate choice. It allows external organizations to fork and self-host this MCP shim while permitting Memtech to keep the rest of the platform (gateway, eval harness, re-ranker, audit logging, corpus management) proprietary in separate repositories. The shim is the contract between Memtech and the LLM ecosystem; the platform's reasoning and operations layer is Memtech's accumulated engineering moat.