# toolcli

Advanced Programmatic Tool Calling (APTC) — agent-first, MCP-native.

`[PRE-CALL] → [IN-CALL] → [POST-CALL]`
Anthropic shipped tool calling. The community shipped a thousand MCP servers. Agents shipped one-shot calls that dump raw tool output back into context — embeddings, file bytes, paginated lists, the works.
`toolcli` closes the gap with three explicit phases — prepare, call, shape — executed inside a Python sandbox. Bulk data stays on disk; only a compact ASCII summary reaches the agent. Compression on real workloads: 10×–100× vs raw tool dumps.
Run Python wrappers that call MCP tools, filter/process the output, and return only what matters. One CLI surface. Every agent (Claude Code, Codex, Cursor, Gemini CLI…) auto-detects the skill.
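The 10×–100× claim is easy to sanity-check with back-of-the-envelope arithmetic (illustrative numbers, not a benchmark):

```python
# Rough arithmetic behind the 10x-100x claim (illustrative, not measured).
# Even a tiny 4 x 1536 float64 embedding matrix dwarfs a compact summary.
raw_bytes = 4 * 1536 * 8        # binary size; a JSON dump would be larger still
summary_bytes = 250             # a short ASCII summary of the same call
print(raw_bytes // summary_bytes)   # 196
```

Bigger payloads (file bytes, paginated lists) push the ratio further up.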
```sh
# 1. The CLI
npm i -g toolcli

# 2. The Python sandbox runtime (one-time)
curl -LsSf https://astral.sh/uv/install.sh | sh

# 3. The agent skill (any agent that supports the universal skill format)
npx skills add octaviusp/toolcli
```

Then bring your MCPs (one-time):
```sh
# Inherit every MCP server you've already wired in Claude Code:
toolcli mcp import ~/.claude/.mcp.json

# Or add them one at a time:
toolcli mcp add github https://api.github.com/mcp \
  --header "Authorization: Bearer $GITHUB_TOKEN"
toolcli mcp add gpt --env "OPENAI_API_KEY=\${OPENAI_API_KEY}" \
  -- uv run /abs/path/gpt-services-mcp/server.py
```

Verify:
```sh
toolcli list   # see capabilities
toolcli help   # full agent guide
```

Smoke test:

```sh
toolcli code "result = {'msg': 'hello from the sandbox', 'two_plus_two': 2+2}"
```

```
OK
RESULT
────────────────────────────────────────────────────────────
(object)
  msg: hello from the sandbox
  two_plus_two: 4
```
The classic case: embed a folder, save the matrix, return only a summary. Requires the `gpt-services-mcp` server registered as `gpt`.
```sh
toolcli code --with numpy --cwd ./out <<'PY'
texts = ['cats', 'dogs', 'physics', 'chemistry']
resp = tools['gpt'].generate_embeddings(
    inputs=texts, model='text-embedding-3-small', full=True,
)

import numpy as np
v = np.array([x['embedding'] for x in resp['items']])
np.save(cwd / 'vectors.npy', v)

result = {'count': len(texts), 'dim': v.shape[1],
          'usage': resp.get('usage')}
PY
```

Output (≈ 250 bytes; the 4 × 1536-float matrix stays on disk):
```
OK
RESULT
────────────────────────────────────────────────────────────
(object)
  count: 4
  dim: 1536
  usage:
    prompt_tokens: 8
    total_tokens: 8

ARTIFACTS
────────────────────────────────────────────────────────────
created /abs/out/vectors.npy
```
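The shape step can be tried without any MCP server. A stdlib-only sketch with made-up embeddings — the `resp` shape mirrors the example above, but the numbers are fake and JSON stands in for `.npy`:

```python
import json
import pathlib
import tempfile

cwd = pathlib.Path(tempfile.mkdtemp())          # stand-in for toolcli's cwd
# Fake response in the same shape the wrapper above expects.
resp = {"items": [{"embedding": [0.1] * 1536} for _ in range(4)],
        "usage": {"prompt_tokens": 8, "total_tokens": 8}}

vectors = [x["embedding"] for x in resp["items"]]
(cwd / "vectors.json").write_text(json.dumps(vectors))   # bulk stays on disk

# Only this compact summary would reach the agent.
result = {"count": len(vectors), "dim": len(vectors[0]),
          "usage": resp.get("usage")}
print(result)
```

The pattern is always the same: bulk to disk, summary to `result`.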
Inside any wrapper, three globals are injected:

| name | type | meaning |
|---|---|---|
| `tools` | namespace | `tools['srv'].method(**kwargs)` |
| `payload` | dict | parsed from `--payload '<json>'` |
| `cwd` | `pathlib.Path` | working dir (fresh tempdir by default) |
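To see how the globals fit together without running toolcli, here is a stand-alone simulation — the stub values only exist so the snippet runs on its own; in a real wrapper all three names are pre-defined:

```python
import json
import pathlib
import tempfile

# Simulated injections -- toolcli provides these for real.
payload = json.loads('{"limit": 2}')        # from --payload '{"limit": 2}'
cwd = pathlib.Path(tempfile.mkdtemp())      # fresh tempdir by default

items = ["alpha", "beta", "gamma"]          # pretend this came from a tool call
(cwd / "items.json").write_text(json.dumps(items))   # bulk to disk

result = {"kept": items[: payload["limit"]], "saved_to": str(cwd)}
print(result["kept"])   # ['alpha', 'beta']
```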
`tools['srv'].method(**kw)` works for identifier-safe tool names. For hyphenated names (most MCP servers), use the double-bracket form: `tools['srv']['hyphen-name'](**kw)`.
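One way the dual attribute/bracket access could be modeled — a sketch of the access pattern, not toolcli's actual implementation:

```python
class ToolNamespace:
    """Sketch: allow both tools['srv'].method() and tools['srv']['hyphen-name']()."""

    def __init__(self, funcs):
        self._funcs = funcs

    def __getattr__(self, name):          # identifier-safe names
        try:
            return self._funcs[name]
        except KeyError:
            raise AttributeError(name)

    def __getitem__(self, name):          # hyphenated names
        return self._funcs[name]

tools = {"srv": ToolNamespace({
    "echo": lambda **kw: kw,
    "hyphen-name": lambda **kw: {"called": "hyphen-name", **kw},
})}

print(tools["srv"].echo(x=1))             # {'x': 1}
print(tools["srv"]["hyphen-name"](x=2))   # {'called': 'hyphen-name', 'x': 2}
```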
Return the final value via `result = …` or the last top-level expression. Bulk data must stay inside the wrapper — save to `cwd` and return only paths/metrics/summaries.
- `tools` is a magic injected namespace — it exists ONLY inside `toolcli code` / `toolcli script execute`. Never write a `.py` file and run it with `python` — `tools` won't be defined.
- Each call is a fresh sandbox. Variables don't survive between calls. A multi-step pipeline = ONE wrapper.
- Discovery loop when you don't know a tool's shape:

```sh
toolcli list --live                                          # all tools
toolcli mcp inspect <srv>.<tool>                             # INPUT schema
toolcli code "r=tools['x'].fn(...); result=list(r.keys())"   # probe OUTPUT
```
```
toolcli code "<py>" | -f <file> | <<'PY' ... PY
    [--payload <json>] [--with pkg]... [--cwd dir] [--timeout s] [--json]

toolcli mcp add <name> <url> [--header "K: V"]...        # http
toolcli mcp add <name> [--env K=V]... -- <cmd> [args...] # stdio
toolcli mcp import <path>                                # bulk
toolcli mcp inspect <srv>.<tool> | mcp list | mcp remove <name>

toolcli list [--live] | list mcp | list scripts | list tools

toolcli script add <name> "<desc>" ("<py>"|-f <file>|--stdin) [--deps a,b]
toolcli script execute <name> [--payload <json>] [--cwd dir] [--json]
toolcli script show <name> | script list | script remove <name>

toolcli help        # full agent reference
toolcli help <cmd>  # detail for one command
```
Lives in `~/.toolcli/` (created with `0700` permissions):

- `.mcp.json` — MCP servers (Claude Code-compatible shape; commit-safe if you use `${VAR}` placeholders for secrets, expanded at load time). File mode is `0600`.
- `scripts.json` — index of cached scripts.
- `scripts/<name>.py` — script bodies.
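The permission scheme is easy to verify in isolation; a throwaway-directory sketch (not touching the real `~/.toolcli/`):

```python
import pathlib
import stat
import tempfile

root = pathlib.Path(tempfile.mkdtemp()) / "toolcli-home"   # stand-in dir
root.mkdir(mode=0o700)                  # like ~/.toolcli/
cfg = root / ".mcp.json"
cfg.touch(mode=0o600)                   # like the stored MCP config

print(oct(stat.S_IMODE(root.stat().st_mode)))   # 0o700
print(oct(stat.S_IMODE(cfg.stat().st_mode)))    # 0o600
```

Owner-only modes keep tokens in the config readable by you alone.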
| Symptom | Cause / fix |
|---|---|
| `toolcli requires uv to run Python sandboxes` | Install uv: `curl -LsSf https://astral.sh/uv/install.sh \| sh` |
| `tools` is undefined inside a `.py` you ran with `python` | The sandbox only injects `tools` inside `toolcli code` / `script execute`. Don't run wrappers with `python` directly. |
| `No module named 'numpy'` (or any dep) | Pass it via `--with`: `toolcli code --with numpy ...`. For cached scripts, declare deps in `--deps` at registration time. |
| Image/video MCP call hangs at 120s | Default timeout is 120s. Override: `--timeout 300`. |
| `${VAR}` not expanded in MCP config | Make sure the env var is exported in the shell where you run `toolcli code`. Storage keeps the placeholder so the file is commit-safe. |
| HTTP MCP returns 401 | Pass auth via `--header "Authorization: Bearer $TOKEN"` (NOT `--env`). |
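The `${VAR}` placeholder behavior can be sketched in a few lines — an approximation of the idea, not toolcli's actual loader:

```python
import os
import re

def expand_placeholders(text: str) -> str:
    # Replace ${VAR} with the current environment value; leave unset
    # variables untouched so a missing export is easy to spot.
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: os.environ.get(m.group(1), m.group(0)),
                  text)

os.environ["GITHUB_TOKEN"] = "tok-123"                 # pretend it is exported
stored = '{"Authorization": "Bearer ${GITHUB_TOKEN}", "X": "${UNSET_VAR}"}'
print(expand_placeholders(stored))
# {"Authorization": "Bearer tok-123", "X": "${UNSET_VAR}"}
```

The stored file keeps the placeholders, so committing it never leaks the token.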
The single most expensive thing an agent can do is dump a raw tool response back into context. toolcli makes the right thing easy: write a tiny wrapper that filters and shapes data inside a sandbox, return a one-line summary. The agent stays sharp, the cost stays low.
MIT © Octavio Pavon