I build infrastructure for AI coding agents.
Currently working on AgentProxy — a transparent token compression proxy that sits between your agent and the Anthropic/OpenAI API. It compresses tool results before they hit the context window, so your agent solves more problems and costs less to run.
- +40% more problems solved on SWE-bench-style tasks
- −25% token cost on real coding workloads
- Zero code changes — just prefix your command with `agentproxy run`

```shell
pip install agentproxy
agentproxy run claude
```

Other projects:

- cc-automode — Standalone re-implementation of Claude Code auto mode with Docker benchmark suite.
- intro-to-agentic-security — An introduction to agentic AI security.
- CloudEval-YAML — Benchmarking LLMs for cloud config generation. ☁️
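The core idea behind the proxy — shrink oversized tool results before they reach the model's context window — can be sketched in a few lines. This is an illustrative sketch only, not AgentProxy's actual API: the names `compress_tool_result`, `compress_messages`, and the `MAX_CHARS` budget are all hypothetical, and a head-and-tail truncation stands in for whatever compression the real proxy applies.

```python
MAX_CHARS = 2000  # hypothetical per-result budget, not AgentProxy's real setting


def compress_tool_result(text: str, max_chars: int = MAX_CHARS) -> str:
    """Keep the head and tail of an oversized tool result, eliding the middle."""
    if len(text) <= max_chars:
        return text
    half = max_chars // 2
    elided = len(text) - max_chars
    return text[:half] + f"\n... [{elided} chars elided] ...\n" + text[-half:]


def compress_messages(messages: list[dict]) -> list[dict]:
    """Compress every tool-role message before forwarding the request upstream."""
    out = []
    for m in messages:
        if m.get("role") == "tool":
            m = {**m, "content": compress_tool_result(m["content"])}
        out.append(m)
    return out
```

A transparent proxy would run `compress_messages` on each outbound request body and pass everything else through untouched, which is why no agent-side code changes are needed.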
