Never lose your AI context when token limits hit.
DevFlow has two modes—use one or both:
| Use Case | Setup | How |
|---|---|---|
| Context recovery (Cursor, Copilot, Claude Code, any AI) | Zero | Run `npx github:abhishek5878/devflow` → paste into Claude.ai (copies to clipboard) |
| Proxy routing (Continue.dev, Roo Code, Cline) | Add API key, start proxy | Auto-failover across Claude, GPT-4, Gemini when one hits its limit |
See `INSTALL.md` for step-by-step setup. Extension: Command Palette → `DevFlow: Get Started`.
Landing page: `landing/index.html` — deploy to Vercel, Netlify, or GitHub Pages. Preview: `npm run landing:serve` (http://localhost:3001).
DevFlow runs an OpenAI-compatible endpoint at `http://localhost:8080/v1` that:
- Routes requests across multiple AI providers (Claude, GPT-4, Gemini)
- Tracks exact token usage
- Silently fails over on HTTP 429 (rate limit) without interrupting the developer (see the sketch below)
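A minimal sketch of that failover loop, assuming an ordered provider list and OpenAI-style HTTP calls (the `Provider` shape, the base-URL variables, and `completeWithFailover` are illustrative, not DevFlow's actual internals):

```typescript
// Illustrative failover loop; DevFlow's real provider interface may differ.
type Provider = { name: string; baseUrl: string; apiKey: string };

const providers: Provider[] = [
  { name: "claude", baseUrl: process.env.CLAUDE_BASE_URL ?? "", apiKey: process.env.ANTHROPIC_KEY ?? "" },
  { name: "gpt-4", baseUrl: process.env.OPENAI_BASE_URL ?? "", apiKey: process.env.OPENAI_KEY ?? "" },
];

async function completeWithFailover(body: unknown): Promise<Response> {
  for (const p of providers) {
    const res = await fetch(`${p.baseUrl}/chat/completions`, {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${p.apiKey}` },
      body: JSON.stringify(body),
    });
    if (res.status !== 429) return res; // not rate-limited: hand the response back
    // 429: fall through silently and try the next provider
  }
  throw new Error("All providers are rate-limited");
}
```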
```bash
# Install
npm install

# Configure (optional - copy .env.example to .env)
cp .env.example .env
# Add OPENAI_KEY, ANTHROPIC_KEY, etc.

# Run
npm run dev
```

Point any OpenAI-compatible client at DevFlow:
```json
{
  "ai.endpoint": "http://localhost:8080/v1",
  "ai.apiKey": "devflow-local"
}
```

Works with: Continue.dev, Roo Code, Cline, and any tool that accepts custom OpenAI endpoints. With OpenClaw, the DevFlow plugin adds `devflow_context_snapshot` for AI handoff from the agent.
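As a concrete client call, here is a plain `fetch` against the local proxy (the model name is illustrative; `GET /v1/models` returns the real list):

```typescript
// Chat completion through the local DevFlow proxy.
const res = await fetch("http://localhost:8080/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer devflow-local",
  },
  body: JSON.stringify({
    model: "claude", // illustrative; see GET /v1/models for available names
    messages: [{ role: "user", content: "Summarize this repo's build steps." }],
  }),
});
console.log(await res.json());
```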
To watch the failover happen with mock providers:

```bash
# Terminal 1: Start with mock providers
DEVFLOW_MOCK_LIMITS=1 npm run dev

# Terminal 2: Run limit exhaustion test
npm run test:limits
```

You'll see Claude "exhaust" after ~2-3 requests, then silent failover to GPT-4.
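A mock along these lines is all the demo needs (an illustrative sketch, not the actual `DEVFLOW_MOCK_LIMITS` implementation):

```typescript
// Illustrative: simulate a provider that rate-limits after a couple of requests.
let claudeRequests = 0;

function mockClaudeStatus(): number {
  claudeRequests += 1;
  return claudeRequests > 2 ? 429 : 200; // "exhausts" after ~2-3 requests, triggering failover
}
```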
To run the test suites:

```bash
npm test              # Build, context, extension
npm run test:all      # Above + proxy failover + client (starts proxy automatically)
npm run test:userflow # End-to-end user flow (Context Snapshot + Proxy)
```

For a manual proxy test:

```bash
DEVFLOW_MOCK_LIMITS=1 npm run dev # terminal 1
npm run test:proxy                # terminal 2
```

For a manual client test:

```bash
npm run dev         # terminal 1
npm run test:client # terminal 2 (requires OPENAI_KEY or ANTHROPIC_KEY)
```

The proxy exposes these endpoints:

| Method | Path | Description |
|---|---|---|
| POST | `/v1/chat/completions` | OpenAI-compatible chat completions (proxied) |
| GET | `/v1/models` | List available models |
| GET | `/v1/health` | Provider status & token usage |
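From a script, the health endpoint can be polled directly; its response shape isn't documented here, so this sketch just prints the raw JSON:

```typescript
// Print provider status & token usage reported by the DevFlow proxy.
fetch("http://localhost:8080/v1/health")
  .then((res) => res.json())
  .then((status) => console.log(JSON.stringify(status, null, 2)));
```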
The proxy routes across these providers:

- Claude (100k limit)
- GPT-4 (80k limit)
- Gemini (90k limit)
- Ollama (∞, local — run `ollama run llama3.2`)
- Amazon Q, Codeium (∞, when integrated)
At 90% capacity, a warning is logged; at 100%, DevFlow fails over automatically, using Ollama as a free fallback if it's running.
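A sketch of that threshold logic, assuming a per-provider token counter (the `Usage` shape is illustrative):

```typescript
// Illustrative capacity check; DevFlow's internal accounting may differ.
interface Usage {
  name: string;
  used: number;  // tokens consumed so far
  limit: number; // provider token limit, e.g. 100_000 for Claude
}

function shouldFailover(u: Usage): boolean {
  const ratio = u.used / u.limit;
  if (ratio >= 1.0) return true; // 100%: fail over automatically
  if (ratio >= 0.9) console.warn(`${u.name} at ${Math.round(ratio * 100)}% of its token limit`); // 90%: warn
  return false;
}
```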
If you run OpenClaw locally, DevFlow can route through it as another provider.
- Enable the OpenAI-compatible endpoint in OpenClaw (required): add this to `~/.openclaw/openclaw.json` (or via the Control UI Config tab), then restart with `openclaw gateway restart`:

  ```json
  {
    "gateway": {
      "http": {
        "endpoints": {
          "chatCompletions": {
            "enabled": true
          }
        }
      }
    }
  }
  ```

- Point DevFlow at OpenClaw (optional overrides, defaults shown):

  ```bash
  # .env
  DEVFLOW_OPENCLAW_URL=http://localhost:18789/v1
  DEVFLOW_OPENCLAW_MODEL=openclaw-default
  ```

When these are set and OpenClaw is running, DevFlow will treat it like any other provider and fall back to it after Claude/GPT/Gemini, before Ollama.
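To sanity-check the wiring, you can probe the OpenClaw endpoint before relying on the fallback (an illustrative check that assumes the gateway also serves the standard `/models` route; it is not part of DevFlow):

```typescript
// Probe OpenClaw's OpenAI-compatible endpoint.
const base = process.env.DEVFLOW_OPENCLAW_URL ?? "http://localhost:18789/v1";
const alive = await fetch(`${base}/models`).then((r) => r.ok).catch(() => false);
console.log(alive ? "OpenClaw is reachable" : "OpenClaw is not running");
```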
Recovery tool for closed systems (Cursor, GitHub Copilot): generate `context.md` for seamless handoff to any AI.

```bash
npx github:abhishek5878/devflow
```

Default: copies to clipboard. Paste into Claude.ai. Done.

Options: `--no-copy`, `--skip-tsc`, `--target claude|cursor|continue` (paste hint), `--output path`
Output: Git history, stack detection, project rules, "For the Next AI" handoff. One command. Zero friction.
From the repo root:

```bash
npm run ext:setup
```

Or step by step:

```bash
npm run build
cd devflow-vscode && npm install && npm run compile
```

Then open the `devflow-vscode` folder in VS Code and press F5.
- `Cmd+Shift+D` (Mac) / `Ctrl+Shift+D` (Win/Linux) — Generate `context.md`
- Status bar — Proxy status and token usage (click for dashboard)
- Commands — Start Proxy, Stop Proxy, View Dashboard
See `INSTALL.md` for Quick Start, configuration, and troubleshooting.
Package the extension (requires Node 20+):
```bash
cd devflow-vscode && npm install && npm run package
# Creates devflow-0.1.0.vsix
```

On Node 18, use the extension from source with F5 (see Phase 3 above).
- Phase 4: Publish to VS Code Marketplace — see `PUBLISH.md` for steps