feat: add Render deployment blueprint (#43)
Conversation
Add render.yaml with two services: a public Docker-based Next.js frontend and a private Python LangGraph agent. Normalize the LANGGRAPH_DEPLOYMENT_URL to handle Render's bare host:port format, and make MCP server configuration opt-in via env var instead of hardcoding the excalidraw default. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
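The normalization described above can be sketched as follows. This is an illustrative sketch, not the PR's actual code: the function name `normalizeDeploymentUrl` and the choice of `http` as the default scheme are assumptions.

```typescript
// Hypothetical sketch: Render's fromService injection yields a bare
// "host:port" string, while the frontend expects a full URL.
// Function name and default scheme are assumptions, not the PR's code.
function normalizeDeploymentUrl(raw: string): string {
  const trimmed = raw.trim();
  // Already a full URL: leave it untouched.
  if (/^https?:\/\//.test(trimmed)) return trimmed;
  // Bare host:port from Render's private networking: assume plain http.
  return `http://${trimmed}`;
}
```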
Self-Review Findings

Verdict: Block merge — P0=1, P1=1
Open Questions
Sliding-window rate limiter (20 req/min per IP) to prevent individual abuse of the public CopilotKit endpoint. In-memory with periodic cleanup to prevent unbounded Map growth. No new dependencies. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
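The mechanism described above can be sketched as follows. This is a minimal illustration of a sliding-window limiter with periodic cleanup, not the PR's actual `route.ts` code: the names `checkRateLimit` and `cleanup` and the module-level constants are assumptions (a later commit makes the window and cap configurable via `RATE_LIMIT_WINDOW_MS` / `RATE_LIMIT_MAX`).

```typescript
// Hypothetical sketch of a sliding-window rate limiter (names illustrative).
const WINDOW_MS = 60_000;  // 1-minute window
const MAX_REQUESTS = 20;   // per-IP cap within the window

// ip -> timestamps of requests seen inside the current window
const hits = new Map<string, number[]>();

function checkRateLimit(ip: string, now: number = Date.now()): boolean {
  const windowStart = now - WINDOW_MS;
  // Drop timestamps that have slid out of the window.
  const recent = (hits.get(ip) ?? []).filter((t) => t > windowStart);
  if (recent.length >= MAX_REQUESTS) {
    hits.set(ip, recent);
    return false; // rate limited
  }
  recent.push(now);
  hits.set(ip, recent);
  return true;
}

// Periodic cleanup (e.g. via setInterval) so the Map does not grow
// unboundedly with one-off IPs that never return.
function cleanup(now: number = Date.now()): void {
  const windowStart = now - WINDOW_MS;
  for (const [ip, times] of hits) {
    const recent = times.filter((t) => t > windowStart);
    if (recent.length === 0) hits.delete(ip);
    else hits.set(ip, recent);
  }
}
```

The cleanup pass is what keeps this usable as a long-lived in-process limiter; without it, every distinct client IP would leave an entry in the `Map` forever.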
- Switch agent from `langgraph dev` to production Docker image (`langchain/langgraph-api`)
- Add health check endpoint (`/ok`) for agent private service
- Add `turbo.json` to frontend `buildFilter` to prevent stale builds
- Add `Dockerfile.agent` for production agent builds
- Revert `serverId` to `example_mcp_app` for traceability
- `LLM_MODEL` env var in agent (defaults to `gpt-5.4-2026-03-05`)
- `RATE_LIMIT_WINDOW_MS` and `RATE_LIMIT_MAX` env vars (defaults 60s/40 req)
- README callout: strong models required for generative UI (GPT-5.4, Claude Opus 4.6, Gemini 3.1 Pro)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Jerel left a comment
Review
Overall this is well-scoped and production-oriented. A few issues to address before deploying:
Blocker
Missing docker/Dockerfile.app — render.yaml line 25 references dockerfilePath: docker/Dockerfile.app but this file is not included in the PR or the repo. The frontend service will fail to build on Render.
Verify Before Deploy
- `LANGSERVE_GRAPHS` export name — `docker/Dockerfile.agent` line 13 sets `LANGSERVE_GRAPHS='{"sample_agent": "/deps/agent/main.py:graph"}'`, but `main.py` exports `agent`, not `graph`. If `langgraph-api` expects a `graph` attribute, this will fail at runtime.
- Agent health check `/ok` — `render.yaml` line 8 configures `healthCheckPath: /ok`. Does `langchain/langgraph-api` serve this endpoint out of the box? If not, the agent will never pass health checks and Render will keep restarting it.
Minor
- Rate limiter in serverless — the `setInterval` cleanup and in-memory `Map` in `route.ts` work fine for the Render Docker deployment (long-lived process), but are effectively a no-op on Vercel/serverless (ephemeral process, lost on cold start). Worth a comment noting this.
- `.env.example` — missing trailing newline (POSIX convention).
Positives
- MCP opt-in via env var instead of hardcoding excalidraw — good call
- URL normalization for Render's bare `host:port` format is clean
- `buildFilter` correctly scoped per service
- Simple no-dependency rate limiter with cleanup is appropriate for a demo
In-memory rate limiting doesn't scale across multiple instances for high-traffic deployments. Disable by default via RATE_LIMIT_ENABLED env var so it doesn't silently misbehave at scale. Can be re-enabled for single-instance or low-traffic deployments.
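A minimal sketch of the proposed opt-in gate, assuming the `RATE_LIMIT_ENABLED` flag name from this thread (the function name and string-parsing convention are illustrative, not the PR's code):

```typescript
// Hypothetical gate: rate limiting is skipped unless explicitly enabled.
// RATE_LIMIT_ENABLED is the flag proposed in the discussion; treating only
// the exact string "true" as enabled keeps the default safely off.
function isRateLimitEnabled(env: Record<string, string | undefined>): boolean {
  return env.RATE_LIMIT_ENABLED === "true";
}
```

Defaulting to off means a multi-instance deployment never gets a per-instance limiter that silently undercounts; single-instance operators flip one env var to turn it back on.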
PR #43 Review: Render Deployment Blueprint

Status: Approve with minor notes

The PR is well-structured across 5 commits that incrementally add Render deployment support. The earlier self-review findings (F1–F5) have all been addressed in subsequent commits.

What's good
Issues to consider

1. No `LLM_MODEL` in `render.yaml` — the agent service env vars do not include:

   ```yaml
   - key: LLM_MODEL
     value: gpt-5.4-2026-03-05
   ```

2. Rate limiter env vars not in `render.yaml` either (minor) — same as above.
3. The interval is correctly guarded by …
4. Next.js standalone output assumption
Verdict

Good to merge. The deployment architecture is sound, previous review findings have been addressed, and the remaining items are minor configuration niceties.
Wire LLM_MODEL to the agent service and rate limiter env vars to the frontend service so operators don't need to configure them manually in the Render dashboard. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
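A sketch of what that wiring might look like in `render.yaml`. Service names and the `envVars` placement are illustrative assumptions based on Render's blueprint format; only the keys and default values come from the commits above.

```yaml
services:
  - type: web
    name: agent              # private LangGraph agent (name illustrative)
    envVars:
      - key: LLM_MODEL
        value: gpt-5.4-2026-03-05   # default from the commit above
  - type: web
    name: frontend           # public Next.js app (name illustrative)
    envVars:
      - key: RATE_LIMIT_WINDOW_MS
        value: "60000"              # 60s window
      - key: RATE_LIMIT_MAX
        value: "40"                 # 40 requests per window
```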
```dockerfile
ENV LANGSERVE_GRAPHS='{"sample_agent": "/deps/agent/main.py:graph"}'
```
The langgraph-api base image expects the graph object to be importable, but there's no verification that it can actually resolve /deps/agent/main.py:graph at runtime. Worth a quick smoke test before deploying:
```shell
docker build -f docker/Dockerfile.agent -t <image> . && \
  docker run --rm <image> python -c "from main import graph; print(graph)"
```
Summary
- `render.yaml` blueprint with two services: a public Next.js frontend (Docker) and a private LangGraph Python agent
- Normalize `LANGGRAPH_DEPLOYMENT_URL` to handle Render's bare `host:port` format from `fromService`
- `MCP_SERVER_URL` env var (no hardcoded excalidraw default)

Architecture
The agent uses `langgraph dev --no-browser --no-reload` with in-memory storage, suitable for a demo deployment.

Post-deploy steps
- `OPENAI_API_KEY` secret on the agent service
- `LANGSMITH_API_KEY` and `MCP_SERVER_URL`

Test plan
- `pnpm --filter @repo/app build` succeeds
- `uv run langgraph dev --host 0.0.0.0 --port 8123 --no-browser --no-reload`

🤖 Generated with Claude Code