feat!: v4 Cloudflare Workers migration — 2-tool SDK architecture #40

AojdevStudio merged 11 commits into main
Conversation
BREAKING CHANGE: Replaces 6 legacy operation tools (drive, sheets, forms, docs, gmail, calendar) with 2 tools: search (SDK spec query) and execute (sandboxed JS with full googleapis SDK). Adds Cloudflare Workers entry point (worker.ts), WebStandardStreamableHTTPServerTransport, KV token storage, and fetch-based OAuth. Node stdio transport preserved. Co-Authored-By: AOJDevStudio
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough

This pull request represents a major architectural migration from v3.3.0 to v4.0.0-alpha, transitioning the MCP server from a monolithic stdio-based model to a dual-transport architecture supporting both Node.js stdio and Cloudflare Workers HTTP endpoints. The changes introduce a simplified two-tool SDK (search and execute), implement Web Crypto-based token encryption for Workers deployment, refactor the server entrypoint to a minimal CLI dispatcher, and remove the entire GSD (Get Shit Done) agent framework infrastructure.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant Client as Claude/HTTP Client
    participant MCP as MCP Server<br/>(Stdio or Workers)
    participant Auth as Auth Manager<br/>(OAuth/KV)
    participant SDK as SDK Runtime<br/>(Sandbox + Ops)
    participant GoogleAPI as Google APIs<br/>(REST)
    User->>Client: Request Code Execution
    Client->>MCP: POST /mcp (execute tool)
    MCP->>Auth: Get Valid Access Token
    Auth->>GoogleAPI: Refresh Token (if needed)
    GoogleAPI-->>Auth: New Access Token
    Auth-->>MCP: Valid Access Token
    MCP->>SDK: createSDKRuntime(context)
    SDK->>SDK: Load operation specs
    SDK->>MCP: SDKRuntime ready
    Client->>MCP: Call execute(code)
    MCP->>SDK: Execute code in sandbox
    SDK->>SDK: VM context with globals
    SDK->>GoogleAPI: User code calls sdk.drive.search()
    GoogleAPI-->>SDK: Results
    SDK->>SDK: Capture logs + result
    SDK-->>MCP: ExecuteResult{result, logs, error?}
    MCP-->>Client: Tool result
    Client->>MCP: Call search(service, operation)
    MCP->>SDK: Look up in SDK_SPEC
    SDK-->>MCP: OperationSpec details
    MCP-->>Client: Operation metadata
```
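The execute leg of this flow — run agent code in a sandbox, capture logs, return `ExecuteResult{result, logs, error?}` — can be sketched in a few lines of Node. This is a minimal illustration, not the PR's implementation: only the `ExecuteResult` shape comes from the diagram; the async-wrapper and log-capture details here are assumptions.

```typescript
import * as vm from 'node:vm';

interface ExecuteResult {
  result: unknown;
  logs: string[];
  error?: { message: string; stack?: string };
}

// Run agent code inside a vm context, capturing console.log output.
async function execute(code: string): Promise<ExecuteResult> {
  const logs: string[] = [];
  const context = vm.createContext({
    console: { log: (...args: unknown[]) => logs.push(args.map(String).join(' ')) },
  });
  try {
    // Wrap in an async IIFE so agent code can use `await` and `return`.
    const result: unknown = await vm.runInContext(`(async () => { ${code} })()`, context);
    return { result, logs };
  } catch (error) {
    const err = error instanceof Error ? error : new Error(String(error));
    return {
      result: undefined,
      logs,
      error: { message: err.message, ...(err.stack ? { stack: err.stack } : {}) },
    };
  }
}

execute("console.log('listing files'); return 40 + 2;").then((r) => {
  console.log(r.result, r.logs); // 42 [ 'listing files' ]
});
```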
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested labels
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
📊 Type Coverage Report

Type Coverage: 98.74%

This PR's TypeScript type coverage analysis is complete.
Performance Comparison Report

Operation Performance
Memory Usage
Summary
Performance report generated by Claude Code
Actionable comments posted: 15
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (19)
justfile-169-170 (2)
169-170: ⚠️ Potential issue | 🟡 Minor

`cat CLAUDE.md` hard-fails if the file is missing — inconsistent with `start-plan`. Line 174 already uses `2>/dev/null` to suppress missing-file errors in `start-plan`, but `start-ctx` does not. If `CLAUDE.md` is absent, the recipe errors before Claude is even invoked.

Note: `--append-system-prompt` is a valid, documented Claude CLI flag — "use when you want to add specific instructions while keeping Claude Code's default capabilities intact" and "is the safest option for most use cases."

🔧 Proposed fix

```diff
-start-ctx model="opus":
-    {{cc}} --model {{model}} --append-system-prompt "$(cat CLAUDE.md)"
+start-ctx model="opus":
+    {{cc}} --model {{model}} --append-system-prompt "$(cat CLAUDE.md 2>/dev/null)"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@justfile` around lines 169 - 170, The start-ctx recipe uses $(cat CLAUDE.md) which hard-fails when CLAUDE.md is missing; make it behave like start-plan by suppressing missing-file errors—modify the start-ctx invocation (the start-ctx recipe and its --append-system-prompt argument) to read CLAUDE.md safely (e.g., redirect stderr or fall back to an empty string such as using cat CLAUDE.md 2>/dev/null or cat CLAUDE.md 2>/dev/null || true) so the recipe won’t error out if the file is absent while still passing the prompt when present.
justfile-84-93 (2)
84-93: ⚠️ Potential issue | 🟡 Minor

`docker-compose` (v1) is deprecated — use `docker compose` (v2). `docker-compose` v1 was deprecated after June 2023 and has been removed from all future Docker Desktop versions. More critically for CI, Docker Compose v1 was removed from GitHub Actions Ubuntu & Windows runner images, meaning these recipes will break in any CI environment using current runner images.

🔧 Proposed fix

```diff
-docker-up:
-    docker-compose up -d
+docker-up:
+    docker compose up -d

-docker-down:
-    docker-compose down
+docker-down:
+    docker compose down

-docker-logs:
-    docker-compose logs -f
+docker-logs:
+    docker compose logs -f
```

The migration is straightforward: type `docker compose` instead of `docker-compose`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@justfile` around lines 84 - 93, The justfile targets docker-up, docker-down, and docker-logs use the deprecated `docker-compose` CLI; update each target (`docker-up`, `docker-down`, `docker-logs`) to call the modern `docker compose` subcommand (e.g., `docker compose up -d`, `docker compose down`, `docker compose logs -f`) so the recipes work with Docker Compose v2 and current CI runner images.
justfile-70-75 (2)
70-75: ⚠️ Potential issue | 🟡 Minor

`auth` and `serve` silently fail on a clean checkout — missing `build` dependency. Both recipes invoke `node ./dist/index.js` but don't declare `build` as a prerequisite. After `just clean` or on a fresh clone, the recipes will fail at runtime with no helpful error message.

🔧 Proposed fix

```diff
-auth:
-    node ./dist/index.js auth
+auth: build
+    node ./dist/index.js auth

-serve:
-    node ./dist/index.js
+serve: build
+    node ./dist/index.js
```

If the intent is to keep them fast (skip rebuild when dist already exists), consider adding an `@` prefix with a guard check, or just document the prerequisite clearly in the comment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@justfile` around lines 70 - 75, The auth and serve recipes silently fail because they run node ./dist/index.js without depending on the build recipe; update the justfile so the auth and serve recipes declare build as a prerequisite (i.e., make auth and serve depend on the existing build recipe) or, if you want to avoid mandatory rebuilds, use an optional/@-prefixed invocation combined with a guard that checks ./dist exists before running node; reference the auth and serve recipe names and the build recipe when making this change.
src/sdk/sandbox-node.ts-46-58 (2)
46-58: ⚠️ Potential issue | 🟡 Minor

`globalThis: undefined` in the context descriptor does not actually hide `globalThis` inside the sandbox. In Node.js `vm`, the created context object IS `globalThis` for code running in that context. Setting the key `globalThis: undefined` only places a shadowing `undefined` property on the sandbox object; code can still reach the real context via `this` at the top level or via constructor prototype chains. Given the explicit "trusted-user tool" note in the file header this is low severity, but document the known limitation so future contributors don't rely on this as a security boundary.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/sdk/sandbox-node.ts` around lines 46 - 58, The sandbox context currently sets globalThis: undefined in the context descriptor (see the listed globals like process, require, globalThis, etc.), but that does not actually hide the real globalThis inside Node's vm; top-level this and prototype chains can still access the context. Update src/sdk/sandbox-node.ts to add a clear comment next to the context descriptor (or above the sandbox creation) stating this limitation: that setting globalThis to undefined only creates a shadow property and is not a security boundary, and that callers should treat this as a trusted-user tool or use a different isolation approach (e.g., separate processes or node's isolate APIs) if true global hiding is required.
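The limitation flagged above is easy to demonstrate — a sketch assuming a plain `node:vm` context like the one described for `sandbox-node.ts`; the `secret` property is illustrative only:

```typescript
import * as vm from 'node:vm';

// Shadowing `globalThis` with undefined does not hide the sandbox global:
// direct lookups see the shadow, but top-level `this` is still the
// contextified global object.
const sandbox = { globalThis: undefined, secret: 'reachable' };
vm.createContext(sandbox);

const shadowed = vm.runInContext('globalThis', sandbox);
const viaThis = vm.runInContext('this.secret', sandbox);

console.log(shadowed); // undefined — the shadow property
console.log(viaThis); // 'reachable' — the real context, via top-level `this`
```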
package.json-4-4 (2)
4-4: ⚠️ Potential issue | 🟡 Minor

Stale `description` field — still references the 6 removed legacy tools. The description still reads "operation-based progressive disclosure - direct API access to Drive, Sheets, Forms, Docs, Gmail, and Calendar", but this PR removes those 6 tools entirely and replaces them with `search`/`execute`.

✏️ Suggested update

```diff
- "description": "MCP server for Google Workspace with operation-based progressive disclosure - direct API access to Drive, Sheets, Forms, Docs, Gmail, and Calendar",
+ "description": "MCP server for Google Workspace — Code Mode SDK with search and execute tools for Drive, Sheets, Forms, Docs, Gmail, and Calendar",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` at line 4, Update the stale package.json "description" value to reflect the new tooling: replace the legacy mention of "operation-based progressive disclosure - direct API access to Drive, Sheets, Forms, Docs, Gmail, and Calendar" with a concise description that references the new `search` and `execute` tools (or a more general statement about search/execute-based operations) so the package metadata matches the code changes; edit the "description" field in package.json accordingly.
src/sdk/sandbox-node.ts-74-83 (2)
74-83: ⚠️ Potential issue | 🟡 Minor

`error.line` field is never populated despite being part of the `ExecuteResult` interface. `ExecuteResult.error.line` exists to report which line of agent code caused the error, but the catch block never sets it. For `SyntaxError` and V8 runtime errors, the line number is available.

✏️ Proposed fix — extract line from error

```diff
 const err = error instanceof Error ? error : new Error(String(error));
+// SyntaxError and some runtime errors expose lineNumber or a parseable stack
+const lineNumber = (err as { lineNumber?: number }).lineNumber
+  ?? extractLineFromStack(err.stack);
 return {
   result: undefined,
   logs,
   error: {
     message: err.message,
+    ...(lineNumber !== undefined ? { line: lineNumber } : {}),
     ...(err.stack !== undefined ? { stack: err.stack } : {}),
   },
 };
```

Add a small helper alongside `safeStringify`:

```typescript
function extractLineFromStack(stack?: string): number | undefined {
  if (!stack) return undefined;
  // Match "agent-code.js:3:5" pattern emitted by vm
  const match = stack.match(/agent-code\.js:(\d+):/);
  return match ? parseInt(match[1], 10) : undefined;
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/sdk/sandbox-node.ts` around lines 74 - 83, The catch block that builds the ExecuteResult error object never sets ExecuteResult.error.line, so line numbers from SyntaxError or V8 errors are lost; add a helper like extractLineFromStack(stack?: string): number | undefined that parses the VM stack frame (e.g., matching "agent-code.js:(\\d+):") and call it when constructing the error object in the catch inside sandbox-node.ts to populate the error.line field (use the caught error's stack, alongside existing message and stack properties, similar to safeStringify's placement); ensure extractLineFromStack is colocated with safeStringify and used for both SyntaxError and runtime errors.
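The proposed helper can be exercised against a fabricated stack string to confirm the regex behaves as described (the frame below is illustrative — real vm output also includes function names and columns):

```typescript
function extractLineFromStack(stack?: string): number | undefined {
  if (!stack) return undefined;
  // Match the "agent-code.js:3:5" pattern emitted by vm
  const match = stack.match(/agent-code\.js:(\d+):/);
  return match ? parseInt(match[1], 10) : undefined;
}

// Fabricated sample for illustration
const sampleStack = [
  'ReferenceError: foo is not defined',
  '    at agent-code.js:3:5',
].join('\n');

console.log(extractLineFromStack(sampleStack)); // 3
console.log(extractLineFromStack(undefined)); // undefined
```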
tsconfig.worker.json-19-19 (1)
19-19: ⚠️ Potential issue | 🟡 Minor

Exclude `src/modules/**/*.ts` from the Worker build. The modules (gmail, calendar, etc.) contain Node-only APIs (`Buffer`, `process.env`, `fs.readFile`) that will fail in a Workers environment. Since `factory.ts` only imports `types.ts` from modules and the legacy operation-based tools are intentionally not registered for Workers, the entire `src/modules/**/*.ts` pattern should be removed from the `include` array. Only `src/modules/types.ts` is needed for type definitions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tsconfig.worker.json` at line 19, The Worker tsconfig includes Node-only module sources via the "src/modules/**/*.ts" glob which pulls in files using Buffer/process/fs and breaks the Workers build; remove that glob from the "include" array and instead include only the module type file ("src/modules/types.ts") so runtime-only module implementations are excluded; verify that factory.ts only imports the types file (not module implementations) and run the Worker build to confirm no Node APIs are present.

wrangler.toml-13-18 (1)
13-18: ⚠️ Potential issue | 🟡 Minor

Missing `GDRIVE_TOKEN_ENCRYPTION_KEY` in the Workers secrets documentation. The secrets comment lists only `GDRIVE_CLIENT_ID` and `GDRIVE_CLIENT_SECRET`, but the codebase requires `GDRIVE_TOKEN_ENCRYPTION_KEY` (a 32-byte base64 key) for token encryption. This should be documented here so deployers know to set it via `wrangler secret put GDRIVE_TOKEN_ENCRYPTION_KEY`. Based on learnings, `GDRIVE_TOKEN_ENCRYPTION_KEY` is a required environment variable.

Proposed fix

```diff
 # Workers Secrets (set via wrangler secret put):
 # GDRIVE_CLIENT_ID — OAuth client ID from gcp-oauth.keys.json
 # GDRIVE_CLIENT_SECRET — OAuth client secret from gcp-oauth.keys.json
+# GDRIVE_TOKEN_ENCRYPTION_KEY — 32-byte base64 key for token encryption
+# Generate with: openssl rand -base64 32
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wrangler.toml` around lines 13 - 18, Update the Workers secrets comment to include the required GDRIVE_TOKEN_ENCRYPTION_KEY secret: mention that the code expects a 32-byte base64 key named GDRIVE_TOKEN_ENCRYPTION_KEY and show how to set it via wrangler secret put (e.g., wrangler secret put GDRIVE_TOKEN_ENCRYPTION_KEY), alongside the existing GDRIVE_CLIENT_ID and GDRIVE_CLIENT_SECRET entries so deployers know this env var is mandatory for token encryption used by the code that reads GDRIVE_TOKEN_ENCRYPTION_KEY.

src/server/transports/stdio.ts-90-97 (1)
90-97: ⚠️ Potential issue | 🟡 Minor

`JSON.parse` and key-format validation lack structured error handling. If `gcp-oauth.keys.json` contains malformed JSON, `JSON.parse` throws a generic `SyntaxError` that will surface as an unhandled rejection with no user-friendly context. The rest of `runStdioServer` also has no top-level `try/catch`, so any failure in `authManager.initialize()`, `createCacheManager()`, or `server.connect()` similarly propagates without a clear diagnostic message — in contrast with the explicit `process.exit(1)` pattern used for the validation checks above.

Consider wrapping the initialization sequence in a `try/catch` with a contextual error log and `process.exit(1)` for consistency.

Proposed fix for JSON.parse

```diff
-  const keysContent = fs.readFileSync(oauthPath, 'utf-8');
-  const keys = JSON.parse(keysContent);
-  const oauthKeys = keys.web ?? keys.installed;
+  let oauthKeys: Record<string, unknown>;
+  try {
+    const keysContent = fs.readFileSync(oauthPath, 'utf-8');
+    const keys = JSON.parse(keysContent) as Record<string, unknown>;
+    oauthKeys = (keys.web ?? keys.installed) as Record<string, unknown>;
+  } catch (err) {
+    logger.error(`Failed to read or parse OAuth keys at: ${oauthPath}`, {
+      error: err instanceof Error ? err.message : err,
+    });
+    process.exit(1);
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/server/transports/stdio.ts` around lines 90 - 97, The startup code that reads and parses gcp-oauth.keys.json and then calls authManager.initialize(), createCacheManager(), and server.connect() needs a top-level try/catch: wrap the sequence that reads oauthPath, calls JSON.parse(keysContent) to build oauthKeys, then authManager.initialize(), createCacheManager(), and server.connect() inside a try block and on catch call logger.error with a clear contextual message including the caught error, then call process.exit(1); ensure you still preserve the existing check for oauthKeys (web/installed) but handle JSON.parse failures and any initialization errors consistently by logging the error and exiting.

worker.ts-74-80 (1)
74-80: ⚠️ Potential issue | 🟡 Minor

Auth failure response leaks internal error details. `detail: String(err)` may expose KV error messages, stack traces, or token endpoint responses to the caller. For a security-sensitive auth path, consider returning a generic message.

🛡️ Suggested fix

```diff
 logger.error('Auth failed', err);
-return new Response(JSON.stringify({ error: 'Authentication failed', detail: String(err) }), {
+return new Response(JSON.stringify({ error: 'Authentication failed' }), {
   status: 401,
   headers: { 'Content-Type': 'application/json' },
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@worker.ts` around lines 74 - 80, The catch block currently logs the full error via logger.error('Auth failed', err) but returns a Response that includes String(err), risking leakage of internal details; keep the internal detailed log but change the HTTP response created in the catch (the Response(...) call) to return a generic error message (e.g., "Authentication failed") without including err or stack traces in the JSON body or headers, and ensure logger.error still records the full err for internal debugging while the client sees only the generic message and appropriate 401 status.

worker.ts-56-62 (1)
56-62: ⚠️ Potential issue | 🟡 Minor

MCP Streamable HTTP spec also requires handling GET and DELETE on the endpoint. In stateless mode, GET (SSE stream) and DELETE (session termination) may not be meaningful, but MCP clients can still send them. Returning a plain text 200/404 for these methods could confuse MCP clients. Consider returning a proper `405 Method Not Allowed` with an `Allow: POST` header for non-POST requests on `/mcp`.

♻️ Suggestion

```diff
 const url = new URL(request.url);
-if (request.method !== 'POST' || (url.pathname !== '/' && url.pathname !== '/mcp')) {
-  return new Response('gdrive-mcp Worker v4.0.0-alpha\nPOST /mcp to connect.', {
-    status: url.pathname === '/' ? 200 : 404,
-  });
+if (url.pathname !== '/' && url.pathname !== '/mcp') {
+  return new Response('Not Found', { status: 404 });
+}
+if (request.method !== 'POST') {
+  return new Response('Method Not Allowed', {
+    status: 405,
+    headers: { Allow: 'POST' },
+  });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@worker.ts` around lines 56 - 62, The code currently returns plain text 200/404 for any non-POST request; change the branch that checks request.method and url.pathname so that when url.pathname === '/mcp' and the method is not POST you return a 405 Method Not Allowed response and include an Allow: POST header (keep the existing 200 response for '/' and 404 for other paths). Update the conditional around request.method and url.pathname in worker.ts (the block that reads request.method !== 'POST' || (url.pathname !== '/' && url.pathname !== '/mcp')) to explicitly detect non-POST on '/mcp' and return Response('', { status: 405, headers: { 'Allow': 'POST' } }) instead of the current plain text reply.

src/server/factory.ts-115-131 (1)
115-131: ⚠️ Potential issue | 🟡 Minor

Misleading error when service is invalid in the combined service+operation path. If both `service` and `operation` are provided but the service is wrong, the error says `Operation '${service}.${operation}' not found`, which implies the operation is the problem. Consider checking service validity first.

♻️ Suggested fix

```diff
 if (service && operation) {
   const svc = SDK_SPEC[service as keyof typeof SDK_SPEC];
+  if (!svc) {
+    return {
+      content: [
+        {
+          type: 'text' as const,
+          text: JSON.stringify({
+            error: `Unknown service '${service}'`,
+            available: Object.keys(SDK_SPEC),
+          }),
+        },
+      ],
+    };
+  }
-  const op = svc?.[operation];
+  const op = svc[operation];
   if (!op) {
     return {
       content: [
         {
           type: 'text' as const,
-          text: JSON.stringify({ error: `Operation '${service}.${operation}' not found` }),
+          text: JSON.stringify({
+            error: `Operation '${operation}' not found in '${service}'`,
+            available: Object.keys(svc),
+          }),
         },
       ],
     };
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/server/factory.ts` around lines 115 - 131, When both service and operation are provided the code currently looks up svc = SDK_SPEC[service] but only reports "Operation 'service.operation' not found" even when the service key is invalid; change the logic in the combined path to first check that svc (from SDK_SPEC) exists and return a clear error like "Service '<service>' not found" if missing, then look up op = svc[operation] and return the existing "Operation '<service>.<operation>' not found" error only when svc exists but op is missing; update the error JSON payloads produced by the block that uses SDK_SPEC, svc and op to reflect these two distinct checks.

index.ts-22-71 (1)
22-71: ⚠️ Potential issue | 🟡 Minor

Replace `console.error` in command handlers to resolve lint warnings. Line 24, Line 45, Line 54, Line 63, and Line 70 use `console.error`, which is currently flagged. Use `process.stderr.write` (or the shared logger) consistently.

Proposed patch

```diff
 case 'auth':
   authenticateAndSave().catch((err) => {
-    console.error('Auth flow failed:', err);
+    process.stderr.write(
+      `Auth flow failed: ${err instanceof Error ? err.message : String(err)}\n`
+    );
     process.exit(1);
   });
   break;
@@
 case 'rotate-key':
   import('./scripts/rotate-key.js')
     .then(({ rotateKey }) => rotateKey())
     .catch((err) => {
-      console.error('Key rotation failed:', err);
+      process.stderr.write(
+        `Key rotation failed: ${err instanceof Error ? err.message : String(err)}\n`
+      );
       process.exit(1);
     });
   break;
@@
 case 'migrate-tokens':
   import('./scripts/migrate-tokens.js')
     .then(({ migrateTokens }) => migrateTokens())
     .catch((err) => {
-      console.error('Migration failed:', err);
+      process.stderr.write(
+        `Migration failed: ${err instanceof Error ? err.message : String(err)}\n`
+      );
       process.exit(1);
     });
   break;
@@
 case 'verify-keys':
   import('./scripts/verify-keys.js')
     .then(({ verifyKeys }) => verifyKeys())
     .catch((err) => {
-      console.error('Verification failed:', err);
+      process.stderr.write(
+        `Verification failed: ${err instanceof Error ? err.message : String(err)}\n`
+      );
       process.exit(1);
     });
   break;
@@
 default:
   runStdioServer().catch((err) => {
-    console.error('Server startup failed:', err);
+    process.stderr.write(
+      `Server startup failed: ${err instanceof Error ? err.message : String(err)}\n`
+    );
     process.exit(1);
   });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@index.ts` around lines 22 - 71, Replace the direct console.error calls in the command handlers with a consistent stderr/logger usage: when calling authenticateAndSave, the rotate-key/migrate-tokens/verify-keys dynamic imports (rotateKey, migrateTokens, verifyKeys), and the default runStdioServer handler, use process.stderr.write (or the project's shared logger) instead of console.error and ensure the error message includes the error text (err instanceof Error ? err.message : String(err)) and a trailing newline before calling process.exit with the same exit codes; similarly change the health-case catch which already uses process.stderr.write to match formatting conventions if needed.
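The five near-identical catch bodies in the patch above could also be factored into a single helper — a hedged sketch; `formatFailure` and `fail` are hypothetical names, not part of the PR:

```typescript
// Pure formatter, kept separate so it is easy to test.
function formatFailure(context: string, err: unknown): string {
  const message = err instanceof Error ? err.message : String(err);
  return `${context}: ${message}\n`;
}

// Returns a catch handler that writes to stderr and exits non-zero.
function fail(context: string): (err: unknown) => never {
  return (err) => {
    process.stderr.write(formatFailure(context, err));
    process.exit(1);
  };
}

// Usage sketch: authenticateAndSave().catch(fail('Auth flow failed'));
console.log(formatFailure('Auth flow failed', new Error('boom'))); // Auth flow failed: boom
```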
🧹 Nitpick comments (16)
wrangler.toml (1)
6-11: Stale comment: KV namespace ID is already filled in. Line 8 says "Then replace TO_BE_FILLED with the returned id" but line 11 already contains a concrete ID. Update the comment to reflect that the namespace is provisioned, or remove the placeholder instruction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wrangler.toml` around lines 6 - 11, The comment above the [[kv_namespaces]] block is stale; update or remove it so it no longer instructs replacing TO_BE_FILLED when the id is already set. Specifically, edit the comment referencing the KV namespace for OAuth token storage/response caching (the [[kv_namespaces]] entry with binding "GDRIVE_KV" and id "a9f903f9b96e4e49917dac53fbeb4007") to either state the namespace is provisioned with the shown id or delete the placeholder instruction line.

.planning/codebase/CONVENTIONS.md (1)
1-95: Conventions document doesn't cover the new v4 SDK patterns.

The document describes conventions for the legacy `src/modules/*` operation pattern but doesn't mention the new `src/sdk/*` (rate limiter, executor, sandbox, runtime) or `src/server/*` (factory, bootstrap, transports) modules introduced in this PR. Consider adding convention notes for the new architecture — e.g., the `wrap()` pattern in the rate limiter, the sandbox execution model, and the transport adapter approach — so contributors working in the v4 surface have documented guidance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/codebase/CONVENTIONS.md around lines 1 - 95, The conventions doc omits guidance for the new v4 SDK and server modules; update CONVENTIONS.md to add brief rules for the SDK/runtime surface: describe the wrap() pattern used by the rate limiter (e.g., wrap(fn) middleware), naming/placement conventions for sdk/executor/sandbox/runtime modules and their exported types (Executor, Sandbox, Runtime), guidance for the transport adapter pattern used by server factories/transports (e.g., TransportAdapter, createTransport), and recommended error/logging and typing patterns for async execution and rate-limited calls so contributors to the v4 surface follow consistent patterns.

scripts/verify-keys.ts (2)
57-61: Fragile direct-invocation guard.

``import.meta.url === `file://${process.argv[1]}` `` can fail when `process.argv[1]` is a relative path or on Windows (backslash vs forward-slash mismatch). A more robust approach uses `url.fileURLToPath` or `node:path` to normalize both sides.

Proposed fix

+import { fileURLToPath } from 'node:url';
+
-if (import.meta.url === `file://${process.argv[1]}`) {
+if (process.argv[1] && fileURLToPath(import.meta.url) === path.resolve(process.argv[1])) {
   verifyKeys().catch((err) => {
     console.error('Fatal error:', err);
     process.exit(1);
   });
 }

Note: `path` will need to be imported from `node:path`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/verify-keys.ts` around lines 57 - 61, The direct-invocation guard using import.meta.url === `file://${process.argv[1]}` is brittle; update the check to normalize both sides before comparing: convert import.meta.url to a file path with url.fileURLToPath(import.meta.url) and resolve/process the CLI arg with path.resolve(process.argv[1]) (or alternatively convert the CLI path to a file URL with url.pathToFileURL and compare URLs). Modify the invocation guard around verifyKeys() to use these normalized comparisons and import the required helpers from node:url and node:path.
48-54: Redundant logger creation in catch block.

`logger` is already in scope and guaranteed to be initialized (it's the first statement in `try`; if `createLogger()` itself throws, execution goes to the outer `.catch` on line 58, not this catch block). Creating `logger2` is unnecessary and would produce a second logger instance with its own file handles.

Proposed fix

 } catch (error) {
-  const logger2 = createLogger();
-  logger2.error('❌ Verification failed', {
+  logger.error('❌ Verification failed', {
     error: error instanceof Error ? error.message : error,
   });
   process.exit(1);
Verify each finding against the current code and only fix it if needed. In `@scripts/verify-keys.ts` around lines 48 - 54, In the catch block remove the redundant createLogger() call and use the already-in-scope logger variable instead (replace logger2 with logger), so call logger.error('❌ Verification failed', { error: error instanceof Error ? error.message : error }); this avoids creating a second logger instance and unnecessary file handles while preserving the existing error payload; make sure the existing logger (created earlier via createLogger()) remains in scope for that catch block.

src/sdk/rate-limiter.ts (2)
25-32: `setTimeout` in `finally` creates uncleared timers that can leak.

Each completed call schedules a `setTimeout` that is never tracked or cleared. In long-running processes with high throughput, this means thousands of pending timers. If the rate limiter is ever discarded (e.g., during hot-reload or tests), those timers keep firing on a stale instance and prevent garbage collection.

Consider storing timer references and adding a `destroy()`/`dispose()` method, or at minimum use `unref()` so timers don't keep the process alive.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/sdk/rate-limiter.ts` around lines 25 - 32, The finally block currently schedules a setTimeout(() => this.release(service), this.windowMs) that is never tracked or cleared; modify the rate-limiter to store each timer handle (e.g., in a Map keyed by service or a Set) when scheduling release so you can clear timers later, call timer.unref() when creating them to avoid keeping the process alive, and add a public destroy()/dispose() method that iterates stored timers to clearTimeout and remove references; update the wrapper returned by the function that uses this.acquire and this.release to push the timer handle into the store so it can be managed and GC'ed properly (refer to acquire, release, windowMs, and add destroy/dispose).
1-5: Doc comment mislabels the algorithm as "token bucket."

The implementation is a concurrency limiter with a per-slot cooldown (slots are released after `windowMs` post-completion). A token bucket refills tokens at a steady rate regardless of consumption. Consider calling this a "sliding-window concurrency limiter" or "leaky bucket" to avoid confusion.
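For contrast, a minimal true token bucket might look like this (an illustrative sketch, not code from this PR; note the refill depends only on elapsed time, never on when earlier calls completed):

```typescript
// Illustrative token bucket: tokens refill continuously at `refillPerMs`,
// regardless of whether earlier calls have finished. Names are hypothetical.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerMs: number,
    now = Date.now()
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  // Returns true if a token was available at `now`, consuming it.
  tryAcquire(now = Date.now()): boolean {
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (now - this.last) * this.refillPerMs
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

The explicit `now` parameter is only there to make the refill math testable without real clocks.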
Verify each finding against the current code and only fix it if needed. In `@src/sdk/rate-limiter.ts` around lines 1 - 5, Update the top doc comment to stop calling the implementation a "token bucket" and instead describe it accurately as a "sliding-window concurrency limiter" (or "leaky bucket") that limits concurrent slots and releases each slot only after windowMs following completion; mention it prevents rapid chained execute() calls and that the default is 10 requests per second per service, and keep references to windowMs and execute() so readers understand the actual per-slot cooldown behavior.

.planning/codebase/CONCERNS.md (2)
7-11: Tech debt entries reference structures being replaced in this PR.

Lines 7–11 discuss dual `src/modules/*` vs `src/drive/` handler stacks and Lines 13–17 discuss the monolithic `index.ts`. Since this PR replaces the six legacy tools with a 2-tool SDK and refactors `index.ts` into a CLI dispatcher + `src/server/*`, these concerns are at least partially resolved. Consider updating them to reflect the v4 state, or explicitly label them as "pre-v4 baseline" so they aren't confused with active tech debt.
Verify each finding against the current code and only fix it if needed. In @.planning/codebase/CONCERNS.md around lines 7 - 11, The CONCERNS.md entry about dual implementation paths is outdated after this PR's migration to the 2-tool SDK and refactor of index.ts into a CLI dispatcher with src/server/*; update the paragraph referencing index.ts, src/modules/* and the legacy handlers (src/drive/, src/sheets/, src/forms/, src/docs/) to either (a) mark it explicitly as "pre-v4 baseline" or (b) rewrite it to describe the current state (retired handler stacks or the single source-of-truth abstraction used by the new SDK and CLI dispatcher); ensure the note clearly references the new artifacts (2-tool SDK, CLI dispatcher, src/server/*) so future readers know the concern was addressed by the migration.
96-98: Concern is already addressed by this PR.

This PR introduces an HTTP service boundary via Cloudflare Workers (`worker.ts`) and Streamable HTTP transport, yet this concern claims "Current architecture is stdio MCP only." If this doc is meant to reflect the post-v4 state, this entry should be removed or rewritten to describe remaining gaps (e.g., no inbound webhook handler for push notifications). If it documents the pre-v4 baseline, that should be made explicit.
Verify each finding against the current code and only fix it if needed. In @.planning/codebase/CONCERNS.md around lines 96 - 98, Update the CONCERNS.md entry about "No HTTP service boundary for webhook-driven workflows" to reflect the PR's changes: either remove the bullet entirely or rewrite it to note that an HTTP boundary has been added via Cloudflare Workers (worker.ts) and Streamable HTTP transport, and, if retaining a concern, specify the precise remaining gap (for example, "no inbound webhook handler for push notifications"). Also clarify whether the document is describing the pre-v4 baseline or the post-v4 state so readers know which version the concerns apply to.

src/server/transports/stdio.ts (1)
122-124: Heartbeat interval leaks on shutdown.

The `setInterval` return value is discarded, so there's no way to clear it during graceful shutdown. While this is tolerable for a long-lived stdio process, it can cause issues in tests or if a shutdown handler is added later.

Proposed fix

-  setInterval(() => {
+  const heartbeat = setInterval(() => {
     logger.info('Server heartbeat', { uptime: process.uptime() });
   }, 30000);
+  heartbeat.unref(); // don't keep process alive just for heartbeat
Verify each finding against the current code and only fix it if needed. In `@src/server/transports/stdio.ts` around lines 122 - 124, The heartbeat setInterval call discards its timer id so it can't be cleared on shutdown; store the return value (e.g., const heartbeatInterval = setInterval(...)) near where logger.info('Server heartbeat', ...) is called and ensure you clear it during graceful shutdown by calling clearInterval(heartbeatInterval) from the existing shutdown/cleanup path (or add a process.on('exit'/'SIGINT' handler) that calls clearInterval), so the interval doesn't leak in tests or when shutting down.

.planning/codebase/TESTING.md (1)
34-38: Add a language specifier to the fenced code block.

The directory structure block has no language identifier, which triggers a markdownlint MD040 warning. Use `` ```text `` or `` ```plaintext `` for non-code structure blocks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/codebase/TESTING.md around lines 34 - 38, Update the fenced code block in TESTING.md that lists directory paths (the block containing lines like src/__tests__/<domain>/*.test.ts, src/modules/<service>/__tests__/*.test.ts, tests/e2e/*.test.ts) to include a language specifier for plain text (e.g., use ```text or ```plaintext) so markdownlint MD040 is satisfied; ensure only the opening fence is changed to include the specifier and keep the block contents unchanged.

scripts/rotate-key.ts (2)
64-65: Unnecessary logger re-creation in catch block.

`logger` from Line 12 is still in scope inside the `catch` block. Creating `logger2` is redundant.

♻️ Proposed fix

 } catch (error) {
-  const logger2 = createLogger();
-  logger2.error('❌ Key rotation failed', {
+  logger.error('❌ Key rotation failed', {
     error: error instanceof Error ? error.message : error,
   });
Verify each finding against the current code and only fix it if needed. In `@scripts/rotate-key.ts` around lines 64 - 65, The catch block unnecessarily recreates a logger instance (logger2) even though the original logger variable (logger) created by createLogger() is still in scope; remove the logger2 creation and use the existing logger inside the catch handler (replace any logger2 references with logger) to avoid redundant instantiation and keep logging consistent.
29-32: Redundant conditional — both branches produce the same string.

When `newVersionNum === 2`, the result is `'GDRIVE_TOKEN_ENCRYPTION_KEY_V2'`, which is identical to the template literal `` `GDRIVE_TOKEN_ENCRYPTION_KEY_V${newVersionNum}` `` when `newVersionNum` is `2`. The special case is a no-op.

♻️ Simplify

-  const newKeyEnv =
-    newVersionNum === 2
-      ? 'GDRIVE_TOKEN_ENCRYPTION_KEY_V2'
-      : `GDRIVE_TOKEN_ENCRYPTION_KEY_V${newVersionNum}`;
+  const newKeyEnv = `GDRIVE_TOKEN_ENCRYPTION_KEY_V${newVersionNum}`;
Verify each finding against the current code and only fix it if needed. In `@scripts/rotate-key.ts` around lines 29 - 32, The conditional assigning newKeyEnv is redundant because both branches produce the same string; replace the ternary in rotate-key.ts with a single template literal assignment using newVersionNum (e.g., set newKeyEnv = `GDRIVE_TOKEN_ENCRYPTION_KEY_V${newVersionNum}`) and remove the special-case branch so only the template is used; ensure references to newKeyEnv and newVersionNum remain unchanged.

src/server/factory.ts (1)
172-175: Fresh Google API clients and SDK runtime created on every `execute()` call.

`buildContext()` re-instantiates `google.drive(...)`, `google.sheets(...)`, etc., on every invocation. These are lightweight factory calls, but since the `auth` dependency doesn't change between calls, the context (and thus the Google clients) could be built once per server lifetime and reused. This would also be the natural place to hoist the `RateLimiter` (see the comment on `src/sdk/runtime.ts`).
Verify each finding against the current code and only fix it if needed. In `@src/server/factory.ts` around lines 172 - 175, The context and SDK runtime are being rebuilt on every call because buildContext() and createSDKRuntime(context) are invoked inside the execute flow; instead, instantiate the context (which creates google.drive(), google.sheets(), etc.) and the SDK runtime once at server startup and reuse them for subsequent sandbox.execute calls. Move the buildContext() and createSDKRuntime(context) calls out of the per-request path (so sandbox.execute(code, { sdk }) receives the reused sdk) and also hoist the RateLimiter into the SDK runtime creation so it is shared across executions rather than re-created on each execute().

src/sdk/runtime.ts (1)
16-222: Consider a helper to reduce the 47× boilerplate.

Every operation follows the identical pattern: `limiter.wrap(service, async (opts) => { const { fn } = await import(module); return fn(opts as ..., context); })`. A small factory helper would eliminate ~200 lines of near-identical code and reduce the risk of copy-paste errors when adding new operations.

♻️ Example helper sketch

function op<T extends (...args: any[]) => any>(
  service: string,
  modulePath: string,
  exportName: string,
) {
  return limiter.wrap(service, async (opts: unknown) => {
    const mod = await import(modulePath);
    return mod[exportName](opts as Parameters<T>[0], context);
  });
}

// Usage:
drive: {
  search: op<typeof import('../modules/drive/index.js')['search']>('drive', '../modules/drive/index.js', 'search'),
  // ...
}

The exact shape depends on your module export signatures, but the key insight is that the wrapping logic is identical for all 47 operations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/sdk/runtime.ts` around lines 16 - 222, The file repeats the same limiter.wrap boilerplate for every method (e.g., drive.search, sheets.readSheet, gmail.sendMessage, calendar.createEvent) — introduce a small factory helper (e.g., op or makeOp) that takes service, modulePath (like '../modules/drive/index.js') and exportName (like 'search') and returns limiter.wrap(service, async (opts: unknown) => { const mod = await import(modulePath); return mod[exportName](opts as Parameters<typeof mod[exportName]>[0], context); }); Replace each repeated block (instances of limiter.wrap(...) that import a module then call a named export such as search, createFile, listSheets, readForm, createDocument, listMessages, getEvent, etc.) with a call to that helper to remove the duplication and keep the same typing and context handling.

src/sdk/types.ts (1)
32-91: `SDKRuntime` is effectively untyped due to pervasive `unknown`.

The interface advertises typed methods, but `unknown -> Promise<unknown>` removes compile-time validation for all operations.

Typed-signature pattern (apply across services)

+import type * as DriveOps from '../modules/drive/index.js';
 @@
 export interface SDKRuntime {
   drive: {
-    search(options: unknown): Promise<unknown>;
-    enhancedSearch(options: unknown): Promise<unknown>;
+    search(
+      options: Parameters<typeof DriveOps.search>[0]
+    ): ReturnType<typeof DriveOps.search>;
+    enhancedSearch(
+      options: Parameters<typeof DriveOps.enhancedSearch>[0]
+    ): ReturnType<typeof DriveOps.enhancedSearch>;
     // ...same pattern for remaining methods
   };

As per coding guidelines: "`**/*.ts`: Review the TypeScript code for: Type safety and correctness."

🤖 Prompt for AI Agents
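A self-contained illustration of the `Parameters`/`ReturnType` derivation (the `search` implementation below is a stand-in, not the real module export):

```typescript
// `search` stands in for an operation exported by a module such as
// src/modules/drive/index.js (hypothetical shape, for illustration only).
async function search(options: { query: string }): Promise<string[]> {
  return [options.query];
}

// The interface derives its signatures from the implementation, so the two
// cannot drift apart without a compile error.
interface DriveRuntime {
  search(options: Parameters<typeof search>[0]): ReturnType<typeof search>;
}

const drive: DriveRuntime = { search };
```

Passing a wrong option shape (say, `{ q: 'x' }`) then fails at compile time instead of at runtime inside the sandbox.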
Verify each finding against the current code and only fix it if needed. In `@src/sdk/types.ts` around lines 32 - 91, The SDKRuntime interface is currently untyped because every method uses unknown; replace these with concrete option/response types or generics so callers get compile-time safety—define per-service types (e.g., DriveSearchOptions, DriveSearchResult, SheetsUpdateCellsOptions, FormsCreateFormOptions, GmailMessage, CalendarEvent, etc.) or make methods generic (e.g., search<TOptions, TResult>(options: TOptions): Promise<TResult>) and update each method signature in SDKRuntime (drive.*, sheets.*, forms.*, docs.*, gmail.*, calendar.*) to use those types/generics so the interface provides meaningful type-checking.

src/auth/workers-auth.ts (1)
96-101: Add a single-flight guard to prevent refresh storms.

When many requests arrive near expiry, each request can call the token endpoint concurrently. A shared in-flight promise avoids bursts.

Proposed patch

 const TOKEN_ENDPOINT = 'https://oauth2.googleapis.com/token';
 const KV_TOKEN_KEY = 'gdrive:oauth:tokens';
+let refreshInFlight: Promise<{ accessToken: string; expiryDate: number }> | null = null;
 @@
-  const { accessToken, expiryDate } = await refreshAccessToken(
-    tokens.refresh_token,
-    clientId,
-    clientSecret
-  );
+  if (!refreshInFlight) {
+    refreshInFlight = refreshAccessToken(tokens.refresh_token, clientId, clientSecret).finally(() => {
+      refreshInFlight = null;
+    });
+  }
+  const { accessToken, expiryDate } = await refreshInFlight;
Verify each finding against the current code and only fix it if needed. In `@src/auth/workers-auth.ts` around lines 96 - 101, Multiple concurrent requests can trigger many simultaneous refreshAccessToken calls for the same refresh token; add a single-flight guard by introducing a module-level Map (e.g., inFlightRefreshes) keyed by tokens.refresh_token (or clientId+refresh_token) that stores the in-progress Promise, modify the code path that calls refreshAccessToken (the block that currently awaits refreshAccessToken(tokens.refresh_token, clientId, clientSecret)) to first check the map and return the existing Promise if present, otherwise set the Promise into the map, await it, and finally delete the key in both resolve and catch paths so subsequent refreshes can proceed; ensure you reference the same symbols refreshAccessToken and tokens.refresh_token when implementing the guard.
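The shape of that guard can be sketched in isolation (assuming a single token; `refresh` stands in for the real token-endpoint call and `refreshOnce` is an illustrative name):

```typescript
// Hypothetical single-flight guard: concurrent callers share one in-flight
// refresh promise instead of each hitting the token endpoint.
let refreshCalls = 0;

async function refresh(): Promise<string> {
  refreshCalls += 1; // stands in for a fetch to the OAuth token endpoint
  return 'fresh-token';
}

let inFlight: Promise<string> | null = null;

function refreshOnce(): Promise<string> {
  if (!inFlight) {
    // .finally clears the slot so a later expiry can trigger a new refresh.
    inFlight = refresh().finally(() => {
      inFlight = null;
    });
  }
  return inFlight;
}
```

Because the slot is set synchronously before any await, callers that arrive in the same tick all attach to the first promise.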
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
package-lock.jsonis excluded by!**/package-lock.json
📒 Files selected for processing (33)
.agents/skills/writing-plans/SKILL.md.env.example.planning/codebase/ARCHITECTURE.md.planning/codebase/CONCERNS.md.planning/codebase/CONVENTIONS.md.planning/codebase/INTEGRATIONS.md.planning/codebase/STACK.md.planning/codebase/STRUCTURE.md.planning/codebase/TESTING.mddocs/brainstorms/2026-02-25-code-mode-streamable-http-brainstorm.mddocs/planning/streamable-http-cloudflare-migration.mddocs/plans/2026-02-25-feat-code-mode-cloudflare-workers-migration-plan.mddocs/plans/MIGRATION_PROGRESS.mdindex.tsjustfilepackage.jsonscripts/rotate-key.tsscripts/verify-keys.tsskills-lock.jsonsrc/auth/workers-auth.tssrc/sdk/executor.tssrc/sdk/rate-limiter.tssrc/sdk/runtime.tssrc/sdk/sandbox-node.tssrc/sdk/spec.tssrc/sdk/types.tssrc/server/bootstrap.tssrc/server/factory.tssrc/server/transports/stdio.tssrc/storage/kv-store.tstsconfig.worker.jsonworker.tswrangler.toml
# GDrive MCP Server - Command Launchpad
set dotenv-load := true

cc := "claude --dangerously-skip-permissions"
--dangerously-skip-permissions is hardcoded into every Claude invocation.
Baking this flag into the shared cc variable means any developer (or CI job) running just start, just start-ctx, or just start-plan silently bypasses Claude's permission system with no opt-out. Consider externalizing it as an optional variable so contributors can choose their own safety posture:
⚠️ Proposed safer default
-cc := "claude --dangerously-skip-permissions"
+skip_perms := env_var_or_default("CLAUDE_SKIP_PERMISSIONS", "")
+cc := "claude" + if skip_perms != "" { " --dangerously-skip-permissions" } else { "" }

At minimum, add a comment documenting what the flag disables and why it's intentional.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@justfile` at line 4, The shared variable cc currently hardcodes the dangerous
flag (--dangerously-skip-permissions) which causes every invocation of cc (used
by just start / just start-ctx / just start-plan) to bypass Claude's permission
system; change the cc definition so the flag is not included by default and make
it configurable (for example, append an optional variable like CC_FLAGS or
CC_ALLOW_DANGEROUS that can be set via environment or justfile variable to
include --dangerously-skip-permissions when explicitly requested), and add a
clear comment next to the cc declaration explaining what the flag disables and
why keeping it out of the default is intentional.
  const raw = await kv.get(KV_TOKEN_KEY);
  if (!raw) {
    throw new Error(
      'No OAuth tokens found in KV. Run the auth flow (node ./dist/index.js auth) and upload tokens to KV.'
    );
  }

  const tokens = JSON.parse(raw) as WorkersTokenData;
  const needsRefresh = Date.now() >= tokens.expiry_date - preemptiveMs;

  if (!needsRefresh) {
    return tokens.access_token;
  }

  // Refresh and persist back to KV
  const { accessToken, expiryDate } = await refreshAccessToken(
    tokens.refresh_token,
    clientId,
    clientSecret
  );

  const updated: WorkersTokenData = {
    ...tokens,
    access_token: accessToken,
    expiry_date: expiryDate,
  };

  await kv.put(KV_TOKEN_KEY, JSON.stringify(updated), {
    // Keep KV entry alive for 30 days; the token itself will be refreshed before expiry
    expirationTtl: 30 * 24 * 60 * 60,
  });
Persist OAuth tokens encrypted in KV, not raw JSON.
Line 89 parses raw KV JSON directly and Line 109 writes updated token JSON directly. That stores access/refresh tokens in plaintext.
Proposed patch
+import { decryptToken, encryptToken } from '../storage/kv-store.js';
@@
export async function getValidAccessToken(
kv: KVNamespace,
clientId: string,
clientSecret: string,
+ encryptionKeyBase64: string,
preemptiveMs = 5 * 60 * 1000
): Promise<string> {
- const raw = await kv.get(KV_TOKEN_KEY);
- if (!raw) {
+ const encrypted = await kv.get(KV_TOKEN_KEY);
+ if (!encrypted) {
throw new Error(
'No OAuth tokens found in KV. Run the auth flow (node ./dist/index.js auth) and upload tokens to KV.'
);
}
- const tokens = JSON.parse(raw) as WorkersTokenData;
+ const decrypted = await decryptToken(encrypted, encryptionKeyBase64);
+ const tokens = JSON.parse(decrypted) as WorkersTokenData;
@@
- await kv.put(KV_TOKEN_KEY, JSON.stringify(updated), {
+ const encryptedUpdated = await encryptToken(JSON.stringify(updated), encryptionKeyBase64);
+ await kv.put(KV_TOKEN_KEY, encryptedUpdated, {
// Keep KV entry alive for 30 days; the token itself will be refreshed before expiry
expirationTtl: 30 * 24 * 60 * 60,
   });

Based on learnings: "Require GDRIVE_TOKEN_ENCRYPTION_KEY environment variable (32-byte base64 key). Generate with: openssl rand -base64 32."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/auth/workers-auth.ts` around lines 82 - 113, The code reads/writes
plaintext token JSON to KV (KV_TOKEN_KEY) — require a
GDRIVE_TOKEN_ENCRYPTION_KEY env var (32-byte base64) and use it to AES-GCM
encrypt token payloads before kv.put and decrypt after kv.get; decode/import the
base64 key, validate length, decrypt the retrieved value and then JSON.parse
into WorkersTokenData (where raw is currently parsed), and when writing the
updated object (the updated variable) serialize then encrypt and store a
structured blob (iv + ciphertext, base64) instead of raw JSON; keep
refreshAccessToken usage unchanged but ensure any thrown crypto errors surface
with clear messages about missing/invalid GDRIVE_TOKEN_ENCRYPTION_KEY.
  private release(service: string): void {
    const count = this.counters.get(service) ?? 0;
    this.counters.set(service, Math.max(0, count - 1));
    const queue = this.queues.get(service) ?? [];
    const next = queue.shift();
    if (next) {
      this.queues.set(service, queue);
      next();
    }
  }
Bug: queued callers bypass the counter, allowing more than maxRequests concurrent calls.
When release fires it decrements the counter and dequeues the next waiter, but the dequeued caller never increments the counter. This causes under-counting.
Trace with maxRequests = 2:
- A, B acquire → counter = 2. C queues.
- A's release fires → counter = 2→1, C dequeued and starts running.
- D arrives → counter 1 < 2, proceeds → counter = 2.
- Now B, C, D are all in-flight — 3 concurrent, exceeding the limit.
The fix: keep the counter stable when handing a slot to a queued waiter (don't decrement if you're immediately dispatching).
Proposed fix
private release(service: string): void {
- const count = this.counters.get(service) ?? 0;
- this.counters.set(service, Math.max(0, count - 1));
const queue = this.queues.get(service) ?? [];
const next = queue.shift();
if (next) {
this.queues.set(service, queue);
- next();
+ next(); // hand the slot to the queued caller (counter stays the same)
+ } else {
+ const count = this.counters.get(service) ?? 0;
+ this.counters.set(service, Math.max(0, count - 1));
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/sdk/rate-limiter.ts` around lines 48 - 57, The release method currently
decrements the counter even when immediately dispatching a queued waiter,
causing under-counting; change release (the release(service: string) method that
uses this.counters and this.queues) so that if there is a queued callback (next)
you do NOT decrement the counter—just hand the slot to the queued waiter by
calling next() and leave this.counters.get(service) unchanged; only decrement
(or set Math.max(0, count-1)) when there is no queued waiter to consume the
freed slot. Ensure the logic updates this.queues and this.counters consistently
so concurrent in-flight count never exceeds maxRequests.
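The under-counting can be reproduced with a state-only model of the trace above (no timers or promises; `acquireOrQueue` and `buggyRelease` are illustrative names that mirror the flawed logic, not the real class):

```typescript
// State-only model of the buggy release(): the dequeued waiter starts running
// but the counter was already decremented, so a newcomer slips in.
const maxRequests = 2;
let counter = 0;
const queue: string[] = [];
const inFlight = new Set<string>();

function acquireOrQueue(caller: string): void {
  if (counter < maxRequests) {
    counter += 1;
    inFlight.add(caller);
  } else {
    queue.push(caller);
  }
}

function buggyRelease(finished: string): void {
  inFlight.delete(finished);
  counter = Math.max(0, counter - 1); // decrements even when dispatching a waiter
  const next = queue.shift();
  if (next) {
    inFlight.add(next); // the waiter runs without re-incrementing the counter
  }
}

// Trace from the review: A, B acquire; C queues; A releases; D arrives.
acquireOrQueue('A');
acquireOrQueue('B');
acquireOrQueue('C');
buggyRelease('A');
acquireOrQueue('D');
```

With the proposed fix (skip the decrement when a waiter is dispatched), the same trace leaves exactly two callers in flight.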
src/sdk/runtime.ts
Outdated
export function createSDKRuntime(context: FullContext): SDKRuntime {
  const limiter = new RateLimiter();
Rate limiter is scoped per execute() call, not per server lifetime.
createSDKRuntime is called on every execute() invocation (see src/server/factory.ts Lines 172-173), which means a fresh RateLimiter is created each time. Rate limiting within a single execute() call works, but across multiple execute() calls (the main quota-exhaustion vector) it provides no protection. Consider hoisting the RateLimiter to the server or factory scope so it persists across calls.
♻️ Suggested approach
-export function createSDKRuntime(context: FullContext): SDKRuntime {
- const limiter = new RateLimiter();
+export function createSDKRuntime(context: FullContext, limiter: RateLimiter): SDKRuntime {Then in src/server/factory.ts, create the limiter once alongside the sandbox:
const sandbox = new NodeSandbox();
+ const limiter = new RateLimiter();
...
- const sdk = createSDKRuntime(context);
+ const sdk = createSDKRuntime(context, limiter);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/sdk/runtime.ts` around lines 13 - 14, The RateLimiter is being
instantiated inside createSDKRuntime which is invoked per execute(), so hoist
the limiter to a longer-lived scope: instantiate a single RateLimiter in the
server/factory initialization where the sandbox is created (the same place that
constructs the per-server environment) and pass that limiter into
createSDKRuntime (or into the SDKRuntime constructor) instead of creating a new
one there; update createSDKRuntime and any callers of execute() to accept/use
the shared RateLimiter instance (referencing createSDKRuntime, RateLimiter, and
execute in your changes).
    // Wrap user code in async IIFE to support top-level await and return values
    const wrappedCode = `(async () => { ${code} })()`;

    try {
      const script = new Script(wrappedCode, {
        filename: 'agent-code.js',
      });

      const result = await script.runInContext(sandboxContext, {
        timeout: this.timeout,
      });

      return { result, logs };
Critical: runInContext timeout does not cover async execution — the sandbox can hang indefinitely.
When the script is an async IIFE, script.runInContext(ctx, { timeout }) synchronously creates and returns a Promise in microseconds — the timeout governs only that synchronous slice. The real work happens after await result (line 69), which has no time bound. Any while(true) { await Promise.resolve(); } or long-running awaited SDK call will hang the host process forever.
Fix: race the returned Promise against an explicit host-side timer (the sandbox's own setTimeout is undefined, so use the real one from the outer Node.js scope):
🔒 Proposed fix — enforce wall-clock timeout on async result
- const result = await script.runInContext(sandboxContext, {
- timeout: this.timeout,
- });
-
- return { result, logs };
+ // timeout option only covers synchronous script start-up, not async continuations
+ const sandboxPromise = script.runInContext(sandboxContext, {
+ timeout: this.timeout,
+ }) as Promise<unknown>;
+
+ // Enforce a wall-clock deadline on the full async execution
+ const timeoutPromise = new Promise<never>((_resolve, reject) => {
+ // Use the real Node.js setTimeout, NOT the blocked sandbox one
+ const id = setTimeout(
+ () => reject(new Error(`Execution timed out after ${this.timeout}ms`)),
+ this.timeout,
+ );
+ // Allow the Node process to exit even if this timer is still pending
+ if (typeof id === 'object' && id.unref) id.unref();
+ });
+
+ const result = await Promise.race([sandboxPromise, timeoutPromise]);
+  return { result, logs };

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/sdk/sandbox-node.ts` around lines 61 - 73, The async IIFE returned by
Script.runInContext can hang because runInContext only times the synchronous
part; update the code around Script(wrappedCode...) / script.runInContext(...)
to race the returned Promise (the variable currently assigned to result) against
a host-side timer using the real Node setTimeout (outer scope) since the
sandbox's setTimeout is undefined; implement Promise.race between
script.runInContext(...) and a Promise that rejects after this.timeout ms with a
clear timeout error, ensure the timer is cleared on resolution/rejection, and
return/log that timeout error instead of allowing an indefinite hang (refer to
Script, runInContext, sandboxContext, wrappedCode, timeout, result, logs).
🧩 Analysis chain
🌐 Web query:
Node.js vm.Script runInContext timeout option async promise behavior
💡 Result:
- `timeout` in `script.runInContext()` (and `vm.runInContext()` / `runInNewContext()`) only limits synchronous execution by default; code can "escape" the timeout by scheduling work via Promise/async (microtasks) that runs after the VM call returns. (nodejs.org)
- To make Promise/async microtasks run before returning, and therefore be covered by `timeout`/`breakOnSigint`, create/use the context with `microtaskMode: 'afterEvaluate'` (or pass it where applicable). (nodejs.org)
- Even with `microtaskMode: 'afterEvaluate'`, callbacks scheduled through APIs like `setTimeout`, `setImmediate`, `process.nextTick`, `queueMicrotask`, etc. (if you expose them in the sandbox) go onto global/shared queues, so they're not controlled by the VM `timeout`. (nodejs.org)
- With `microtaskMode: 'afterEvaluate'`, each VM context has its own microtask queue, which can make "cross-context" Promise usage surprising: a Promise created inside the VM and awaited outside may not progress unless the VM context gets another turn (e.g., another `runInContext()` / module `evaluate()` call) to drain its microtask queue. (nodejs.org)
Citations:
- 1: https://nodejs.org/api/vm.html?utm_source=openai
- 2: https://nodejs.org/api/vm.html?utm_source=openai
- 3: https://nodejs.org/api/vm.html?utm_source=openai
- 4: https://nodejs.org/api/vm.html?utm_source=openai
🌐 Web query:
Node.js vm module timeout execution async IIFE documentation
💡 Result:
In Node.js node:vm, the timeout option on vm.runInNewContext(), vm.runInContext(), and script.runIn*() limits only the synchronous JavaScript execution of the evaluated script by default. Code that continues via Promises / async functions can “escape” the timeout because their continuations run later, after the main evaluation stack finishes. (nodejs.org)
To include Promise/async microtasks in the same timeout window, create/run the context with:
microtaskMode: 'afterEvaluate'
This makes microtasks scheduled by Promises/async run immediately after evaluation and be subject to timeout. (nodejs.org)
Caveats from the docs (relevant to an async IIFE like (async () => { ... })()):
- If you pass in functions/promises created in the outer (main) context, their callbacks may be queued on the global microtask queue and can still escape the vm timeout. (nodejs.org)
- If you expose scheduling APIs like `setTimeout`, `setImmediate`, `process.nextTick`, `queueMicrotask`, etc. inside the vm context, callbacks go to shared/global queues and are not controllable by the vm `timeout`. (nodejs.org)
Where this is documented: Node.js VM docs → “Timeout interactions with asynchronous tasks and Promises”. (nodejs.org)
Citations:
- 1: https://nodejs.org/api/vm.html?utm_source=openai
- 2: https://nodejs.org/api/vm.html?utm_source=openai
- 3: https://nodejs.org/api/vm.html?utm_source=openai
- 4: https://nodejs.org/api/vm.html?utm_source=openai
- 5: https://nodejs.org/api/vm.html?utm_source=openai
🏁 Script executed:
# Let me also check the Node.js documentation and source for timeout behavior
node --version

Repository: AojdevStudio/gdrive
Length of output: 70
🏁 Script executed:
head -n 85 src/sdk/sandbox-node.ts | tail -n 30

Repository: AojdevStudio/gdrive
Length of output: 840
🏁 Script executed:
# Check the blocking list for setTimeout
sed -n '40,60p' src/sdk/sandbox-node.ts

Repository: AojdevStudio/gdrive
Length of output: 572
runInContext timeout does not apply to async execution — infinite loops can hang indefinitely.
The timeout option on script.runInContext only guards the synchronous portion of the script. Because the user code is wrapped in an async IIFE, the script synchronously creates a Promise and returns it in microseconds — well within any timeout. The await result on line 69 then waits for that Promise with no time bound. Malicious or buggy async code (e.g., while(true) { await Promise.resolve(); }) will hang the worker process forever.
To enforce an actual wall-clock deadline, race the awaited Promise against an explicit timer using `Promise.race`:
Proposed fix — enforce timeout on the async result
- const result = await script.runInContext(sandboxContext, {
- timeout: this.timeout,
- });
-
- return { result, logs };
+ const rawResult = script.runInContext(sandboxContext, {
+ timeout: this.timeout, // guards synchronous start-up only
+ });
+
+ const timeoutPromise = new Promise<never>((_, reject) =>
+ // Use the host's real setTimeout (not the blocked sandbox one)
+ // eslint-disable-next-line no-restricted-globals
+ setTimeout(() => reject(new Error(`Execution timed out after ${this.timeout}ms`)), this.timeout)
+ );
+
+ const result = await Promise.race([rawResult, timeoutPromise]);
+ return { result, logs };Note: setTimeout used here is from the host Node.js scope (outside the sandbox), not the blocked one injected into the context, so it works correctly.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Wrap user code in async IIFE to support top-level await and return values
const wrappedCode = `(async () => { ${code} })()`;

try {
  const script = new Script(wrappedCode, {
    filename: 'agent-code.js',
  });

  const rawResult = script.runInContext(sandboxContext, {
    timeout: this.timeout, // guards synchronous start-up only
  });

  const timeoutPromise = new Promise<never>((_, reject) =>
    // Use the host's real setTimeout (not the blocked sandbox one)
    // eslint-disable-next-line no-restricted-globals
    setTimeout(() => reject(new Error(`Execution timed out after ${this.timeout}ms`)), this.timeout)
  );

  const result = await Promise.race([rawResult, timeoutPromise]);
  return { result, logs };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/sdk/sandbox-node.ts` around lines 61 - 73, The async IIFE returned by
Script.runInContext (wrappedCode) yields a Promise that runInContext's timeout
does not enforce, so awaiting result can hang; modify the code after calling
script.runInContext(sandboxContext, { timeout: this.timeout }) to race the
returned Promise against a host-side timeout Promise (using the host setTimeout)
via Promise.race, clear the timer on resolution, and reject/throw or return a
clear timeout error when the timer elapses; update the flow around variables
result and logs to handle the timed-out case (same return shape) and keep
references to Script, sandboxContext, and this.timeout for locating the change.
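The fixes above share one pattern. A self-contained sketch (not the repo's sandbox; context setup and the helper name are illustrative) showing that `runInContext`'s `timeout` misses async work while a host-side `Promise.race` bounds it:

```typescript
import { createContext, Script } from 'node:vm';

async function runWithDeadline(code: string, timeoutMs: number): Promise<unknown> {
  const context = createContext({});
  const script = new Script(`(async () => { ${code} })()`);
  // `timeout` here only bounds the synchronous slice of execution.
  const pending = script.runInContext(context, { timeout: timeoutMs }) as Promise<unknown>;
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_resolve, reject) => {
    // Host-side setTimeout: the sandbox context has no timers at all.
    timer = setTimeout(() => reject(new Error(`timed out after ${timeoutMs}ms`)), timeoutMs);
  });
  try {
    return await Promise.race([pending, deadline]);
  } finally {
    clearTimeout(timer);
  }
}
```

Note that a tight microtask loop can still starve the host event loop; the race reliably covers long-running awaited calls and never-settling promises.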
async function importKey(rawKey: string): Promise<CryptoKey> {
  // Key is expected as a base64-encoded 32-byte value (matches GDRIVE_TOKEN_ENCRYPTION_KEY format)
  const keyBytes = Uint8Array.from(atob(rawKey), (c) => c.charCodeAt(0));
  return crypto.subtle.importKey('raw', keyBytes, ALGORITHM, false, ['encrypt', 'decrypt']);
}
Validate key length before crypto.subtle.importKey.
The current path accepts any base64-decoded key length that WebCrypto allows. Enforce exactly 32 bytes to match expected token-key strength.
Proposed patch
async function importKey(rawKey: string): Promise<CryptoKey> {
// Key is expected as a base64-encoded 32-byte value (matches GDRIVE_TOKEN_ENCRYPTION_KEY format)
- const keyBytes = Uint8Array.from(atob(rawKey), (c) => c.charCodeAt(0));
+ const keyBytes = Uint8Array.from(atob(rawKey.trim()), (c) => c.charCodeAt(0));
+ if (keyBytes.byteLength !== 32) {
+ throw new Error(
+ `Invalid encryption key length: expected 32 bytes after base64 decode, got ${keyBytes.byteLength}`
+ );
+ }
return crypto.subtle.importKey('raw', keyBytes, ALGORITHM, false, ['encrypt', 'decrypt']);
 }

Based on learnings: "Require GDRIVE_TOKEN_ENCRYPTION_KEY environment variable (32-byte base64 key). Generate with: openssl rand -base64 32."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/storage/kv-store.ts` around lines 18 - 22, The importKey function
currently accepts any base64-decoded length; before calling
crypto.subtle.importKey you must validate that the decoded key (rawKey ->
keyBytes in importKey) is exactly 32 bytes and throw a clear error if not (e.g.,
"invalid encryption key length: expected 32 bytes"); locate the importKey
function and perform the length check on keyBytes.length, keeping the same
ALGORITHM and usages (encrypt, decrypt) if valid.
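The key can also be generated and validated in JS, which keeps the validation rule (base64 of exactly 32 random bytes) next to the code that enforces it. A sketch; function names are illustrative and use Node's `webcrypto` (in Workers the global `crypto` behaves the same):

```typescript
import { webcrypto } from 'node:crypto';

// Alternative to `openssl rand -base64 32`: 32 random bytes, base64-encoded.
function generateTokenKey(): string {
  const bytes = webcrypto.getRandomValues(new Uint8Array(32));
  let binary = '';
  for (const b of bytes) binary += String.fromCharCode(b);
  return btoa(binary);
}

// Decode and enforce the 32-byte requirement before any importKey call.
function assertValidKeyLength(rawKey: string): Uint8Array {
  const keyBytes = Uint8Array.from(atob(rawKey.trim()), (c) => c.charCodeAt(0));
  if (keyBytes.byteLength !== 32) {
    throw new Error(`Invalid encryption key length: expected 32 bytes, got ${keyBytes.byteLength}`);
  }
  return keyBytes;
}
```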
async invalidate(pattern: string): Promise<void> {
  // KV does not support pattern-based deletion.
  // For exact key invalidation, pass the full key as pattern.
  // This is a best-effort: only works when pattern equals an exact key.
  try {
    await this.kv.put(pattern, JSON.stringify(null), { expirationTtl: 1 });
  } catch {
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Searching for KV interface and invalidation patterns ==="
rg -nP 'interface KVNamespace|expirationTtl:\s*1|invalidate\(pattern' src/ -C 3 2>/dev/null || echo "Pattern search completed"
echo ""
echo "=== Reading the invalidate method from kv-store.ts ==="
cat -n src/storage/kv-store.ts | sed -n '80,95p' 2>/dev/null || echo "File not found or section unavailable"
echo ""
echo "=== Checking for KVNamespace type definitions ==="
fd '.*\.ts$' src/ -x grep -l 'KVNamespace\|interface.*KV' {} \; 2>/dev/null | head -5 || echo "No type definitions found"

Repository: AojdevStudio/gdrive
Length of output: 6306
🌐 Web query:
Cloudflare Workers KVNamespace.put() minimum expirationTtl seconds and delete() method support
💡 Result:
- `KVNamespace.put(..., { expirationTtl })` minimum: `expirationTtl` is in seconds from now, and the minimum allowed value is 60 seconds (targets < 60s in the future aren't supported). [1]
- `KVNamespace.delete()` support: yes, Workers KV bindings support `delete(key): Promise<void>`. Deleting a non-existent key still resolves successfully (and deletion may take time to propagate). [2]
Sources: [1] Cloudflare Workers KV “Write key-value pairs” docs, [2] Cloudflare Workers KV “Delete key-value pairs” docs.
invalidate() fails silently with invalid expirationTtl: 1.
Cloudflare KV requires expirationTtl >= 60 seconds; values below 60 are rejected, and errors are silently swallowed. Two fixes are needed:
- Update the `KVNamespace` interface in src/auth/workers-auth.ts to include the `delete()` method: `delete(key: string): Promise<void>;`
- Update `invalidate()` to use `delete()` when available, with a conservative TTL fallback:
Proposed changes
src/auth/workers-auth.ts — Add to KVNamespace interface:
export interface KVNamespace {
get(key: string): Promise<string | null>;
put(key: string, value: string, options?: { expirationTtl?: number }): Promise<void>;
+ delete(key: string): Promise<void>;
 }

src/storage/kv-store.ts — Update invalidate():
async invalidate(pattern: string): Promise<void> {
try {
+ const kvWithDelete = this.kv as KVNamespace & { delete?: (key: string) => Promise<void> };
+ if (typeof kvWithDelete.delete === 'function') {
+ await kvWithDelete.delete(pattern);
+ return;
+ }
await this.kv.put(pattern, JSON.stringify(null), { expirationTtl: 60 });
} catch {
// Ignore
}
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/storage/kv-store.ts` around lines 84 - 90, The invalidate() call
currently tries to put a JSON null with expirationTtl: 1 (invalid for Cloudflare
KV) and swallows errors; update the KVNamespace declaration in workers-auth.ts
to include delete(key: string): Promise<void> so callers can use native
deletion, and modify src/storage/kv-store.ts invalidate(pattern: string) to
prefer calling this.kv.delete(pattern) when available; if delete is not present,
use this.kv.put with a conservative expirationTtl >= 60 (e.g., 60) as a fallback
and surface/log any errors instead of silently ignoring them.
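The delete-preferring fallback can be sketched against a minimal KV-shaped interface. The types below are illustrative, not the repo's exact `KVNamespace`:

```typescript
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
  delete?(key: string): Promise<void>;
}

async function invalidate(kv: KVLike, key: string): Promise<void> {
  if (typeof kv.delete === 'function') {
    await kv.delete(key);
    return;
  }
  // Cloudflare KV rejects expirationTtl < 60 seconds, so tombstone with
  // the minimum allowed TTL when native delete is unavailable.
  await kv.put(key, JSON.stringify(null), { expirationTtl: 60 });
}
```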
tsconfig.worker.json
Outdated
"src/auth/workers-auth.ts",
"src/storage/kv-store.ts",
"src/sdk/**/*.ts",
"src/modules/**/*.ts"
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Detect Node.js-specific API surface in src/modules/ that would break a Workers build
rg -n --type=ts \
"process\.|require\(|Buffer\b|__dirname|__filename|node:fs|node:path|node:os|createReadStream|createWriteStream|\.readFileSync|\.writeFileSync" \
  src/modules/

Repository: AojdevStudio/gdrive
Length of output: 948
Fix Node.js API incompatibility in src/modules/ before including in Workers build.
src/modules/ contains Node.js-specific APIs that will fail in Cloudflare Workers:
- `Buffer.from()` in src/modules/gmail/utils.ts:59,154 and src/modules/gmail/read.ts:72: `Buffer` does not exist in Workers without polyfilling
- `process.env` in src/modules/calendar/contacts.ts:78: assumes Node environment, not Workers bindings
Since tsconfig.worker.json includes src/modules/**/*.ts but strips Node types with types: [], these modules will either produce type errors at build time or fail silently at runtime. Refactor these modules to remove Node dependencies, provide Workers-compatible polyfills, or exclude them from the Workers build.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tsconfig.worker.json` at line 19, tsconfig.worker.json currently includes
Node-specific modules that use Buffer.from and process.env which break in
Cloudflare Workers; update the offending code and/or build config: replace
Buffer.from(..., 'base64') usages with Workers-safe alternatives (e.g., decode
base64 via atob then create a Uint8Array via Uint8Array.from(atob(str), c =>
c.charCodeAt(0)) or use TextEncoder/TextDecoder where appropriate) in the
functions that call Buffer.from in gmail utility/read functions, and replace
direct reads of process.env in calendar contact code with Workers bindings or
pass the value in as a parameter (use global binding names or an injected config
object) so runtime does not depend on Node globals; alternatively, if some
modules must remain Node-only, remove them from the Workers build by excluding
the module glob from the worker tsconfig include so the Worker bundle no longer
compiles those files.
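For the `Buffer.from(..., 'base64')` call sites, a Workers-safe replacement sketch using only `atob` and `TextDecoder`. Helper names are hypothetical; the base64url normalization reflects Gmail's payload encoding:

```typescript
// Decode standard base64 into raw bytes without Buffer.
function base64ToBytes(b64: string): Uint8Array {
  return Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));
}

// Gmail message bodies use base64url: normalize the alphabet, restore
// padding, then decode the bytes as UTF-8 text.
function base64UrlDecode(b64url: string): string {
  const b64 = b64url.replace(/-/g, '+').replace(/_/g, '/');
  const padded = b64 + '='.repeat((4 - (b64.length % 4)) % 4);
  return new TextDecoder().decode(base64ToBytes(padded));
}
```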
export default {
  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  async fetch(request: Request, env: Env, _ctx: any): Promise<Response> {
    // Only handle POST requests to /mcp (or root)
    const url = new URL(request.url);
    if (request.method !== 'POST' || (url.pathname !== '/' && url.pathname !== '/mcp')) {
      return new Response('gdrive-mcp Worker v4.0.0-alpha\nPOST /mcp to connect.', {
        status: url.pathname === '/' ? 200 : 404,
      });
    }
Missing bearer token authentication — deployed Worker is open to unauthenticated requests.
The migration plan specifies bearer token validation (MCP_BEARER_TOKEN secret) and origin allowlist as security requirements, but neither is implemented. The live endpoint (gdrive-mcp.chinyereirondi.workers.dev) is exposed to any POST request, granting full Google Workspace access (read email, send email, delete files, etc.).
🔒 Add bearer token gate
export interface Env {
GDRIVE_KV: KVNamespace;
GDRIVE_CLIENT_ID: string;
GDRIVE_CLIENT_SECRET: string;
LOG_LEVEL?: string;
+ MCP_BEARER_TOKEN?: string;
}
// ...
async fetch(request: Request, env: Env, _ctx: any): Promise<Response> {
const url = new URL(request.url);
if (request.method !== 'POST' || (url.pathname !== '/' && url.pathname !== '/mcp')) {
return new Response('gdrive-mcp Worker v4.0.0-alpha\nPOST /mcp to connect.', {
status: url.pathname === '/' ? 200 : 404,
});
}
+ // Bearer token gate
+ if (env.MCP_BEARER_TOKEN) {
+ const auth = request.headers.get('Authorization');
+ if (auth !== `Bearer ${env.MCP_BEARER_TOKEN}`) {
+ return new Response(
+ JSON.stringify({ jsonrpc: '2.0', error: { code: -32001, message: 'Unauthorized' }, id: null }),
+ { status: 401, headers: { 'Content-Type': 'application/json' } }
+ );
+ }
+ }
+
   const logger = makeLogger();

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@worker.ts` around lines 53 - 62, The fetch handler currently allows any POST
to / or /mcp; add bearer token authentication by reading the Authorization
header inside fetch and comparing the bearer token to env.MCP_BEARER_TOKEN
(reject with 401 if missing/invalid), and also enforce an origin allowlist by
checking request.headers.get('Origin') against a comma-separated env variable
like MCP_ORIGIN_ALLOWLIST (reject with 403 if not allowed); update fetch to
perform these checks before processing the request so unauthenticated or
disallowed origins never reach the existing request handling logic.
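The proposed gate can be factored into a pure helper so it is unit-testable in isolation. A sketch assuming the header and env names from the patch above (`Authorization`, `MCP_BEARER_TOKEN`):

```typescript
// Returns true when the request may proceed. If no token is configured,
// the gate is open — matching the optional MCP_BEARER_TOKEN design in the
// proposed patch (deployments should always set the secret in practice).
function isAuthorized(authHeader: string | null, expectedToken?: string): boolean {
  if (!expectedToken) return true;
  return authHeader === `Bearer ${expectedToken}`;
}
```

In `fetch`, this would be called as `isAuthorized(request.headers.get('Authorization'), env.MCP_BEARER_TOKEN)` before any MCP handling.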
worker.ts
Outdated
// Stateless mode: omit sessionIdGenerator entirely so the transport doesn't
// maintain per-session state (required for Workers' ephemeral environment).
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const transport = new WebStandardStreamableHTTPServerTransport({} as any);

await server.connect(transport);

return transport.handleRequest(request);
🧩 Analysis chain
🏁 Script executed:
# Search for WebStandardStreamableHTTPServerTransport definition
rg "class WebStandardStreamableHTTPServerTransport|interface.*WebStandardStreamableHTTPServerTransport" -t ts

Repository: AojdevStudio/gdrive
Length of output: 45
🏁 Script executed:
# Also search for the constructor and sessionIdGenerator references
rg "sessionIdGenerator" -t ts -B 2 -A 2

Repository: AojdevStudio/gdrive
Length of output: 624
🏁 Script executed:
# Check for migration plan or documentation files
fd -e md -e txt | xargs grep -l "sessionIdGenerator\|stateless mode" 2>/dev/null | head -20

Repository: AojdevStudio/gdrive
Length of output: 223
🌐 Web query:
WebStandardStreamableHTTPServerTransport MCP SDK sessionIdGenerator stateless mode
💡 Result:
In the MCP TypeScript SDK, WebStandardStreamableHTTPServerTransport (and the Node wrapper) switches between “stateful” and “stateless” purely based on sessionIdGenerator:
- Stateful mode: provide a generator (e.g. `() => crypto.randomUUID()`); the server returns a session id header and later requests are validated against it. [1][2]
- Stateless mode: set `sessionIdGenerator: undefined`; no session id is generated/returned and no session validation is performed. [1][2]
Minimal stateless pattern (recommended): create a fresh transport per HTTP request to avoid cross-request state like _initialized / request routing maps being shared between clients. [1][2]
const transport = new WebStandardStreamableHTTPServerTransport({
sessionIdGenerator: undefined, // stateless
});
await server.connect(transport);
return transport.handleRequest(request);

[1][2]
Express example (Node wrapper) does the same:
new NodeStreamableHTTPServerTransport({ sessionIdGenerator: undefined }) per POST, and typically returns 405 for GET/DELETE in stateless setups. [3]
Implication: if you need a GET SSE stream, server-initiated messages, resumability, or multi-request continuity, use stateful mode (i.e., provide sessionIdGenerator). [1][2]
Sources: [1] [2] [3]
🏁 Script executed:
# Read the migration plan to see specific guidance
head -100 docs/planning/streamable-http-cloudflare-migration.md

Repository: AojdevStudio/gdrive
Length of output: 4681
🏁 Script executed:
# Search for any WebStandardStreamableHTTPServerTransport usage in the codebase
rg "new WebStandardStreamableHTTPServerTransport" -t ts -B 3 -A 3

Repository: AojdevStudio/gdrive
Length of output: 479
🏁 Script executed:
# Look for any test files that show proper constructor usage
fd -e test.ts -e spec.ts | xargs grep -l "WebStandardStreamableHTTPServerTransport" 2>/dev/null

Repository: AojdevStudio/gdrive
Length of output: 47
Use { sessionIdGenerator: undefined } instead of {} for stateless mode.
The MCP SDK documentation and this repo's migration plan both specify that stateless mode requires explicitly setting sessionIdGenerator: undefined. Passing `{}` behind an `as any` cast bypasses the options type check; setting the property explicitly documents the stateless intent and lets the correct options type be used. Replace with:
Recommended fix
const transport = new WebStandardStreamableHTTPServerTransport({
sessionIdGenerator: undefined,
});
Remove the as any cast once the correct options type is used.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@worker.ts` around lines 98 - 105, The transport instantiation uses an empty
casted object which doesn't explicitly set stateless mode; replace the "{} as
any" in the WebStandardStreamableHTTPServerTransport constructor with an options
object that sets sessionIdGenerator: undefined (i.e., pass { sessionIdGenerator:
undefined }) and remove the as any cast so the correct options type is used;
keep the rest of the flow (await server.connect(transport); return
transport.handleRequest(request);) unchanged.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2565d6e5c9
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
  process.exit(1);
}

await tokenManager.saveTokens(tokens);
Set target key version before re-encrypting tokens
rotateKey() calculates newVersion and verifies the new env key, but it immediately calls tokenManager.saveTokens(tokens) without switching the active key version first. Because TokenManager.saveTokens encrypts with the current key version, this command reports a successful rotation while leaving tokens encrypted under the old key; after operators update GDRIVE_TOKEN_CURRENT_KEY_VERSION and eventually retire the old key, token decryption can fail.
Useful? React with 👍 / 👎.
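The required ordering (switch the active key version, then re-encrypt) can be modeled with a toy versioned store. This is an illustration, not the repo's `TokenManager`; the `key:plaintext` tagging stands in for real encryption:

```typescript
class VersionedStore {
  private data = new Map<string, { version: number; ciphertext: string }>();

  constructor(private keys: Map<number, string>, private currentVersion: number) {}

  save(id: string, plaintext: string): void {
    // save() always encrypts under the *current* version — the same
    // property that makes rotation order matter.
    const key = this.keys.get(this.currentVersion)!;
    this.data.set(id, { version: this.currentVersion, ciphertext: `${key}:${plaintext}` });
  }

  rotate(id: string, newVersion: number): void {
    const record = this.data.get(id)!;
    const oldKey = this.keys.get(record.version)!;
    const plaintext = record.ciphertext.slice(oldKey.length + 1);
    this.currentVersion = newVersion; // switch BEFORE re-saving
    this.save(id, plaintext);
  }

  versionOf(id: string): number {
    return this.data.get(id)!.version;
  }
}
```

Without the version switch before `save()`, `rotate()` would report success while leaving the record under the old key, which is exactly the failure mode described above.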
src/auth/workers-auth.ts
Outdated
const tokens = JSON.parse(raw) as WorkersTokenData;
const needsRefresh = Date.now() >= tokens.expiry_date - preemptiveMs;

if (!needsRefresh) {
  return tokens.access_token;
Reject encrypted token blobs in Workers auth path
getValidAccessToken() assumes the KV payload is plaintext WorkersTokenData and returns tokens.access_token directly, but the documented bootstrap path points users to run auth, and authenticateAndSave() stores tokens via TokenManager.saveTokens (encrypted versioned format). If that saved blob is uploaded to KV, expiry_date/access_token are missing, so this code can return an undefined bearer token and all Google API calls fail at runtime.
Useful? React with 👍 / 👎.
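A defensive parse that rejects non-plaintext payloads would surface this misconfiguration at startup rather than as an undefined bearer token at call time. A sketch using the field names from the comment (`access_token`, `expiry_date`); the interface shape is otherwise assumed:

```typescript
interface WorkersTokenData {
  access_token: string;
  refresh_token: string;
  expiry_date: number;
}

function parseWorkersTokens(raw: string): WorkersTokenData {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error('KV payload is not JSON — was an encrypted token blob uploaded?');
  }
  const t = parsed as Partial<WorkersTokenData>;
  // An encrypted versioned blob lacks these plaintext fields entirely.
  if (typeof t.access_token !== 'string' || typeof t.expiry_date !== 'number') {
    throw new Error('KV payload missing access_token/expiry_date — expected plaintext WorkersTokenData');
  }
  return t as WorkersTokenData;
}
```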
if (next) {
  this.queues.set(service, queue);
  next();
Re-acquire slot when waking queued rate-limited call
When release() wakes a queued request, it calls next() without incrementing the service counter for that resumed request. After queueing begins, counters under-reports in-flight calls, so subsequent acquire() calls can admit more than maxRequests concurrent operations, weakening the quota protection this limiter is meant to enforce.
Useful? React with 👍 / 👎.
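A counter-correct handoff can be sketched as follows (illustrative class, not the repo's `RateLimiter`): on wake, `release()` transfers its slot to the waiter instead of decrementing, so the waiter must not increment again:

```typescript
class CountingLimiter {
  private counters = new Map<string, number>();
  private queues = new Map<string, Array<() => void>>();

  constructor(private maxRequests = 2) {}

  async acquire(service: string): Promise<void> {
    const inFlight = this.counters.get(service) ?? 0;
    if (inFlight < this.maxRequests) {
      this.counters.set(service, inFlight + 1);
      return;
    }
    // At capacity: park until release() hands this caller a slot.
    await new Promise<void>((resolve) => {
      const queue = this.queues.get(service) ?? [];
      queue.push(resolve);
      this.queues.set(service, queue);
    });
    // Slot was transferred by release(), which left the counter
    // incremented on this caller's behalf — do not increment again.
  }

  release(service: string): void {
    const queue = this.queues.get(service) ?? [];
    const next = queue.shift();
    if (next) {
      // Hand the slot directly to the waiter: the counter keeps its
      // value, so in-flight count never exceeds maxRequests.
      this.queues.set(service, queue);
      next();
    } else {
      this.counters.set(service, (this.counters.get(service) ?? 0) - 1);
    }
  }
}
```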
- Bump version badge to v4.0.0
- Add Cloudflare Workers badge; retire Docker from hero
- New "What's New in v4.0.0" section with zero-install narrative
- Restructure Quick Start: Remote (Cloudflare) → Local → Docker
- Add `--scope user` and `--scope project` Claude Code CLI commands with explanation of when to use each
- Credit @mattzcarey (x.com/mattzcarey) for Cloudflare blog reference
- Link Cloudflare "Code Mode MCP" article as inspiration
- Update The Story section with v4 closing paragraph
- Add Cloudflare Workers to roadmap checklist

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
📊 Type Coverage Report: 98.74%. This PR's TypeScript type coverage analysis is complete.
- Add INSTALL.md: written for AI agents (Claude Code) to execute interactively. Covers 6 phases: prerequisites, Google Cloud project setup, OAuth credentials, local auth, Cloudflare KV + deploy, and Claude Code configuration. Uses ASK:/USER ACTION:/run markers so the agent knows exactly what to do vs what needs browser interaction.
- Fix README Quick Start Option 1: replace false "zero-install" claim with accurate prerequisites (Cloudflare account, Node.js 18+, Wrangler CLI). Expand to 7 concrete steps matching actual deploy flow: local auth → KV upload → secrets → wrangler deploy.
- Add `just install` recipe: one command launches Claude Code with INSTALL.md as context for a fully guided interactive setup.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
📊 Type Coverage Report: 98.74%. This PR's TypeScript type coverage analysis is complete.
- Add just CLI as prerequisite for Option 1 (required for `just install`)
- Link: just.systems/man/en/packages.html with install commands for macOS (brew), Windows (winget), and cross-platform (cargo)
- Hyperlink Node.js 18+ → nodejs.org/en/download
- Hyperlink Wrangler CLI → developers.cloudflare.com/workers/wrangler
- Hyperlink Google Cloud project → local setup guide (both Option 1 & 2)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
📊 Type Coverage Report: 98.74%. This PR's TypeScript type coverage analysis is complete.
Remove the get-shit-done (GSD) agent framework, all associated commands, agents, hooks, and templates. Also remove Gemini CLI agent configs, superseded research docs, migration guide, and stale wrangler.toml. Add .wrangler/ and .claude/skills/ to .gitignore to prevent accidental commit of build cache and external skill symlinks.

Co-Authored-By: AOJDevStudio
📊 Type Coverage Report: 98.74%. This PR's TypeScript type coverage analysis is complete.
- fix no-undef/curly failures in workers auth and KV crypto helpers
- tighten stdio/bootstrap/sandbox control-flow braces for lint parity
- remove unused worker declaration and keep request auth path lint-clean

Co-Authored-By: AOJDevStudio
📊 Type Coverage Report: 98.73%. This PR's TypeScript type coverage analysis is complete.
- sample each iteration count multiple times and use median timing
- relax ratio bounds to account for noisy shared CI runners
- keep benchmark intent while avoiding false negatives

Co-Authored-By: AOJDevStudio
📊 Type Coverage Report: 98.73%. This PR's TypeScript type coverage analysis is complete.
- add workers/server/sdk/storage unit tests used by CI coverage gate
- share SDK rate limiter across execute calls and fix limiter release handoff
- apply worker build include tweak and key-rotation save version fix

Co-Authored-By: AOJDevStudio
📊 Type Coverage Report: 98.73%. This PR's TypeScript type coverage analysis is complete.
- download only *-scan-results artifacts for Security Summary
- exclude dockerbuild artifacts that fail extraction in summary job
- keep summary generation behavior unchanged

Co-Authored-By: AOJDevStudio
📊 Type Coverage Report: 98.73%. This PR's TypeScript type coverage analysis is complete.
- replace multiline template literal with safe array join in github-script step
- prevent YAML/script truncation that caused Unexpected end of input
- retain identical comment body content

Co-Authored-By: AOJDevStudio
📊 Type Coverage Report: 98.73%. This PR's TypeScript type coverage analysis is complete.
🔒 Security Scan Summary

Generated on: Thu Feb 26 22:06:05 UTC 2026

Scan Results
Summary
Recommendations
Security report generated by Claude Code
Summary

- Replace 6 legacy operation tools (`drive`, `sheets`, `forms`, `docs`, `gmail`, `calendar`) with 2 tools: `search` (SDK spec query) and `execute` (sandboxed JS with full googleapis SDK)
- Add Cloudflare Workers entry point (`worker.ts`) with `WebStandardStreamableHTTPServerTransport` and KV-backed token storage

Breaking Changes
The 6 legacy operation-based tools are replaced entirely. Any MCP client configurations referencing `drive`, `sheets`, `forms`, `docs`, `gmail`, or `calendar` tools must be updated to use the new `search` / `execute` tools.
- `worker.ts`
- `wrangler.toml`
- `tsconfig.worker.json`
- `src/sdk/`
- `src/server/`
- `src/storage/kv-store.ts`
- `src/auth/workers-auth.ts`

Test Plan
- `npm run build` succeeds cleanly
- `npm run type-check` passes with no errors
- `search` tool returns valid SDK specs
- `execute` tool runs sandboxed JS against googleapis

Notes
- The `breaking-change` label needs to be created in the repo before it can be applied.

Let CI run — merge decision after green.
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation
Chores