feat: Google Drive + Gmail API proxy routes (#160)
Adds /api/google/gdrive/* and /api/google/email/* proxy routes:

- GET /gdrive/files — list files (supports q, fields, pageSize, pageToken)
- GET /gdrive/files/:id — file metadata
- GET /gdrive/files/:id/content — download file bytes
- GET /email/messages — list messages (supports q, maxResults, pageToken)
- GET /email/messages/:id — get message (supports format)
- GET /email/messages/:id/attachments/:aid — download attachment bytes

Token source priority: KV rotation cache → 1Password broker → env var.
Consumers: chittystorage (source inventory), chittyevidence-db (intake).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Deploying with

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ✅ Deployment successful! View logs | chittyconnect | c7c41d5 | Apr 24 2026, 10:07 AM |
Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 33 minutes and 36 seconds.

ℹ️ Review info

⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:

📒 Files selected for processing (6)
📝 Walkthrough

Adds a Google API integration and routes mounted at /api/google.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Router
    participant KVCache as KV Cache
    participant Rotation as SecretRotation
    participant JWT as JWT Helper
    participant GoogleAPI as Google API

    Client->>Router: Request /api/google/...
    Router->>KVCache: get("secret:gdrive:access_token")
    alt Token cached
        KVCache-->>Router: access_token
    else Cache miss
        Router->>Rotation: rotateGDriveToken()
        Rotation->>KVCache: get("secret:gdrive:service_account")
        alt Service account present
            Rotation->>JWT: createJwt(claims, privateKeyPem)
            JWT-->>Rotation: signed_jwt
            Rotation->>GoogleAPI: POST /token (jwt_assertion)
            GoogleAPI-->>Rotation: access_token
        else Service account absent
            Rotation->>KVCache: get("secret:gdrive:refresh_token")
            Rotation->>GoogleAPI: POST /token (refresh_token)
            GoogleAPI-->>Rotation: access_token
        end
        Rotation->>KVCache: put("secret:gdrive:access_token", token, ttl)
        Rotation-->>Router: access_token
    end
    Router->>GoogleAPI: proxied request with Authorization: Bearer token
    GoogleAPI-->>Router: response (JSON or binary)
    Router-->>Client: proxied/normalized response
```
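The "Service account present" branch of the diagram exchanges a signed JWT for an access token via Google's OAuth 2.0 JWT-bearer flow. A minimal sketch of the claims that exchange requires — the shape of the `sa` object (`client_email`, `impersonate`, `scopes`) is an assumption drawn from the review thread below, not the actual module:

```javascript
// Claims for Google's OAuth 2.0 JWT-bearer flow, as used by the
// service-account branch. Claim names follow Google's documented flow;
// the `sa` object shape is assumed for illustration.
function buildJwtClaims(sa, nowSeconds) {
  return {
    iss: sa.client_email,                       // service account identity
    sub: sa.impersonate,                        // domain-wide delegation target
    scope: sa.scopes,                           // space-delimited scope string
    aud: "https://oauth2.googleapis.com/token", // token endpoint
    iat: nowSeconds,
    exp: nowSeconds + 3600,                     // Google caps assertion lifetime at 1h
  };
}
// The RS256-signed JWT is then POSTed to the aud URL as:
//   grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<jwt>
```

The access token returned by that POST is what gets cached under `secret:gdrive:access_token` with a TTL.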
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Actionable comments posted: 2
🧹 Nitpick comments (2)
src/api/routes/google.js (2)
52-57: Unhandled JSON parse error on successful upstream responses.

`await response.json()` on line 57 is not guarded — if Google ever returns a non-JSON body on a 2xx (e.g. HTML from an edge/captive-portal interstitial, or an empty body), this will throw out of `googleProxy` and surface as a 500 with no context. The error path on line 53 already uses `.catch(() => "")` defensively; mirroring that on success keeps the proxy's behavior consistent.

♻️ Proposed fix

```diff
- return { ok: true, data: await response.json() };
+ try {
+   return { ok: true, data: await response.json() };
+ } catch (err) {
+   return { ok: false, status: 502, error: `Google API returned non-JSON response: ${err.message}` };
+ }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/api/routes/google.js` around lines 52-57: the success branch currently calls `await response.json()` unguarded and can throw if the upstream returns non-JSON; update the `googleProxy` success path to defensively parse the body — first attempt `response.json()` with a catch that falls back to safely reading `response.text()` (using `.catch(() => "")` to mirror the error path) and return a consistent `{ ok: true, data: <parsed-or-text> }` shape (or null/empty string when parsing fails).
28-37: Consider invalidating the KV-cached token on 401 from upstream.

`getGoogleToken` always returns the KV value when present. If the cached token is rotated out-of-band or becomes invalid before the 50-minute refresh, every request will 401 until the next rotation tick. A minimal improvement: when `googleProxy` sees a 401 from Google, invalidate `secret:gdrive:access_token` in KV (or trigger the rotation worker) so subsequent requests fall through to the broker/env fallback. Not blocking, but this is what usually bites in production after token rotation misfires.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/api/routes/google.js` around lines 28-37: `getGoogleToken` currently always returns the KV cached token when present; update the `googleProxy` request flow to detect HTTP 401 responses from Google and invalidate the cached token by calling `env.CREDENTIAL_CACHE.delete("secret:gdrive:access_token")` (or invoking the rotation worker) so subsequent calls to `getGoogleToken` fall back to `getCredential`.
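The suggested one-shot retry can be sketched as follows. All collaborators are injected here purely for illustration; in the real route, `invalidate` would be `env.CREDENTIAL_CACHE.delete(...)` and `getToken` would be `getGoogleToken` — these wiring details are assumptions, not the module's actual code:

```javascript
// One-shot retry: on a 401 from Google, drop the cached token and
// re-resolve, which falls through to the broker/env fallback.
async function fetchWithTokenRetry(url, { getToken, invalidate, doFetch }) {
  let res = await doFetch(url, await getToken());
  if (res.status === 401) {
    await invalidate(); // e.g. env.CREDENTIAL_CACHE.delete("secret:gdrive:access_token")
    res = await doFetch(url, await getToken()); // second resolve skips the stale cache
  }
  return res;
}
```

Capping the retry at one attempt avoids a loop when the credential itself (not just the cache) is bad.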
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/api/routes/google.js`:
- Around line 178-181: The base64url decode logic using atob on the variable
data must first convert URL-safe chars, append '=' padding until length % 4 ===
0, and be wrapped in a try/catch to prevent atob InvalidCharacterError from
crashing requests; update the block that computes base64 and binary (variables
data, base64, binary) to (1) replace '-'/'_' as you already do, (2) add '='
padding to the base64 string until its length is a multiple of 4, and (3)
perform the atob conversion inside a try/catch that logs or handles the error
and fails gracefully instead of throwing a 500.
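The three steps described above — restore the standard alphabet, pad to a multiple of 4, guard `atob` — can be sketched as a small standalone helper (a hypothetical version for illustration, not the module's actual code):

```javascript
// Defensive base64url → bytes decoder for Gmail attachment data.
function decodeBase64Url(data) {
  // 1) convert URL-safe characters back to the standard base64 alphabet
  let base64 = data.replace(/-/g, "+").replace(/_/g, "/");
  // 2) append '=' padding until the length is a multiple of 4
  while (base64.length % 4 !== 0) base64 += "=";
  // 3) guard atob so a bad payload can't throw InvalidCharacterError as a 500
  try {
    const binary = atob(base64);
    return Uint8Array.from(binary, (ch) => ch.charCodeAt(0));
  } catch (err) {
    return null; // caller maps this to a controlled error response
  }
}
```

A `null` return lets the route respond with a clear 4xx/502 instead of an uncaught exception.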
- Around line 103-122: The /gdrive/files/:fileId/content handler currently
always requests /files/{fileId}?alt=media which 403s for Google-native docs;
modify the handler (googleRoutes.get("/gdrive/files/:fileId/content")) to first
fetch file metadata (/files/{id}?fields=mimeType) and if mimeType is a native
Google type (e.g.,
application/vnd.google-apps.document/spreadsheet/presentation) call the export
endpoint (/files/{id}/export?mimeType=application/pdf or a configurable default)
and stream that response, otherwise continue to fetch alt=media for
binary-backed files; additionally, ensure all upstream URL interpolations use
encodeURIComponent for fileId (and similarly for messageId and attachmentId
elsewhere) before inserting into DRIVE_API or other proxied URLs to be defensive
against unsafe characters.
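The export-vs-media branch plus defensive encoding described above can be sketched as a pure URL builder. The `DRIVE_API` constant and the set of native types mirror what the review mentions; treat the exact names as assumptions:

```javascript
const DRIVE_API = "https://www.googleapis.com/drive/v3"; // assumed to match the module's constant

const GOOGLE_NATIVE_TYPES = new Set([
  "application/vnd.google-apps.document",
  "application/vnd.google-apps.spreadsheet",
  "application/vnd.google-apps.presentation",
]);

// Native Google docs must go through /export (alt=media 403s for them);
// binary-backed files download via alt=media. fileId is caller-supplied,
// so it is always encodeURIComponent-ed before interpolation.
function driveDownloadUrl(fileId, mimeType, exportMimeType = "application/pdf") {
  const id = encodeURIComponent(fileId);
  if (GOOGLE_NATIVE_TYPES.has(mimeType)) {
    return `${DRIVE_API}/files/${id}/export?mimeType=${encodeURIComponent(exportMimeType)}`;
  }
  return `${DRIVE_API}/files/${id}?alt=media`;
}
```

The route would call this after the metadata fetch that resolves `mimeType`.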
🪄 Autofix (Beta)
✅ Autofix completed
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 61490df4-5e76-4771-bb46-3be7d3210d22
📒 Files selected for processing (2)
src/api/router.js
src/api/routes/google.js
Pull request overview
Adds first-class Google Drive and Gmail proxy routes under /api/google/* to support internal services that need Drive inventory and Gmail backfill access, using the existing credential broker with an additional KV-rotated token fast path.
Changes:
- Introduces src/api/routes/google.js with Drive Files and Gmail Messages/Attachments proxy endpoints.
- Adds the new googleRoutes router to the main API router at /api/google.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| src/api/routes/google.js | Implements token retrieval and proxy handlers for Drive and Gmail endpoints, including streaming Drive downloads and decoding Gmail attachments. |
| src/api/router.js | Mounts the new Google proxy router at /api/google. |
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2ffa0d7b0d
…-refresh

- rotateGDriveToken now tries the service account JWT flow first (stored in CREDENTIAL_CACHE KV at secret:gdrive:service_account), falls back to the OAuth2 refresh token flow
- jwt-helper.js: RS256 JWT signing using the Web Crypto API (no npm deps)
- Token auto-rotates every 50 min via the existing cron schedule
- No manual credential management needed — service account key in KV

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Note: Unit test generation is a public access feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Generating unit tests... This may take up to 20 minutes.
Note: Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Fixes Applied Successfully

Fixed 1 file(s) based on 2 unresolved review comments. Files modified:

Commit:
The changes have been pushed to the
Time taken:
Fixed 1 file(s) based on 2 unresolved review comments. Co-authored-by: CodeRabbit <noreply@coderabbit.ai>
✅ Created PR with unit tests: #161
Actionable comments posted: 3
🧹 Nitpick comments (5)
src/api/routes/google.js (3)
42-58: Add an upstream timeout so a hung Google API call can't tie up a request indefinitely.

`fetch` on Workers has a subrequest wall-clock limit, but an explicit `AbortSignal.timeout(...)` produces clean 504-style errors and keeps tail latency bounded across the download paths (list, metadata, content, attachment). Applying it in `googleProxy` covers the JSON proxies; the two streaming paths (`/gdrive/files/:fileId/content`, and the metadata fetch on line 113) would need the same treatment.

```diff
- const response = await fetch(googleUrl, {
-   headers: { Authorization: `Bearer ${token}` },
- });
+ const response = await fetch(googleUrl, {
+   headers: { Authorization: `Bearer ${token}` },
+   signal: AbortSignal.timeout(15_000),
+ });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/routes/google.js` around lines 42 - 58, The googleProxy function can hang indefinitely if the upstream Google API call stalls; wrap the fetch in an AbortSignal.timeout to bound wall-clock time and return a 504-style error on timeout. Modify googleProxy to create a timeout signal (using AbortSignal.timeout with a sensible ms value), pass that signal to fetch, catch the abort/timeout error from getGoogleToken/fetch, and when the signal triggers return { ok: false, status: 504, error: "Google API request timed out" } (preserve existing behavior for other non-OK responses); apply the same timeout pattern to the streaming endpoints (the file content route and the metadata fetch) so all three code paths use AbortSignal.timeout.
199-237: Attachment endpoint returns `application/octet-stream` — consider accepting a MIME type hint.

Gmail's `attachments.get` response doesn't include the attachment's MIME type — it lives in the parent message's `payload.parts[].mimeType`. Since consumers (chittyevidence-db intake) typically already have the message metadata, allow them to pass the expected type via `?mimeType=` so downstream storage and previewers don't have to re-detect it:

```diff
- return new Response(binary, {
-   headers: {
-     "Content-Type": "application/octet-stream",
-     "Content-Length": String(size || binary.length),
-   },
- });
+ const mimeType = c.req.query("mimeType") || "application/octet-stream";
+ return new Response(binary, {
+   headers: {
+     "Content-Type": mimeType,
+     "Content-Length": String(binary.length),
+   },
+ });
```

Also note: `size` from Gmail is the decoded byte count, which should always equal `binary.length`; using `binary.length` unconditionally removes a failure mode where a stale/mismatched `size` mis-frames the body.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/routes/google.js` around lines 199 - 237, Update the attachment handler (googleRoutes.get("/email/messages/:messageId/attachments/:attachmentId")) to accept an optional mimeType query parameter from c.req.query() and use it for the response Content-Type header if present (fall back to "application/octet-stream" otherwise); always set Content-Length using binary.length (not the Gmail-provided size) to avoid stale size mismatches; keep the existing base64 → binary decoding flow and error handling using the existing variables (data, size, base64, binary) and ensure the final Response uses the chosen content type and binary.length for Content-Length.
131-139: Hard-coded PDF export loses fidelity for Sheets/Slides — consider making `exportMimeType` configurable.

Defaulting to `application/pdf` works for Docs but produces a less useful export for Sheets (xlsx is typically preferred) and Slides. Allow the caller to override via query param, e.g. `?exportMimeType=application/vnd.openxmlformats-officedocument.spreadsheetml.sheet`, and pick a type-appropriate default when omitted.

♻️ Proposed change

```diff
- let downloadUrl;
- if (googleNativeTypes.includes(mimeType)) {
-   // Use export endpoint for Google-native files (export as PDF by default)
-   const exportMimeType = encodeURIComponent("application/pdf");
-   downloadUrl = `${DRIVE_API}/files/${encodedFileId}/export?mimeType=${exportMimeType}`;
- } else {
+ const defaultExportFor = {
+   "application/vnd.google-apps.document": "application/pdf",
+   "application/vnd.google-apps.spreadsheet":
+     "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
+   "application/vnd.google-apps.presentation":
+     "application/vnd.openxmlformats-officedocument.presentationml.presentation",
+ };
+ let downloadUrl;
+ if (defaultExportFor[mimeType]) {
+   const requested = c.req.query("exportMimeType") || defaultExportFor[mimeType];
+   downloadUrl = `${DRIVE_API}/files/${encodedFileId}/export?mimeType=${encodeURIComponent(requested)}`;
+ } else {
    // Use alt=media for binary-backed files
    downloadUrl = `${DRIVE_API}/files/${encodedFileId}?alt=media`;
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/api/routes/google.js` around lines 131-139: the current hard-coded `exportMimeType = "application/pdf"` in the download URL branch loses fidelity for Google Sheets/Slides; update the route to read an optional query parameter (e.g., `req.query.exportMimeType`) and, if absent, choose a sensible default based on the source mimeType: for Google Docs keep application/pdf, for Google Sheets use application/vnd.openxmlformats-officedocument.spreadsheetml.sheet, for Google Slides use application/vnd.openxmlformats-officedocument.presentationml.presentation (fall back to application/pdf for unknown native types). Ensure you URL-encode the chosen exportMimeType with encodeURIComponent before building downloadUrl; optionally validate the provided query value before using it.

src/services/jwt-helper.js (2)
45-60: PKCS#1 vs PKCS#8 mismatch is silently accepted by the regex.

Lines 47-48 strip both `BEGIN PRIVATE KEY` (PKCS#8) and `BEGIN RSA PRIVATE KEY` (PKCS#1) headers, but `crypto.subtle.importKey` is hard-coded to `"pkcs8"` on line 54. A PKCS#1-encoded key will decode to bytes but fail import with a non-obvious `DataError`. Google service-account `private_key` values are always PKCS#8, so this is fine for the current caller; however, a quick guard makes failures self-explaining:

🛡️ Proposed guard

```diff
  async function importPrivateKey(pem) {
+   if (/BEGIN RSA PRIVATE KEY/.test(pem)) {
+     throw new Error("PKCS#1 RSA keys are not supported; convert to PKCS#8 (BEGIN PRIVATE KEY)");
+   }
    const pemBody = pem
-     .replace(/-----BEGIN (RSA )?PRIVATE KEY-----/g, "")
-     .replace(/-----END (RSA )?PRIVATE KEY-----/g, "")
+     .replace(/-----BEGIN PRIVATE KEY-----/g, "")
+     .replace(/-----END PRIVATE KEY-----/g, "")
      .replace(/\s/g, "");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/jwt-helper.js` around lines 45 - 60, importPrivateKey currently strips both PKCS#8 and PKCS#1 PEM headers but always calls crypto.subtle.importKey("pkcs8"), which causes a confusing DataError if a PKCS#1 ("-----BEGIN RSA PRIVATE KEY-----") key is passed; update importPrivateKey to inspect the original PEM header (look for "-----BEGIN RSA PRIVATE KEY-----" vs "-----BEGIN PRIVATE KEY-----") and if the RSA/PKCS#1 header is present throw a clear error (e.g., "PKCS#1 (RSA PRIVATE KEY) is not supported; please provide a PKCS#8 PRIVATE KEY or convert it") while only stripping and importing when the PKCS#8 header is detected, leaving the importKey call and parameters on the existing RSASSA-PKCS1-v1_5/SHA-256 usage unchanged.
41-41: Minor: `String.fromCharCode(...data)` can overflow the call stack for large buffers.

For the JWT use case here (short header/payload/signature ≤ ~256 bytes for RSA-2048), this is safe. If `base64url` is ever reused for larger byte arrays, the spread can exceed the JS argument limit (~65K on some engines). Consider chunking or a reducer if the helper becomes more general-purpose.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/jwt-helper.js` at line 41, Replace the unsafe spread usage in the base64url helper (the line using String.fromCharCode(...data)) with a chunked conversion to avoid exceeding the JS argument limit: iterate over the Uint8Array in safe-sized slices (e.g. 0x8000 bytes) and call String.fromCharCode.apply(null, slice) for each chunk, concatenating to a single string, then call btoa on that string; update the code in the base64url/related function to use this chunked approach instead of the direct spread.
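The chunked conversion the comment describes can be sketched as a standalone helper (illustrative only — not the module's actual `base64url`):

```javascript
// base64url-encode a Uint8Array without spreading it into one giant
// argument list (String.fromCharCode(...data) can overflow for large inputs).
function base64url(data) {
  const CHUNK = 0x8000; // 32K arguments per call stays well under engine limits
  let binary = "";
  for (let i = 0; i < data.length; i += CHUNK) {
    binary += String.fromCharCode.apply(null, data.subarray(i, i + CHUNK));
  }
  // standard base64, then URL-safe alphabet with padding stripped
  return btoa(binary).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}
```

Behavior is identical to the spread version for small inputs, but large buffers no longer risk a `RangeError`.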
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/api/routes/google.js`:
- Around line 149-154: The returned Response is always setting "Content-Length"
even when upstream lacks it; change the header construction around the new
Response(...) so you only include "Content-Length" when
response.headers.get("Content-Length") is truthy (omit the header if absent) and
also copy through "Content-Disposition" from response.headers (e.g.,
response.headers.get("Content-Disposition")) so downloaders receive a filename;
modify the header-building logic used where new Response(response.body, {
headers: { ... } }) is created in src/api/routes/google.js to conditionally add
Content-Length and to propagate Content-Disposition.
- Around line 19-37: Import the requireServiceToken middleware and register it
on the googleRoutes router so all routes validate service credentials: add an
import for requireServiceToken and then call googleRoutes.use("*",
requireServiceToken("google")) immediately after the googleRoutes = new Hono()
declaration (so the middleware runs for every route defined in this module and
enforces the "google" service token).
In `@src/services/secret-rotation.js`:
- Around line 227-234: Before constructing the JWT claims, validate that the
service-account object (sa) contains the expected fields and normalize scopes:
ensure sa.impersonate is present and non-empty (used for the sub claim) and
ensure sa.scopes exists and is a space-delimited string; if sa.scopes is an
array, join with a space, and if either required field is missing or invalid,
throw/return a descriptive error (e.g., "missing service-account field:
impersonate" or "invalid scopes: must be string or array of strings"). Insert
this validation/normalization immediately before the code that builds the claims
object (where claims is created) so claims.sub uses sa.impersonate and
claims.scope uses the normalized string; also add/update documentation/README
for the secret:gdrive:service_account KV shape to require { client_email,
private_key, impersonate, scopes }.
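The validation and scope normalization described above can be sketched as a small guard run before the claims are built. The `{ client_email, private_key, impersonate, scopes }` shape follows the proposal in this comment, not a confirmed schema:

```javascript
// Validate the secret:gdrive:service_account KV object and normalize
// scopes to a space-delimited string before building JWT claims.
function normalizeServiceAccount(sa) {
  for (const field of ["client_email", "private_key", "impersonate"]) {
    if (!sa[field]) throw new Error(`missing service-account field: ${field}`);
  }
  let scope;
  if (typeof sa.scopes === "string" && sa.scopes.trim()) {
    scope = sa.scopes.trim();
  } else if (Array.isArray(sa.scopes) && sa.scopes.length) {
    scope = sa.scopes.join(" "); // arrays are accepted and joined
  } else {
    throw new Error("invalid scopes: must be string or array of strings");
  }
  return { ...sa, scopes: scope };
}
```

Failing fast here turns a cryptic Google `invalid_grant` into a descriptive configuration error.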
🪄 Autofix (Beta)
✅ Autofix completed
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0436d0b2-0835-47dc-8e80-cbd161d95b05
📒 Files selected for processing (3)
src/api/routes/google.js
src/services/jwt-helper.js
src/services/secret-rotation.js
Note: Docstrings generation - SUCCESS
Docstrings generation was requested by @chitcommit.

* #160 (comment)

The following files were modified:

* `src/api/routes/google.js`
* `src/services/jwt-helper.js`
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Forwards the upstream Content-Disposition header so downloaders receive a sensible filename. Content-Length conditional check was already applied by prior autofix; only Content-Disposition propagation was missing. Co-authored-by: @chitcommit <chitcommit@users.noreply.github.com>
Note: Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Fixes Applied Successfully

Fixed 3 file(s) based on 2 unresolved review comments. Files modified:

Commit:
The changes have been pushed to the
Time taken:
✅ Created PR with unit tests: #164
Fixed 3 file(s) based on 2 unresolved review comments. Co-authored-by: CodeRabbit <noreply@coderabbit.ai>
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/api/routes/google.js (1)
253-258: Optional: surface a filename and use the actual decoded length.

The attachment response advertises `application/octet-stream` with no `Content-Disposition`, so downstream consumers (chittystorage, chittyevidence intake) have no filename to persist and must round-trip to the message payload's `filename` to find one. Since this route already has `messageId` + `attachmentId`, a caller-supplied `filename` query param (or one derived from the message parts via an extra lookup) would remove that round-trip.

Also minor: `size || binary.length` is belt-and-suspenders — `size` from Gmail is the decoded byte length, so it should always equal `binary.length`. Preferring `binary.length` avoids a mismatch if the upstream ever drifts.

♻️ Proposed fix

```diff
- const { data, size } = result.data;
+ const { data } = result.data;
  if (!data) return c.json({ error: "No attachment data returned" }, 404);
@@
- return new Response(binary, {
-   headers: {
-     "Content-Type": "application/octet-stream",
-     "Content-Length": String(size || binary.length),
-   },
- });
+ const filename = c.req.query("filename");
+ const headers = {
+   "Content-Type": "application/octet-stream",
+   "Content-Length": String(binary.length),
+ };
+ if (filename) {
+   headers["Content-Disposition"] = `attachment; filename="${filename.replace(/"/g, "")}"`;
+ }
+ return new Response(binary, { headers });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/routes/google.js` around lines 253 - 258, The response currently returns binary via new Response(...) without a Content-Disposition and uses size || binary.length; update the route handler in src/api/routes/google.js to: read a caller-supplied filename query param (e.g., ?filename=...) and, if present, add a header "Content-Disposition" with a sensible attachment filename (attachment; filename="...") so downstream consumers can persist files without extra lookups, and set "Content-Length" to the actual decoded byte length (prefer binary.length or Buffer.byteLength(binary) instead of size || binary.length) to avoid mismatch; ensure the new headers replace the existing headers object around the new Response(...) call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/api/routes/google.js`:
- Around line 138-152: The google file-download branch currently only treats
Docs/Sheets/Slides as Google-native; update the googleNativeTypes array to
include additional native MIME types ("application/vnd.google-apps.drawing",
"application/vnd.google-apps.form", "application/vnd.google-apps.script",
"application/vnd.google-apps.jam") so they use the export endpoint instead of
alt=media, and modify the export logic to accept an optional exportMimeType
query parameter (fallback to "application/pdf") before encoding/exporting;
ensure the code that builds downloadUrl (using DRIVE_API, encodedFileId and
exportMimeType) uses the provided exportMimeType for spreadsheets when present
(e.g., XLSX) to avoid lossy defaults.
- Around line 119-177: The metadata and download fetches inside the
googleRoutes.get handler (the calls that set metadataResponse and response using
fetch with DRIVE_API and token) are unguarded and will throw on network/DNS
errors; wrap both upstream fetches (the metadata fetch that produces
metadata/mimeType and the download fetch using downloadUrl) in try/catch blocks
similar to the existing googleProxy guard so transient network errors are
caught, logged, and return a controlled JSON error (e.g., 503 with a clear
message) instead of bubbling as an unhandled 500; ensure you still check .ok on
responses inside the try and preserve returning the Response(response.body, {
headers }) when successful.
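The guard described above can be sketched as a small wrapper. `doFetch` is injectable here only so the sketch is testable; in the route it would simply be the global `fetch`, and the names are illustrative rather than the module's actual API:

```javascript
// Wrap an upstream fetch so network/DNS failures surface as a controlled
// 503 JSON-able error instead of an unhandled 500.
async function guardedFetch(url, options, doFetch = fetch) {
  try {
    const response = await doFetch(url, options);
    if (!response.ok) {
      // still check .ok inside the try, as the review suggests
      return { ok: false, status: response.status, error: `upstream ${response.status}` };
    }
    return { ok: true, response };
  } catch (err) {
    return { ok: false, status: 503, error: `Google API unreachable: ${err.message}` };
  }
}
```

Both the metadata fetch and the download fetch would go through this, with the success path still returning `new Response(response.body, { headers })`.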
🪄 Autofix (Beta)
❌ Autofix failed (check again to retry)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
Note: Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

The branch was updated while autofix was in progress. Please try again.
✅ Unit tests committed locally. Commit:
Note: Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Autofix skipped. No unresolved CodeRabbit review comments with fix instructions found.
Both this PR branch and PR #160 independently added tests/api/google-routes.test.js and tests/services/jwt-helper.test.js. Resolving conflicts by adopting the more comprehensive versions from origin/main (52 vs 41 tests for google-routes; real RSA crypto tests vs mocked for jwt-helper). The unique secret-rotation-gdrive.test.js remains as this PR's sole new contribution. Co-authored-by: @chitcommit <chitcommit@users.noreply.github.com>
Summary

Adds /api/google/gdrive/* and /api/google/email/* proxy routes to ChittyConnect, unblocking chittystorage source inventory and chittyevidence-db Gmail backfill.

Drive endpoints

- GET /api/google/gdrive/files — list files (q, fields, pageSize, pageToken)
- GET /api/google/gdrive/files/:id — file metadata
- GET /api/google/gdrive/files/:id/content — download file bytes (streams)

Gmail endpoints

- GET /api/google/email/messages — list messages (q, maxResults, pageToken)
- GET /api/google/email/messages/:id — get message (format: full/metadata/minimal/raw)
- GET /api/google/email/messages/:id/attachments/:aid — download attachment (base64url decode → binary)

Auth

Token priority: KV rotation cache (secret:gdrive:access_token, rotated every 50 min) → 1Password broker → GOOGLE_ACCESS_TOKEN env var. Same credential path as the existing /api/thirdparty/google/calendar/*.

Why

- /api/inventory/gdrive and /api/inventory/gmail need these endpoints
- emailFetcher and gdriveFetcher use CHITTYCONNECT_URL/gdrive/files/* and CHITTYCONNECT_URL/email/messages/*

Test plan

- GET /api/google/gdrive/files?q='root' in parents&pageSize=5 — verify Drive listing
- GET /api/google/email/messages?q=has:attachment&maxResults=1 — verify Gmail listing
- /api/inventory/gmail with maxMessages=5 — verify end-to-end

🤖 Generated with Claude Code
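The three-tier token priority in the Auth section can be sketched as a fallback chain. The cache/broker interfaces shown here are assumptions for illustration, not ChittyConnect's actual API:

```javascript
// Token source priority: KV rotation cache → 1Password broker → env var.
async function getGoogleToken(env, getCredential) {
  const cached = await env.CREDENTIAL_CACHE.get("secret:gdrive:access_token");
  if (cached) return cached;                      // fast path: rotated every 50 min

  const brokered = await getCredential("google"); // 1Password broker fallback
  if (brokered) return brokered;

  return env.GOOGLE_ACCESS_TOKEN || null;         // last-resort static env var
}
```

Ordering the cache first keeps the hot path to a single KV read per request.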