fix: require service auth for Ledger/Contextual tools and harden error handling (#39)
Conversation
…error handling

- Add requireServiceAuth gating to all 14 Ledger and 2 Contextual tool handlers so no unauthenticated requests reach downstream services.
- Add checkFetchError to 5 fact governance POST handlers that previously only caught non-JSON errors, silently passing 4xx/5xx JSON error bodies as success.
- Fix the last remaining isError-missing catch block in the chitty_fact_export JSON branch.
- Add 9 new tests covering auth failure isolation, header propagation, and error response format.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
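The gating pattern this commit describes can be sketched roughly as follows. This is a minimal reconstruction from the PR discussion, not the actual implementation: the names `requireServiceAuth` and `getServiceToken` mirror the PR, but the body and error shape are illustrative.

```javascript
// Minimal sketch of requireServiceAuth-style gating (reconstructed; the real
// implementation in src/mcp/tool-dispatcher.js may differ).
async function requireServiceAuth(getServiceToken, displayName) {
  try {
    const token = await getServiceToken();
    if (!token) throw new Error("no token returned");
    return { token, error: null };
  } catch (err) {
    // Normalize non-Error rejections so accessing .message cannot itself throw.
    const msg = err instanceof Error ? err.message : String(err);
    return {
      token: null,
      error: {
        isError: true,
        content: [
          {
            type: "text",
            text: `Authentication failed: Unable to retrieve service token for ${displayName} (${msg})`,
          },
        ],
      },
    };
  }
}

// Each gated handler returns early on auth failure, so no unauthenticated
// request ever reaches the downstream service:
//   const { token, error } = await requireServiceAuth(getServiceToken, "ChittyLedger");
//   if (error) return error;
//   const response = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
```

The early return is what the isolation tests exercise: when token retrieval fails, the handler never calls `fetch` at all.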
… coverage

- Extract parseJsonBody helper, eliminating 13 duplicated try/catch blocks (-133 lines)
- Migrate 4 inline !response.ok handlers to shared checkFetchError
- Add try-catch in requireServiceAuth for getServiceToken exceptions
- Fix evidence JSON parse in fact_mint: block minting on invalid JSON instead of silent null hash
- Fix PDF export path: use parseJsonBody instead of unguarded factResp.json()
- Simplify redundant else-if (!env.PROOF_Q) to else
- Consolidate 9 auth failure tests into it.each (+3 new: validate, dispute, export)
- Add tests: getServiceToken throw, non-JSON response, seal + PROOF_Q interaction, evidence JSON parse

177/177 tests pass.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Deploying with

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ✅ Deployment successful! View logs | chittyconnect | 82fc75b | Feb 26 2026, 11:41 AM |
📝 Walkthrough

Updates the MCP tool dispatcher to standardize JSON parsing error handling via the new parseJsonBody helper.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ToolDispatcher as Tool Dispatcher
    participant ServiceAuth as Service Auth
    participant APIServer as API Server
    Client->>ToolDispatcher: dispatchToolCall(toolName, args)
    ToolDispatcher->>ServiceAuth: requireServiceAuth()
    alt Auth Token Available
        ServiceAuth-->>ToolDispatcher: serviceToken
    else Auth Error
        ServiceAuth->>ServiceAuth: catch & log error
        ServiceAuth-->>ToolDispatcher: MCP error response
        ToolDispatcher-->>Client: authentication failure
    end
    alt Token Retrieved Successfully
        ToolDispatcher->>APIServer: fetch(endpoint, {headers: Authorization: Bearer svc-token})
        APIServer-->>ToolDispatcher: HTTP response
        alt Response OK
            ToolDispatcher->>ToolDispatcher: parseJsonBody(response)
            alt JSON Parse Success
                ToolDispatcher-->>Client: structured data
            else JSON Parse Failure
                ToolDispatcher-->>Client: MCP error (parse failed)
            end
        else Response Error
            ToolDispatcher-->>Client: MCP error (HTTP error)
        end
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

✅ Passed checks (2 passed)
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/mcp/tool-dispatcher.js (1)
Lines 1-10: ⚠️ Potential issue | 🟡 Minor. Address Prettier formatting issues flagged by CI.
The pipeline detected formatting inconsistencies. Run `npx prettier --write src/mcp/tool-dispatcher.js` to fix.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/tool-dispatcher.js` around lines 1 - 10, The file mcp/tool-dispatcher has Prettier formatting inconsistencies; run the formatter or apply Prettier rules to normalize whitespace and line breaks (e.g., run npx prettier --write on the module containing the Shared MCP Tool Dispatcher and the import of getServiceToken) so the file matches the project's Prettier config and CI will pass.
🧹 Nitpick comments (1)
src/mcp/tool-dispatcher.js (1)
Lines 885-893: Consider extending `parseJsonBody` to Finance tools for consistency. Finance handlers (lines 893, 903, 916, 929, 943, 962, 975, 988, 1003) still use direct `.json()` calls without the `parseJsonBody` error handling added to Ledger/Contextual tools. Non-JSON responses would throw unhandled exceptions here. This is outside the stated PR scope but worth addressing for consistency in a follow-up.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/tool-dispatcher.js` around lines 885 - 893, In the Finance branches (e.g., the handler when name === "chitty_finance_entities") replace direct response.json() calls with the existing parseJsonBody helper to avoid uncaught exceptions on non-JSON responses: after checkFetchError(...) call, call parseJsonBody(response, "ChittyFinance") and handle its return the same way Ledger/Contextual handlers do (check for an error return and return it if present, otherwise assign the parsed result to result). Use the same helper names present in this file (requireServiceAuth, checkFetchError, parseJsonBody) so all Finance tool handlers follow the same error-safe JSON parsing pattern.
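The migration this comment asks for can be sketched like so. This is a hypothetical stand-in: the `checkFetchError`/`parseJsonBody` bodies below are minimal reconstructions mirroring the names used in this PR, not the file's actual helpers, and the handler is reduced to its error-handling skeleton.

```javascript
// Minimal stand-in for the shared fetch-error guard (illustrative).
async function checkFetchError(response, label) {
  if (response.ok) return null;
  const body = await response.text().catch(() => "");
  return {
    isError: true,
    content: [
      { type: "text", text: `${label} error (${response.status}): ${body}`.slice(0, 300) },
    ],
  };
}

// Minimal stand-in for the shared JSON parser (illustrative).
async function parseJsonBody(response, label) {
  const text = await response.text();
  try {
    return { parsed: JSON.parse(text), error: null };
  } catch {
    return {
      parsed: null,
      error: {
        isError: true,
        content: [
          { type: "text", text: `${label} returned non-JSON (${response.status}): ${text.slice(0, 200)}` },
        ],
      },
    };
  }
}

// One migrated Finance branch: guard on HTTP status first, then on JSON validity,
// so neither a 5xx nor an HTML error page can throw an unhandled exception.
async function handleFinanceEntities(response) {
  const fetchErr = await checkFetchError(response, "ChittyFinance");
  if (fetchErr) return fetchErr;
  const { parsed, error } = await parseJsonBody(response, "ChittyFinance");
  if (error) return error;
  return { content: [{ type: "text", text: JSON.stringify(parsed) }] };
}
```

Note the body is only read once per path: `checkFetchError` reads it only on non-OK responses, and `parseJsonBody` only on OK ones, so a real `Response` stream is never consumed twice.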
Pull request overview
This PR hardens MCP tool handlers that proxy to downstream Chitty services by enforcing service-token auth and standardizing error/JSON handling, plus a small docs frontmatter correction.
Changes:
- Require service auth (`requireServiceAuth`) for ChittyLedger and ChittyContextual tool handlers and add tests ensuring no downstream fetch occurs when token retrieval fails.
- Deduplicate response parsing via `parseJsonBody` and add consistent `checkFetchError` guarding for non-OK upstream responses.
- Update CHITTY.md frontmatter classification (`summary` → `architecture`) and URI path.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| src/mcp/tool-dispatcher.js | Adds requireServiceAuth enforcement and centralized fetch error + JSON parsing helpers across Ledger/Contextual tools. |
| tests/mcp/tool-dispatcher.test.js | Updates expectations for auth headers/errors and adds isolation tests for missing/throwing service-token scenarios. |
| CHITTY.md | Corrects canon frontmatter uri path segment and type. |
```js
return {
  error: {
    content: [
      {
        type: "text",
        text: `Authentication failed: Unable to retrieve service token for ${displayName} (${err.message})`,
```
In requireServiceAuth, the catch block interpolates err.message. If getServiceToken rejects with a non-Error (or null/undefined), accessing .message can itself throw and bypass the intended structured auth error. Consider normalizing the message (e.g., err instanceof Error ? err.message : String(err)) before building the response text.
```diff
-return {
-  error: {
-    content: [
-      {
-        type: "text",
-        text: `Authentication failed: Unable to retrieve service token for ${displayName} (${err.message})`,
+const safeMessage = err instanceof Error ? err.message : String(err);
+return {
+  error: {
+    content: [
+      {
+        type: "text",
+        text: `Authentication failed: Unable to retrieve service token for ${displayName} (${safeMessage})`,
```
```js
const text = await response.text();
try {
  return { parsed: JSON.parse(text), error: null };
} catch {
  return {
    parsed: null,
    error: {
      content: [
        {
          type: "text",
          text: `${label} returned non-JSON (${response.status}): ${text.slice(0, 200)}`,
```
parseJsonBody does const text = await response.text(); outside the try/catch. If response.text() rejects (stream/body read error), it will throw up to the outer dispatcher catch and return the generic Error executing ... instead of the intended structured MCP parse error. Wrap the response.text() read in the try/catch (or handle its rejection) so non-readable bodies still produce a consistent MCP error response.
```diff
-const text = await response.text();
-try {
-  return { parsed: JSON.parse(text), error: null };
-} catch {
-  return {
-    parsed: null,
-    error: {
-      content: [
-        {
-          type: "text",
-          text: `${label} returned non-JSON (${response.status}): ${text.slice(0, 200)}`,
+let text;
+try {
+  text = await response.text();
+  return { parsed: JSON.parse(text), error: null };
+} catch {
+  const preview =
+    typeof text === "string" && text.length > 0
+      ? text.slice(0, 200)
+      : "Unable to read or parse response body";
+  return {
+    parsed: null,
+    error: {
+      content: [
+        {
+          type: "text",
+          text: `${label} returned non-JSON (${response.status}): ${preview}`,
```
```js
// Fetch fact with proof data
const factResp = await fetch(
  `https://ledger.chitty.cc/api/facts/${encodeURIComponent(args.fact_id)}/export`,
  { headers: ledgerAuth },
);
if (!factResp.ok) {
  return {
```
In the PDF export path, the non-OK handling for factResp treats any failure as “fact not found” (even for 5xx/timeouts). This can be misleading and makes diagnosing upstream issues harder. Consider reusing checkFetchError(factResp, "ChittyLedger") (or otherwise surfacing status/body) so non-404 errors are reported accurately.
```diff
 const evidenceCheck = await fetch(
   `https://ledger.chitty.cc/api/evidence/${encodeURIComponent(args.evidence_id)}`,
   { headers: ledgerAuth },
 );
 if (!evidenceCheck.ok) {
   return {
     content: [
       {
         type: "text",
-        text: `ChittyLedger error (${response.status}): ${await response.text().catch(() => "")}`.slice(
-          0,
-          300,
-        ),
+        text: `Fact minting blocked: evidence_id "${args.evidence_id}" not found in ChittyLedger (${evidenceCheck.status}). Evidence must be ingested through the pipeline before facts can be minted from it.`,
       },
```
chitty_fact_mint pre-flight treats any !evidenceCheck.ok as “evidence_id not found” and instructs ingestion. For non-404 responses (e.g., 500/503), this will misreport upstream failures as missing evidence. Consider branching on evidenceCheck.status === 404 for the “not found” message, and otherwise returning a checkFetchError(evidenceCheck, "ChittyLedger")-style error.
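The suggested branching could look roughly like this. The function name and message text are illustrative reconstructions from the review comment, not code from the repository:

```javascript
// Sketch of 404-aware handling for the evidence pre-flight check: only a true
// 404 should produce the "ingest your evidence" guidance, while other statuses
// are surfaced as upstream failures.
function describeEvidenceCheckFailure(status, evidenceId) {
  if (status === 404) {
    // Evidence genuinely missing: instruct ingestion.
    return `Fact minting blocked: evidence_id "${evidenceId}" not found in ChittyLedger (404). Evidence must be ingested through the pipeline before facts can be minted from it.`;
  }
  // 5xx/timeouts are upstream problems, not missing evidence.
  return `ChittyLedger error (${status}) while verifying evidence_id "${evidenceId}"; retry or check upstream health.`;
}
```

This keeps the helpful "not found" guidance without misreporting a 503 outage as missing evidence.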
Code Review

Overall this is a solid security hardening pass. The auth coverage gap on 16 ChittyLedger and 2 ChittyContextual tools was a real risk and these changes close it cleanly. 177/177 tests is a good signal.

What's well done

- Fact seal simplification — Removing the redundant `else-if (!env.PROOF_Q)` branch.

Issues and suggestions

1. The diff removes the seal handler's fetch-error guard; the expected shape is:

```js
const fetchErr = await checkFetchError(response, "ChittyLedger seal");
if (fetchErr) return fetchErr;
// then PROOF_Q logic
```

2. Pre-flight evidence checks only validate HTTP status: in the batch ledger validate path, pre-flight evidence checks return …
3. Error message truncation at 200 chars may drop critical debug context: 200 characters is tight for …
4. Test gap: the PR mentions 12 auth failure isolation tests (9 original + 3 new). The new tests cover …
5. Minor: CHITTY.md URI type — confirm the canon registry accepts the new value.

Summary

Approve pending confirmation of item #1 (seal handler guard). The rest are non-blocking suggestions.

🤖 Generated with Claude Code
…ion tests

- Extract checkAndParseJson combining checkFetchError + parseJsonBody (-21 lines)
- Consistent error label naming (ChittyLedger vs Ledger)
- Add 4 missing tools to auth isolation it.each (facts, contradictions, verify, statistics)
- Add PROOF_Q.send throw test (catch branch)
- Rename ledgerCases to serviceAuthCases for clarity
- Fix it.each description interpolation

182/182 tests pass.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Code Review

This is a solid, well-scoped PR. The core security fix is important — 18 tool handlers were making unauthenticated requests to downstream services, and the approach to fixing this is clean and consistent. The refactoring work is a genuine improvement, not cleanup for its own sake.

Security

Good catch on the auth bypass. All 16 ChittyLedger and 2 ChittyContextual handlers now properly gate on service token availability before making any fetch. One thing worth a comment inline: …

Bug Fixes

- PROOF_Q logic simplification (…)

Code Quality

- Minor inconsistency in …
- Label inconsistency:

```js
const { parsed: factData, error: factParseErr } = await parseJsonBody(factResp, 'Ledger');
```

This uses 'Ledger' where other call sites label errors 'ChittyLedger'.

Breaking Change (intentional, worth noting)

Several tools changed their error response format. This is correct for MCP compliance. Any callers outside this codebase that were checking the old error shape will need updating.

Test Coverage

177/177 is good. The new tests are well-structured. One note: the evidence invalid JSON test mocks …

Summary

The security fix is real and the approach is correct. Two small items worth addressing before merge; neither is a blocker. The rest is well done.

Generated with Claude Code
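The error-format change called out in the reviews can be illustrated with a small shape check. Both shapes below are reconstructions from the review discussion, not code from the repository:

```javascript
// Hypothetical old shape: a bare string error field (reconstructed).
const oldStyleError = { error: "Fact not found" };

// New MCP-style shape: content array plus isError flag, as the PR describes.
const newStyleError = {
  isError: true,
  content: [{ type: "text", text: "ChittyLedger error (404): Fact not found" }],
};

// Callers migrating from the old shape should detect errors via isError,
// not by checking for a string `error` property.
const isMcpError = (r) => r.isError === true && Array.isArray(r.content);
```

Any external caller still checking the old shape would silently treat the new structured errors as successes, which is why the reviews flag this as an intentional breaking change.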
Summary
- Add `requireServiceAuth` to all 16 ChittyLedger and 2 ChittyContextual tool handlers, preventing unauthenticated requests to downstream services
- Extract the `parseJsonBody` helper to deduplicate 13 identical try/catch JSON parsing blocks
- Add `checkFetchError` guards to 5 fact governance POST handlers that previously had no `response.ok` check
- Add try/catch in `requireServiceAuth` to catch thrown exceptions from `getServiceToken` (1Password timeouts, etc.)
- Fix `chitty_fact_mint`: block minting on invalid JSON instead of silently anchoring a null hash
- Fix the PDF export path: use `parseJsonBody` instead of unguarded `factResp.json()`
- Update CHITTY.md frontmatter classification from `summary` to `architecture`

Test plan

- All tests pass (`npm test`)
- `getServiceToken` throw test confirms structured error instead of generic catch
- Non-JSON response test confirms `isError: true` on HTML from ledger
- Seal + PROOF_Q interaction test confirms `proof_queue_warning` in the response

🤖 Generated with Claude Code
Summary by CodeRabbit
Bug Fixes
Documentation
Tests