Summary
Where does tool parameters JSON Schema validation happen, and how deep does it go? This affects consumer architecture decisions for typed tools that aim to use the schema as the source of truth for input correctness.
Specific questions
- Where does validation happen? The Node SDK (`nodejs/src/client.ts`) passes `parameters` verbatim to RPC — no client-side validation, no JSON Schema validator imported. Validation, if any, must happen in the CLI binary or at the model layer. Where exactly?
- How deep does it go? For a schema like:
```json
{
  "type": "object",
  "required": ["topIssues", "summary"],
  "additionalProperties": false,
  "properties": {
    "topIssues": {
      "type": "array",
      "items": { "type": "object", "required": ["issueId", "severity"], "properties": {} }
    },
    "summary": { "type": "string" }
  }
}
```
Does the runtime reject:
- Wrong top-level key? (`{top_issues: [...]}` instead of `{topIssues: [...]}`)
- Missing top-level required field? (`{topIssues: [...]}` missing `summary`)
- Extra top-level key when `additionalProperties: false`?
- Wrong item-level shape? (`topIssues: [{title: "foo"}]` — missing required `issueId` and `severity`)
- Wrong nested type?
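For reference, independent of where (or whether) the runtime enforces this, the rejection semantics a full JSON Schema validator would apply to the cases above can be sketched with a minimal checker. This is a hypothetical helper, not SDK code, and it covers only the keywords used in the example schema (`type`, `required`, `additionalProperties`, `items`, `properties`):

```typescript
// Minimal deep JSON Schema checker (illustrative sketch only).
type Schema = {
  type?: string;
  required?: string[];
  additionalProperties?: boolean;
  properties?: Record<string, Schema>;
  items?: Schema;
};

function validate(schema: Schema, value: unknown, path = "$"): string[] {
  const errors: string[] = [];
  if (schema.type === "object") {
    if (typeof value !== "object" || value === null || Array.isArray(value)) {
      return [`${path}: expected object`];
    }
    const obj = value as Record<string, unknown>;
    // "required" is checked per-object, at every depth.
    for (const key of schema.required ?? []) {
      if (!(key in obj)) errors.push(`${path}: missing required "${key}"`);
    }
    // "additionalProperties: false" rejects keys with no schema entry.
    for (const [key, v] of Object.entries(obj)) {
      const sub = schema.properties?.[key];
      if (sub) errors.push(...validate(sub, v, `${path}.${key}`));
      else if (schema.additionalProperties === false) {
        errors.push(`${path}: unexpected key "${key}"`);
      }
    }
  } else if (schema.type === "array") {
    if (!Array.isArray(value)) return [`${path}: expected array`];
    // "items" applies the sub-schema to every element.
    if (schema.items) {
      value.forEach((v, i) =>
        errors.push(...validate(schema.items!, v, `${path}[${i}]`))
      );
    }
  } else if (schema.type === "string" && typeof value !== "string") {
    errors.push(`${path}: expected string`);
  }
  return errors;
}

// The example schema from above:
const reportSchema: Schema = {
  type: "object",
  required: ["topIssues", "summary"],
  additionalProperties: false,
  properties: {
    topIssues: {
      type: "array",
      items: { type: "object", required: ["issueId", "severity"], properties: {} },
    },
    summary: { type: "string" },
  },
};
```

Under these semantics all five bullet cases are rejected (the wrong-key case fails three ways at once: two missing required fields plus one unexpected key). The question is whether the CLI applies anything like this, or only top-level shape.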
- What happens on failure? Does the orchestrator get re-prompted with the validation error so it can self-correct? Or does the tool call just fail silently? The `TaskCompleteData.success` field has a comment "False when validation failed (e.g., invalid arguments)", suggesting validation surfaces somewhere, but the consumer-visible behavior isn't documented.
Why this matters
Consumers migrating from generic `writeArtifact({path, content: "..."})` patterns to typed tools depend on what the boundary enforces. If item-level shape IS validated, the LLM simply cannot hand the handler a malformed inner structure. If it isn't, handler-side item-level checks remain necessary and the "typed tool" claim is only partial.
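The handler-side checks that remain necessary in the unvalidated case can be as small as a type guard. A minimal sketch, assuming the example schema above; the `TopIssue`/`ReportArgs` names are ours, not anything from the SDK:

```typescript
// Shapes mirroring the example schema (hypothetical names).
interface TopIssue { issueId: string; severity: string; }
interface ReportArgs { topIssues: TopIssue[]; summary: string; }

// Type guard enforcing item-level shape in the handler, in case the
// runtime only checks (at most) the top level.
function isReportArgs(v: unknown): v is ReportArgs {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    typeof o.summary === "string" &&
    Array.isArray(o.topIssues) &&
    o.topIssues.every(
      (it) =>
        typeof it === "object" && it !== null &&
        typeof (it as { issueId?: unknown }).issueId === "string" &&
        typeof (it as { severity?: unknown }).severity === "string"
    )
  );
}
```

If deep validation turns out to be enforced by the runtime, this guard becomes dead weight; that is exactly why the answer matters for the migration.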
For us specifically: we have ~6 LLM-driven agents currently writing terminal artifacts through a generic `writeArtifact` shape, with the schema living only in the prompt. Roughly 30-40% of runs drift to wrong shapes (snake_case vs camelCase keys, renamed wrappers, etc.), which costs $15-30 per wasted phase. Migrating to typed phase tools is on the roadmap; the value of that migration depends on the answers above.
Evidence (SDK source — partial answer)
`nodejs/src/client.ts`: tool parameters are converted via `toJsonSchema()` (which calls `parameters.toJSONSchema()` for Zod schemas or passes JSON Schema objects through unchanged). No client-side validation. No JSON Schema validator (ajv, zod-validate, etc.) is imported.
Confirmed: the Node SDK does no validation. Validation, if any, is entirely in the CLI binary (which is not source-readable).
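To make the pass-through concrete, here is a hedged reconstruction of the conversion behavior just described. This is our reading of the observed behavior, not the SDK's actual source:

```typescript
// Sketch of the pass-through described above: anything exposing a
// toJSONSchema() method (e.g. a Zod schema) is converted; plain JSON
// Schema objects flow through unchanged. Nothing here validates the
// model's arguments against the resulting schema.
function toJsonSchema(parameters: unknown): unknown {
  const p = parameters as { toJSONSchema?: () => unknown };
  return typeof p?.toJSONSchema === "function" ? p.toJSONSchema() : parameters;
}
```

The consequence: whatever keywords you put in the schema (`required`, `additionalProperties`, nested `items`) reach the CLI intact, so the open question is entirely about what the CLI does with them.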
What we'd like
A docs page or release note clarifying: (a) where validation happens (SDK / CLI / model layer), (b) what shape it covers (top-level only / item-level / arbitrarily deep), (c) what happens on failure (re-prompt / silent / error).
If item-level shape isn't validated today, that promotes this from a question to a feature request: deep JSON Schema enforcement against `parameters`.
Environment
- SDK: @github/copilot-sdk@0.3.0
- CLI: @github/copilot@1.0.45
- Node: 22 LTS
- OS: Windows 11
- Model: claude-sonnet-4-6