copilot_mcp_server_name field leaks into tools[] in outbound chat-completion requests, breaking strict OpenAI-compatible providers (e.g. Gemini)
TL;DR
`@github/copilot@1.0.34` (and `@github/copilot-sdk@0.2.2` when it spawns the bundled CLI) attaches a non-standard `copilot_mcp_server_name` field next to each MCP tool in the `tools` array it sends to the model provider. OpenAI silently ignores unknown fields, so nobody notices. But strict OpenAI-compatible endpoints — notably Google Gemini's `generativelanguage.googleapis.com/v1beta/openai` — reject the whole request with `400 Bad Request`. The upstream error body is consumed by the CLI's retry logic, so the user only sees `CAPIError: 400 400 status code (no body)` with no hint of the real cause.
Environment
- `@github/copilot`: 1.0.34
- `@github/copilot-sdk`: 0.2.2
- Node.js: v24.12.0
- OS: macOS (Darwin 24.6.0)
- Provider: BYOK, `type: "openai"`, `baseUrl: "https://generativelanguage.googleapis.com/v1beta/openai"`
- Model: `gemini-2.5-flash-lite`
- MCP server: remote HTTP, tools exposed via `tools: ["*"]`
Reproduction
Minimal SDK-driven session with any BYOK OpenAI-compatible provider that does strict schema validation and at least one MCP server:
```ts
import { CopilotClient, approveAll } from "@github/copilot-sdk";

const client = new CopilotClient();
const session = await client.createSession({
  onPermissionRequest: approveAll,
  model: "gemini-2.5-flash-lite",
  provider: {
    type: "openai",
    baseUrl: "https://generativelanguage.googleapis.com/v1beta/openai",
    apiKey: process.env.GEMINI_API_KEY!,
  },
  mcpServers: {
    demo: {
      type: "http",
      url: "<any remote MCP with a handful of tools>",
      tools: ["*"],
      headers: { Authorization: `Bearer ${process.env.MCP_TOKEN}` },
    },
  },
  tools: [],
});
await session.sendAndWait({ prompt: "hi" });
```
Expected
A normal chat turn. The same request, sent directly to Gemini's endpoint with the same 72 tools but without `copilot_mcp_server_name`, returns `200` and a correct `tool_calls` response.
Actual
Three retries, then the session surfaces:

```
CAPIError: 400 400 status code (no body)
```
Root cause
Captured the outbound request from the CLI subprocess via a localhost reverse proxy. Each MCP-backed tool in the `tools` array has an extra sibling field `copilot_mcp_server_name`:

```jsonc
{
  "type": "function",
  "function": {
    "name": "demo-list_companies",
    "description": "…",
    "parameters": { "type": "object", "properties": {} }
  },
  "copilot_mcp_server_name": "demo" // ← non-standard, at tool level (not inside function)
}
```
Gemini's OpenAI-compat endpoint responds (the body the CLI retries-then-drops):
```
HTTP/1.1 400 Bad Request

{
  "error": {
    "code": 400,
    "message": "Invalid JSON payload received. Unknown name \"copilot_mcp_server_name\" at 'tools[17]': Cannot find field.
Invalid JSON payload received. Unknown name \"copilot_mcp_server_name\" at 'tools[18]': Cannot find field.
…"
  }
}
```
Verified by diffing:
- 72 tools including `copilot_mcp_server_name` → 400 (CLI's payload as-is)
- Same 72 tools with `copilot_mcp_server_name` stripped → 200, correct tool call returned
So the non-standard field is the sole cause.
The field appears to be added for CLI-internal bookkeeping (routing tool calls back to the owning MCP client). Evidence from the same codebase (`@github/copilot@1.0.34`, `app.js`):
- `"copilot_mcp_server_name": "demo"` appears 72× in the `Tools:` debug dump.
- Corresponding builtin tools (`bash`, `edit`, …) do not carry this field — only MCP-backed ones do.
Why the error is opaque to users
Three compounding issues make this hard to diagnose:
- The CLI's retry wrapper replaces the upstream response body with the literal string `"400 status code (no body)"` before surfacing `CAPIError`. The actual upstream body (which does name the offending field) never reaches the user or the `session.error` event.
- `session.create` with a valid `provider.baseUrl` config still triggers a misleading warning:

  ```
  [WARNING] Found COPILOT_PROVIDER_TYPE without COPILOT_PROVIDER_BASE_URL.
  Provider configuration will be ignored. Set COPILOT_PROVIDER_BASE_URL to enable BYOK mode.
  ```

  This is read from environment variables and is unrelated to the RPC-supplied provider, but it misdirects debugging toward provider config.
- The request body itself is never logged at `--log-level debug`; only `Client options` and `Request options` metadata are. Reproducing the failing payload requires a MITM proxy.
Suggested fix
Primary: do not emit any non-standard keys at the tool-entry level. Keep `copilot_mcp_server_name` in an internal side-table keyed by `tool.function.name` (or similar) so the `tools` array sent to the provider is a plain OpenAI `ChatCompletionTool[]`.
Sketch (search `app.js` for the serializer that prepares the `tools` array before `openai.chat.completions.create`):
```js
// BEFORE
const tools = allTools.map(t => ({
  type: "function",
  function: t.function,
  copilot_mcp_server_name: t.mcpServerName, // leaks into HTTP body
}));

// AFTER
const tools = allTools.map(t => ({ type: "function", function: t.function }));
const mcpOwnership = new Map(
  allTools
    .filter(t => t.mcpServerName)
    .map(t => [t.function.name, t.mcpServerName]),
);
// use mcpOwnership for routing tool_calls responses back to the MCP client
```
Secondary (defensive): before each `chat.completions.create`, strip any non-OpenAI fields at the tool level (allowlist = `{ type, function }`). This is cheap and makes the shim robust against future bookkeeping additions.
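The allowlist strip could look something like this (a sketch only; `toOpenAITools` and the `LooseTool` shape are illustrative names, not the CLI's actual internals):

```typescript
// Hypothetical allowlist filter (names are illustrative, not from app.js):
// keep only the keys OpenAI's ChatCompletionTool schema defines.
type LooseTool = { type: string; function: unknown; [extra: string]: unknown };
type OpenAITool = { type: string; function: unknown };

const ALLOWED_TOOL_KEYS = new Set(["type", "function"]);

function toOpenAITools(tools: LooseTool[]): OpenAITool[] {
  return tools.map(
    tool =>
      Object.fromEntries(
        Object.entries(tool).filter(([key]) => ALLOWED_TOOL_KEYS.has(key)),
      ) as OpenAITool,
  );
}
```

Running every outbound `tools` array through a filter like this keeps the wire format valid even if more internal bookkeeping fields are added later.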
Tertiary (observability): when `CAPIError` wraps a `400`, include `upstreamResponseBody` on the error and forward it in the `session.error` event's `message`. The current "no body" fallback should only fire when the upstream body is genuinely empty — not when it was simply not captured.
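A minimal sketch of that fallback logic (assuming a `CAPIError`-like class; `ProviderHTTPError` and `wrapHttpError` are hypothetical names, and the CLI's real constructor signature may differ):

```typescript
// Illustrative only: preserve the upstream body on the error instead of
// replacing it with the "(no body)" placeholder during retries.
class ProviderHTTPError extends Error {
  constructor(
    public status: number,
    message: string,
    public upstreamResponseBody?: string,
  ) {
    super(message);
  }
}

function wrapHttpError(status: number, body?: string): ProviderHTTPError {
  // Only claim "no body" when the upstream body is genuinely empty.
  const detail = body && body.length > 0 ? body : "(no body)";
  return new ProviderHTTPError(status, `${status} ${detail}`, body);
}
```

With the body carried on the error, the `session.error` event can surface Gemini's "Unknown name" message directly instead of the opaque placeholder.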
Workaround (until fixed)
Run a local reverse proxy between the CLI and the provider that strips `copilot_mcp_server_name` from every tool entry before forwarding. Full working harness and sanitizing proxy: `src/sanitizing-proxy.ts`. Toggle via `COPILOT_SANITIZE_TOOLS=true` (default in this repo).
The core of the sanitizer is ~5 lines:
```js
if (Array.isArray(body.tools)) {
  body.tools = body.tools.map(({ copilot_mcp_server_name, ...clean }) => clean);
}
```
With the sanitizer in place, the exact same SDK config returns a valid `tool_calls` response against Gemini and the agentic loop proceeds normally. Without it, every turn fails with `400 (no body)`.
Related
Likely affects any strict OpenAI-compatible provider that rejects unknown fields:
- Google Gemini (`generativelanguage.googleapis.com/v1beta/openai`) — confirmed
- Probably: Azure OpenAI with strict mode, some self-hosted vLLM / Ollama gateways, locally-served OpenAI shims
Does not affect:
- Official OpenAI API (ignores unknown fields silently)
- Anthropic (separate `type: "anthropic"` wire format, different code path)
- GitHub's own Copilot API (not BYOK)