
copilot_mcp_server_name field leaks into tools[] in outbound chat-completion requests, breaking strict OpenAI-compatible providers (e.g. Gemini) #1129

@siarheidudko

Description


TL;DR

@github/copilot@1.0.34 (and @github/copilot-sdk@0.2.2 when it spawns the bundled CLI) attaches a non-standard copilot_mcp_server_name field next to each MCP tool in the tools array it sends to the model provider. OpenAI silently ignores unknown fields, so nobody notices. But strict OpenAI-compatible endpoints — notably Google Gemini's generativelanguage.googleapis.com/v1beta/openai — reject the whole request with 400 Bad Request. The upstream error body is consumed by the CLI's retry logic, so the user only sees CAPIError: 400 400 status code (no body) with no hint of the real cause.

Environment

  • @github/copilot: 1.0.34
  • @github/copilot-sdk: 0.2.2
  • Node.js: v24.12.0
  • OS: macOS (Darwin 24.6.0)
  • Provider: BYOK, type: "openai", baseUrl: "https://generativelanguage.googleapis.com/v1beta/openai"
  • Model: gemini-2.5-flash-lite
  • MCP server: remote HTTP, tools exposed via tools: ["*"]

Reproduction

Minimal SDK-driven session with any BYOK OpenAI-compatible provider that does strict schema validation and at least one MCP server:

import { CopilotClient, approveAll } from "@github/copilot-sdk";

const client = new CopilotClient();
const session = await client.createSession({
  onPermissionRequest: approveAll,
  model: "gemini-2.5-flash-lite",
  provider: {
    type: "openai",
    baseUrl: "https://generativelanguage.googleapis.com/v1beta/openai",
    apiKey: process.env.GEMINI_API_KEY!,
  },
  mcpServers: {
    demo: {
      type: "http",
      url: "<any remote MCP with a handful of tools>",
      tools: ["*"],
      headers: { Authorization: `Bearer ${process.env.MCP_TOKEN}` },
    },
  },
  tools: [],
});

await session.sendAndWait({ prompt: "hi" });

Expected

A normal chat turn. The same request, sent directly to Gemini's endpoint with the same 72 tools but without copilot_mcp_server_name, returns 200 and a correct tool_calls response.

Actual

Three retries, then the session surfaces:

CAPIError: 400 400 status code (no body)

Root cause

Captured the outbound request from the CLI subprocess via a localhost reverse proxy. Each MCP-backed tool in the tools array has an extra sibling field copilot_mcp_server_name:

{
  "type": "function",
  "function": {
    "name": "demo-list_companies",
    "description": "",
    "parameters": { "type": "object", "properties": {} }
  },
  "copilot_mcp_server_name": "demo"   // ← non-standard, at tool level (not inside function)
}

Gemini's OpenAI-compat endpoint responds (the body the CLI retries-then-drops):

HTTP/1.1 400 Bad Request

{
  "error": {
    "code": 400,
    "message": "Invalid JSON payload received. Unknown name \"copilot_mcp_server_name\" at 'tools[17]': Cannot find field.
                Invalid JSON payload received. Unknown name \"copilot_mcp_server_name\" at 'tools[18]': Cannot find field.
                …
               "
  }
}

Verified by diffing:

  1. 72 tools including copilot_mcp_server_name → 400 (CLI's payload as-is)
  2. Same 72 tools with copilot_mcp_server_name stripped → 200, correct tool call returned

So the non-standard field is the sole cause.

The field appears to be added for CLI-internal bookkeeping (routing tool calls back to the owning MCP client). Evidence that it comes from the same codebase (@github/copilot@1.0.34, app.js):

  • "copilot_mcp_server_name": "demo" appears 72× in Tools: debug dump.
  • Corresponding builtin tools (bash, edit, …) do not carry this field — only MCP-backed ones do.

Why the error is opaque to users

Three compounding issues make this hard to diagnose:

  1. The CLI's retry wrapper replaces the upstream response body with the literal string "400 status code (no body)" before surfacing CAPIError. The actual upstream body (which does name the offending field) never reaches the user or the session.error event.
  2. session.create with a valid provider.baseUrl config still triggers a misleading warning:
    [WARNING] Found COPILOT_PROVIDER_TYPE without COPILOT_PROVIDER_BASE_URL.
              Provider configuration will be ignored. Set COPILOT_PROVIDER_BASE_URL to enable BYOK mode.
    
    This is read from environment variables and is unrelated to the RPC-supplied provider, but it misdirects debugging toward provider config.
  3. The request body itself is never logged at --log-level debug; only Client options and Request options metadata are. Reproducing the failing payload requires a MITM proxy.

Suggested fix

Primary: do not emit any non-standard keys at the tool-entry level. Keep copilot_mcp_server_name in an internal side-table keyed by tool.function.name (or similar) so the tools array sent to the provider is a plain OpenAI ChatCompletionTool[].

Sketch (search app.js for the serializer that prepares the tools array before openai.chat.completions.create):

// BEFORE
const tools = allTools.map(t => ({
  type: "function",
  function: t.function,
  copilot_mcp_server_name: t.mcpServerName, // leaks into HTTP body
}));

// AFTER
const tools = allTools.map(t => ({ type: "function", function: t.function }));
const mcpOwnership = new Map(
  allTools
    .filter(t => t.mcpServerName)
    .map(t => [t.function.name, t.mcpServerName]),
);
// use mcpOwnership for routing tool_calls responses back to the MCP client

Secondary (defensive): before each chat.completions.create, strip any non-OpenAI fields at the tool level (allowlist = { type, function }). Cheap and makes the shim robust against future bookkeeping additions.
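A minimal sketch of that allowlist strip; the type and function names here are illustrative, not the CLI's actual internals:

```typescript
// Sketch of a defensive allowlist for outbound tool entries.
// Only the fields defined by OpenAI's ChatCompletionTool schema survive.
type OpenAIToolEntry = {
  type: "function";
  function: { name: string; description?: string; parameters?: object };
};

function sanitizeToolEntries(
  tools: Array<Record<string, unknown>>,
): OpenAIToolEntry[] {
  // Rebuild each entry from scratch instead of deleting known-bad keys,
  // so future bookkeeping fields can never leak into the HTTP body.
  return tools.map((t) => ({
    type: "function",
    function: t.function as OpenAIToolEntry["function"],
  }));
}
```

Rebuilding from an allowlist (rather than deleting a blocklist of known fields) is what makes this robust against the "future bookkeeping additions" case.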

Tertiary (observability): when CAPIError wraps a 400, include upstreamResponseBody on the error and forward it in the session.error event's message. The current "no body" fallback should only fire when the upstream body is genuinely empty — not when it was simply not captured.
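A hypothetical sketch of the shape this could take; the real CAPIError lives inside the bundled app.js and its actual constructor signature is not public, so this only illustrates the fallback behavior being asked for:

```typescript
// Illustrative only: preserve the upstream body on the error, and use the
// "(no body)" fallback solely when the body is genuinely empty.
class CAPIErrorSketch extends Error {
  constructor(
    public readonly status: number,
    public readonly upstreamResponseBody: string | undefined,
  ) {
    super(
      upstreamResponseBody
        ? `${status} ${upstreamResponseBody}`
        : `${status} status code (no body)`,
    );
  }
}
```

With this shape, the Gemini 400 body naming the offending field would reach session.error verbatim instead of being swallowed by the retry wrapper.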

Workaround (until fixed)

A local reverse proxy between the CLI and the provider that strips copilot_mcp_server_name from every tool entry before forwarding. Full working harness and sanitizing proxy: src/sanitizing-proxy.ts. Toggle via COPILOT_SANITIZE_TOOLS=true (default in this repo).

The core of the sanitizer is just a few lines:

if (Array.isArray(body.tools)) {
  body.tools = body.tools.map(({ copilot_mcp_server_name, ...clean }) => clean);
}

With the sanitizer in place, the exact same SDK config returns a valid tool_calls response against Gemini and the agentic loop proceeds normally. Without it, every turn fails with 400 (no body).
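For reference, here is a minimal sketch of how that strip might be wired into a standalone local proxy using Node's built-in http module. The port, handler details, and function names are illustrative and not the src/sanitizing-proxy.ts mentioned above:

```typescript
import * as http from "node:http";

// Pure body transform, kept separate so it can be tested in isolation:
// drop copilot_mcp_server_name from every entry in body.tools.
function stripMcpServerName(payload: any): any {
  if (Array.isArray(payload?.tools)) {
    payload.tools = payload.tools.map(
      ({ copilot_mcp_server_name, ...clean }: Record<string, unknown>) => clean,
    );
  }
  return payload;
}

// Minimal forwarding server. Call startProxy() and point the provider's
// baseUrl at http://localhost:8787 instead of the Gemini endpoint.
function startProxy(
  upstreamBase = "https://generativelanguage.googleapis.com/v1beta/openai",
) {
  return http
    .createServer(async (req, res) => {
      const chunks: Buffer[] = [];
      for await (const chunk of req) chunks.push(chunk as Buffer);
      let body = Buffer.concat(chunks).toString("utf8");
      try {
        body = JSON.stringify(stripMcpServerName(JSON.parse(body)));
      } catch {
        // Non-JSON bodies pass through untouched.
      }
      const upstream = await fetch(upstreamBase + (req.url ?? ""), {
        method: req.method,
        headers: {
          "content-type": "application/json",
          authorization: String(req.headers.authorization ?? ""),
        },
        body: req.method === "POST" ? body : undefined,
      });
      res.writeHead(upstream.status, { "content-type": "application/json" });
      res.end(await upstream.text());
    })
    .listen(8787);
}
```

Keeping the transform pure means the same function can be reused as a pre-flight hook if the CLI ever exposes a request-middleware option.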

Related

Likely affects any strict OpenAI-compatible provider that rejects unknown fields:

  • Google Gemini (generativelanguage.googleapis.com/v1beta/openai) — confirmed
  • Probably: Azure OpenAI with strict mode, some self-hosted vLLM / Ollama gateways, locally-served OpenAI shims

Does not affect:

  • Official OpenAI API (ignores unknown fields silently)
  • Anthropic (separate type: "anthropic" wire format, different code path)
  • GitHub's own Copilot API (not BYOK)
