feat: add github copilot provider #230


Status: Open · wants to merge 2 commits into main
Conversation

bryanvaz

@bryanvaz bryanvaz commented Jun 13, 2025

Adds GitHub Copilot as a provider.

Closes #209, #173

Notes

  • ~~Tool use with Claude Sonnet 4 currently does not work reliably~~ Claude 4 now works, with a preview caveat (see follow-up comments).
  • This is currently an experimental feature. It will need some time in the wild to understand what the edge cases are with the OpenCode agent prompts and the GitHub Copilot system prompt.
  • Currently requires the user to either already use GitHub Copilot through a supported plugin (gh, vscode, or nvim), or to have prior knowledge of how to obtain a GitHub token.
  • Requires that the user knows how to prompt the agent so that responses stay within Copilot's tighter output token limits on certain models.

@bigbabyjack

i will be very grateful for this

@kujtimiihoxha
Collaborator

@bryanvaz thanks for this PR. I'll give it a review; it will be a bit slower than usual because of #228, but this is really, really awesome 🙌

)

// GitHub Copilot models available through GitHub's API
var CopilotModels = map[ModelID]Model{
Collaborator

@kujtimiihoxha kujtimiihoxha Jun 14, 2025


One thing here, maybe we can call https://api.githubcopilot.com/models instead of hard coding these here?

All of these configurations should be way simpler than they are now, will find a better way to handle this in the future.
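A hedged sketch of that idea in Go: query the models endpoint at startup instead of hard-coding `CopilotModels`. The response schema assumed here (a `data` array of model objects) is a guess about the endpoint, and `fetchCopilotModels`/`parseModels` are illustrative names, not code from this PR:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

type copilotModel struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// parseModels decodes the assumed {"data":[...]} payload from the endpoint.
func parseModels(body []byte) ([]copilotModel, error) {
	var out struct {
		Data []copilotModel `json:"data"`
	}
	if err := json.Unmarshal(body, &out); err != nil {
		return nil, err
	}
	return out.Data, nil
}

// fetchCopilotModels calls https://api.githubcopilot.com/models with a
// bearer token and returns the advertised models.
func fetchCopilotModels(token string) ([]copilotModel, error) {
	req, err := http.NewRequest("GET", "https://api.githubcopilot.com/models", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	return parseModels(body)
}

func main() {
	// Offline demo of the parsing path only.
	models, _ := parseModels([]byte(`{"data":[{"id":"claude-sonnet-4","name":"Claude Sonnet 4"}]}`))
	fmt.Println(len(models), models[0].ID)
}
```

The downside, noted later in the thread, is that a dynamic list does not carry the cost and max-context metadata the current hard-coded tables provide.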

Author

@bryanvaz bryanvaz Jun 14, 2025


Yea, I added a TODO comment in the config.go of this PR to move to a dynamic model polling system for Copilot at least, if not all the providers; however, this would require rearchitecting the entire model/provider system.

For now, I wanted to keep the scope of the PR tight so we can at least make sure the agent's prompts work with the copilot supervisor/server-side system prompt on known mainline models.

If you agree that moving to a dynamic provider definition makes sense for the future (similar to how avante.nvim handles config), I'd say we should open a separate issue, as it will require a fair amount of work (and a decision on how much backward compatibility you want to support).

Collaborator


Yep, that makes sense for sure. I would say let's start a discussion thread on this. I would love to understand more about how avante.nvim handles this; the main reason we have the setup we have today is for calculating cost and max token context.

I used to use avante before but it started behaving weird after the introduction of cursor agent mode.

@bryanvaz
Author

bryanvaz commented Jun 14, 2025

@bryanvaz thanks for this PR will give it a review it will be a bit slower than usual because of #228 but this is really really awesome 🙌

No worries. Ironically, getting the Copilot model files set up was a 30-minute task, but I had to burn a fair number of dev hours understanding how the model/provider config system works in order to get opencode to not error out or disable the models on boot.

Just another note: one of my devs sent me a message last night that their opencode agent hit some sort of prompt error with Copilot halfway through a task. I'll try to run it down sometime this weekend, but I suspect there are a few more prompt structures that Copilot will reject; unfortunately, the only way to find them is through a larger group trying to use the tool.

@kujtimiihoxha
Collaborator

@bryanvaz yeah, the whole provider/model config is not ideal and I really want to rework it. I think the biggest issue with the GitHub Copilot models is that they are not made to be used outside their own tooling, but if we figure it out it will be awesome. There is a way to define provider-specific system prompts (https://github.com/opencode-ai/opencode/blob/main/internal/llm/prompt/coder.go#L18); maybe tweaking this helps 🤔
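The provider-specific prompt idea could be sketched like this. This is a hedged illustration of the mechanism, not the actual coder.go code; `basePrompt`, `copilotAddendum`, and `coderPrompt` are hypothetical names:

```go
package main

import "fmt"

// basePrompt stands in for the shared OpenCode coder system prompt.
const basePrompt = "You are OpenCode, an agentic coding assistant."

// copilotAddendum is a hypothetical tweak to keep responses inside Copilot's
// tighter output token limits and away from prompt structures it rejects.
const copilotAddendum = "Keep responses concise and emit one tool call at a time."

// coderPrompt selects a system prompt per provider, mirroring the pattern
// referenced in internal/llm/prompt/coder.go.
func coderPrompt(provider string) string {
	switch provider {
	case "copilot":
		return basePrompt + "\n\n" + copilotAddendum
	default:
		return basePrompt
	}
}

func main() {
	fmt.Println(coderPrompt("copilot"))
}
```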

@bryanvaz
Author

Added support for claude-sonnet-4.

Support is experimental for two reasons:

  • Claude Sonnet 4 for Github Copilot is in preview and subject to change
  • The current OpenAI schema and SDK (which Copilot uses under the hood) do not support parallel tool usage; however, Claude 4 returns parallel tool calls when given specific prompt instructions. Normally this behaviour would cause a client to error out. This PR has a monkeypatch that rerolls the tool usage as the message stream comes in; therefore, if OpenAI/GitHub ever adds native support for parallel usage, the monkeypatch will probably break.

Additional note: the models endpoint reports that claude-sonnet-4 via Copilot only has a 128k context window, while claude-sonnet-4-0 has a 200k context window via AWS Bedrock and the Anthropic API. This may change at some point.

@kujtimiihoxha note for future

@kujtimiihoxha
Collaborator

kujtimiihoxha commented Jun 18, 2025

@bryanvaz

The current OpenAI schema and SDK

I have a feeling I have seen 4o call multiple tools at the same time (I don't use OpenAI models that much) 🤔 Can you maybe share what the tool call looks like from Copilot?

@kujtimiihoxha
Collaborator

FYI I will try to dig into this PR later today, thanks again for all the work you put into this.

@bryanvaz
Author

I've improved the logging system as part of the PR in order to properly diagnose the issue. When you run the PR's build, use:

OPENCODE_DEV_DEBUG="true" opencode -d

The request/response messages coming from the API will be written to .opencode/messages, and .opencode/debug.log has been cleaned up to show the logger source and which messages file the current response is written to.

Essentially, the issue with the Copilot API's streaming response is that there is no mechanism for tool delimitation (e.g. Anthropic has ContentBlockStartEvent with event.ContentBlock.Type == "tool_use"). The OpenAI SDK doesn't appear to have one for streaming responses (or, at a minimum, Copilot is not using it). As a result, the ChatCompletionAccumulator will concatenate the data from all the tool calls into one tool call; for example, 4 view tool calls in a single response will accumulate into a single tool call with the following structure:

{
  "id": "msg_vrtx_01SKV5kFauTpBjDsHnX7vdbY",
  "choices": [
    {
      "finish_reason": "tool_calls",
      "index": 0,
      "logprobs": {
        "content": null,
        "refusal": null
      },
      "message": {
        "content": "I'll examine the existing customer controller and models to understand the pattern, then create a similar user controller.",
        "refusal": "",
        "role": "assistant",
        "annotations": null,
        "audio": {
          "id": "",
          "data": "",
          "expires_at": 0,
          "transcript": ""
        },
        "function_call": {
          "arguments": "",
          "name": ""
        },
        "tool_calls": [
          {
            "id": "toolu_vrtx_01CEjLz5xTHRhGyaM8D9o6Jq",
            "function": {
              "arguments": "{\"file_path\": \"/Users/bryan/code/work/randoapp/pkg/services/customers_ctlr.go\"}{\"file_path\": \"/Users/bryan/code/work/randoapp/pkg/models/customer.go\"}{\"file_path\": \"/Users/bryan/code/work/randoapp/pkg/models/user.go\"}{\"file_path\": \"/Users/bryan/code/work/randoapp/db/migrations/cstore/20250228010158_CreateUsersTable.up.sql\"}",
              "name": "viewviewviewview"
            },
            "type": "function"
          }
        ]
      }
    }
  ],
  "created": 1750093719,
  "model": "claude-sonnet-4",
  "object": "chat.completion",
  "service_tier": "",
  "system_fingerprint": "",
  "usage": {
    "completion_tokens": 277,
    "prompt_tokens": 22544,
    "total_tokens": 22821,
    "completion_tokens_details": {
      "accepted_prediction_tokens": 0,
      "audio_tokens": 0,
      "reasoning_tokens": 0,
      "rejected_prediction_tokens": 0
    },
    "prompt_tokens_details": {
      "audio_tokens": 0,
      "cached_tokens": 0
    }
  }
}

Essentially, all 4 view calls become viewviewviewview as the tool name, and all the call parameters become concatenated as well.
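A hedged sketch of how such an accumulated call could be re-split client-side (this illustrates the idea behind the monkeypatch, not its actual code): `json.Decoder` reads back-to-back JSON objects from one string, and the repeated tool name can be recovered once the call count is known. `splitConcatenatedArgs` and `splitRepeatedName` are illustrative names:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// splitConcatenatedArgs splits a string of back-to-back JSON objects (what
// the accumulator produces for parallel tool calls) into one JSON string
// per call, using json.Decoder's ability to read a stream of values.
func splitConcatenatedArgs(args string) ([]string, error) {
	dec := json.NewDecoder(strings.NewReader(args))
	var parts []string
	for {
		var raw json.RawMessage
		if err := dec.Decode(&raw); err == io.EOF {
			break
		} else if err != nil {
			return nil, err
		}
		parts = append(parts, string(raw))
	}
	return parts, nil
}

// splitRepeatedName recovers the original tool name from a run like
// "viewviewviewview" once the call count is known from the argument objects.
func splitRepeatedName(name string, count int) string {
	if count > 0 && len(name)%count == 0 {
		candidate := name[:len(name)/count]
		if strings.Repeat(candidate, count) == name {
			return candidate
		}
	}
	return name
}

func main() {
	parts, _ := splitConcatenatedArgs(`{"file_path":"a.go"}{"file_path":"b.go"}`)
	fmt.Println(len(parts), splitRepeatedName("viewview", len(parts)))
}
```

Note the limitation: parallel calls to *different* tools would not produce an evenly repeated name, so a real fix would need to track tool-call boundaries per stream delta rather than repair the accumulated result.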

Development

Successfully merging this pull request may close these issues.

FEATURE: Add github copilot model