claude-code-copilot

Use your Claude Code Max and GitHub Copilot Pro subscriptions together — never run out of tokens.

This installs three things:

  1. copilot-ask — call any GitHub Copilot model (Claude Opus 4.6, GPT-5.4, Grok, …) from the shell. Claude Code can invoke it mid-task as a delegation tool: second opinions, cheap fan-out, fallback reasoning.
  2. claude-cop — a shell function that launches a full Claude Code session routed to Copilot's Claude API instead of Anthropic's. Use it when you hit an Anthropic rate limit; it burns Copilot quota instead.
  3. Autonomy patches to ~/.claude/settings.json — auto-mode by default, copilot-* helpers allowlisted, proactive mode on.
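The settings patch in item 3 presumably amounts to a JSON fragment along these lines (the key names follow the "How it works" section; the exact shape of the `allow` list is an assumption, not a copy of what the installer writes):

```json
{
  "permissions": {
    "defaultMode": "auto",
    "allow": ["Bash(copilot-*)"]
  },
  "proactive": {
    "autoEnable": true
  }
}
```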

Why

You pay for both Claude Code Max and GitHub Copilot Pro. Claude Code talks to Anthropic by default and ignores Copilot's model pool entirely. This bridge lets one session reach across both.

  • Main session: Claude Code with your Anthropic quota
  • Mid-task consults: copilot-ask -m gpt-5.4 "second opinion on X" — free-to-you, distinct reasoning
  • Rate-limited: claude-cop — same Claude Code UX, Copilot quota

Prerequisites

  • Linux or macOS
  • bash or zsh
  • gh authenticated (gh auth login)
  • jq, curl
  • Claude Code (install) — optional for copilot-ask, required for claude-cop
  • An active GitHub Copilot subscription (Pro, Pro+, or Business)
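A quick preflight for the command-line tools above (a convenience sketch, not part of the installer):

```shell
# Report any missing commands; returns non-zero if something is absent.
preflight() {
  ok=0
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || { echo "missing: $cmd"; ok=1; }
  done
  return $ok
}

preflight gh jq curl || echo "install the missing tools before running install.sh"
```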

Install

One-liner:

curl -fsSL https://raw.githubusercontent.com/koompi/claude-code-copilot/main/install.sh | bash

Or clone and install:

git clone https://github.com/koompi/claude-code-copilot.git
cd claude-code-copilot
./scripts/install.sh

Re-running either is safe — the installer is idempotent and uses marker blocks in shell rc files for clean updates.
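The marker-block idea, illustrated: everything between a BEGIN and END marker in your rc file is owned by the installer, so an update or uninstall can replace or delete the block without touching the rest of your config. The marker text below is an assumption for illustration, not what install.sh actually writes:

```shell
# Delete the installer-owned block from an rc file, keeping a .bak backup.
remove_block() {
  sed -i.bak '/# >>> claude-code-copilot >>>/,/# <<< claude-code-copilot <<</d' "$1"
}
```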

Verify

copilot-ask -m claude-sonnet-4.6 "say ok"
copilot-models
exec $SHELL       # reload your shell so the new functions are picked up
type claude-cop   # should report a shell function

Inside a Claude Code session, the model will now autonomously reach for copilot-ask when it wants a second opinion — no permission prompt, the allowlist covers it.

Usage

copilot-ask — one-shot model calls

copilot-ask -m claude-opus-4.6 "plan the migration from X to Y given: ..."
copilot-ask -m gpt-5.4 -f src/foo.rs "review this file for races"
git diff | copilot-ask -m gpt-5.3-codex "spot bugs in this diff"
cat error.log | copilot-ask -m claude-sonnet-4.6 "what's failing and why"

Flags: -m MODEL · -f FILE · -s SYSTEM_PROMPT · -r (no default system) · -t N (max output tokens).

copilot-models — curated model list

copilot-models          # curated "best" set
copilot-models --all    # every chat model exposed by the API
copilot-models --json   # machine-readable

copilot-stats — usage rollup

copilot-stats           # summary + per-model counts
copilot-stats --tail 20 # last 20 calls
copilot-stats --raw     # raw log
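The log schema isn't documented here, but assuming one call per line with the model name in a fixed field, a per-model rollup in the spirit of copilot-stats could be sketched as:

```shell
# Count calls per model in a log file; $1 = log path, field 2 assumed
# to hold the model name (an assumption about the log format).
rollup() {
  awk '{count[$2]++} END {for (m in count) print m, count[m]}' "$1"
}
```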

claude-cop / claude-cop-opus — rate-limit fallback

claude-cop                 # interactive Claude Sonnet 4.6 via Copilot
claude-cop -p "one-shot"   # headless
claude-cop-opus            # same thing, Claude Opus 4.6

Supported models

Verified working via copilot-ask:

| Model | Strength |
| --- | --- |
| claude-opus-4.6 | Deep reasoning, planning, architecture |
| claude-sonnet-4.6 | Balanced coding — default second opinion |
| gpt-5.4 | OpenAI flagship — contrarian perspective |
| gpt-5.4-mini | Fast/cheap OpenAI — quick scans |
| gpt-5.3-codex | Coding specialist — refactor review |
| gpt-5.2-codex | Coding specialist backup |
| grok-code-fast-1 | xAI fast reviewer — distinct training corpus |
| goldeneye-free-auto | Free pool — use for throwaway delegation |

Gemini in Copilot isn't available on Pro plans as of this writing — it flickers into the model list but API calls return model_not_supported. If your org enables it, copilot-ask -m gemini-2.5-pro "..." will just start working, no script changes.

How it works

  • copilot-ask hits api.githubcopilot.com directly with your gh auth token, auto-routing between /chat/completions and /responses depending on the model. Streams SSE. Logs every call to ~/.claude/copilot-asks.log.
  • claude-cop exports ANTHROPIC_BASE_URL=https://api.githubcopilot.com and passes the gh token as ANTHROPIC_AUTH_TOKEN. Claude Code speaks to Copilot's Anthropic-compatible endpoint transparently.
  • Settings patches set permissions.defaultMode: "auto", add Bash(copilot-*) to the allowlist, and enable proactive.autoEnable so background work keeps moving without permission prompts.
  • rules/copilot.md is @included from ~/.claude/CLAUDE.md, teaching Claude Code when to reach for copilot-ask vs direct tools.
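The two routing paths above can be sketched roughly as follows. The header names, payload shape, and `claude` invocation are assumptions based on this README, not the real implementation in bin/copilot-ask:

```shell
# (1) copilot-ask path: assemble a chat-completions payload and POST it
# with the gh token. Naive string assembly here; the real tool presumably
# escapes JSON properly (jq is a prerequisite).
build_payload() {
  printf '{"model":"%s","stream":true,"messages":[{"role":"user","content":"%s"}]}' "$1" "$2"
}
# curl -sS https://api.githubcopilot.com/chat/completions \
#   -H "Authorization: Bearer $(gh auth token)" \
#   -H "Content-Type: application/json" \
#   -d "$(build_payload claude-sonnet-4.6 'say ok')"

# (2) claude-cop path: repoint Claude Code's Anthropic client at Copilot's
# Anthropic-compatible endpoint and authenticate with the gh token.
claude_cop_sketch() {
  export ANTHROPIC_BASE_URL="https://api.githubcopilot.com"
  export ANTHROPIC_AUTH_TOKEN="$(gh auth token)"
  claude "$@"
}
```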

Uninstall

~/.local/share/claude-code-copilot/scripts/uninstall.sh
# or if you cloned elsewhere:
./scripts/uninstall.sh

Removes the binaries, rules file, shell functions (marker-block aware), and settings patches. Use --purge-log to also delete ~/.claude/copilot-asks.log.

Security notes

  • No secrets are baked into this repo. Tokens come from gh auth token at runtime — each user authenticates with their own GitHub account.
  • copilot-ask prompts are sent to GitHub Copilot, which has its own data handling policy. Don't paste production credentials into prompts.
  • The installer only modifies files in your home directory. It does not touch /etc, systemd, or any shared resource.

License

MIT — see LICENSE.

Contributing

Issues and PRs welcome. If a new Copilot model shows up and copilot-ask doesn't route it correctly, the fix is usually one line in the endpoint case block in bin/copilot-ask.
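A hypothetical shape of that case block — which model families route to /responses versus /chat/completions is a guess here; the README only says routing depends on the model:

```shell
# Map a model name to its Copilot API endpoint (illustrative split only).
endpoint_for() {
  case "$1" in
    gpt-*-codex) echo "/responses" ;;
    *)           echo "/chat/completions" ;;
  esac
}
```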
