A wrapper around the GitHub Copilot API that makes it OpenAI compatible, so it can be used with other tools such as AI assistants, local interfaces, and development utilities.
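In practice, any OpenAI-compatible client can point its usual chat-completions call at the local server. A minimal sketch, assuming the wrapper exposes the standard OpenAI-style `/v1/chat/completions` route on its default port 4141:

```sh
# Illustrative request; the /v1/chat/completions path follows the OpenAI API shape (assumption)
curl http://localhost:4141/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```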
- 🔍 Smart Format Detection: Automatically detects Anthropic vs OpenAI request formats
- 🚀 Enhanced Streaming: Improved streaming response handling with better error management
- 🤖 Universal Model Support: Support for all GitHub Copilot models, including Claude (Anthropic) models
- 🐳 Multi-Architecture Docker: Docker images support both AMD64 and ARM64 architectures
- ⚡ Optimized Performance: Simplified request processing with better compatibility
- 🛠️ Improved Error Handling: Better error messages and debugging capabilities
Demo video: copilot-api-demo.mp4
- Bun (>= 1.2.x)
- GitHub account with Copilot subscription (Individual or Business)
To install dependencies, run:
```sh
bun install
```
Pull and run the latest image from Docker Hub:
```sh
# Pull the latest image
docker pull nghyane/copilot-api:latest

# Run with GitHub token
docker run -p 4141:4141 -e GH_TOKEN=your_github_token nghyane/copilot-api:latest
```
The Docker images support both AMD64 and ARM64 architectures:
```sh
# For ARM64 (Apple Silicon, ARM servers)
docker pull nghyane/copilot-api:latest-multiarch

# Run multi-arch image
docker run -p 4141:4141 -e GH_TOKEN=your_github_token nghyane/copilot-api:latest-multiarch
```
Build your own image:
```sh
docker build -t copilot-api .
```
Run the container:
```sh
docker run -p 4141:4141 -e GH_TOKEN=your_github_token copilot-api
```
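For unattended use, the same container can run detached with a restart policy; these are standard docker CLI flags applied to the run example above:

```sh
# Run in the background, auto-restarting unless explicitly stopped
docker run -d \
  --name copilot-api \
  --restart unless-stopped \
  -p 4141:4141 \
  -e GH_TOKEN=your_github_token \
  copilot-api
```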
You can run the project directly using npx:
```sh
npx copilot-api@latest start
```
With options:
```sh
npx copilot-api@latest start --port 8080
```
For authentication only:
```sh
npx copilot-api@latest auth
```
Copilot API now uses a subcommand structure with two main commands:

- `start`: Start the Copilot API server (default command). This command will also handle authentication if needed.
- `auth`: Run the GitHub authentication flow without starting the server. This is typically used if you need to generate a token for use with the `--github-token` option, especially in non-interactive environments.
The following command line options are available for the `start` command:

| Option | Description | Default | Alias |
|---|---|---|---|
| `--port` | Port to listen on | 4141 | `-p` |
| `--verbose` | Enable verbose logging | false | `-v` |
| `--business` | Use a business plan GitHub account | false | none |
| `--enterprise` | Use an enterprise plan GitHub account | false | none |
| `--manual` | Enable manual request approval | false | none |
| `--rate-limit` | Rate limit in seconds between requests | none | `-r` |
| `--wait` | Wait instead of erroring when the rate limit is hit | false | `-w` |
| `--github-token` | Provide a GitHub token directly (must be generated using the `auth` subcommand) | none | `-g` |
The `auth` command accepts the following options:

| Option | Description | Default | Alias |
|---|---|---|---|
| `--verbose` | Enable verbose logging | false | `-v` |
Using with npx:
```sh
# Basic usage with the start command
npx copilot-api@latest start

# Run on a custom port with verbose logging
npx copilot-api@latest start --port 8080 --verbose

# Use with a business plan GitHub account
npx copilot-api@latest start --business

# Use with an enterprise plan GitHub account
npx copilot-api@latest start --enterprise

# Enable manual approval for each request
npx copilot-api@latest start --manual

# Set a rate limit of 30 seconds between requests
npx copilot-api@latest start --rate-limit 30

# Wait instead of erroring when the rate limit is hit
npx copilot-api@latest start --rate-limit 30 --wait

# Provide a GitHub token directly
npx copilot-api@latest start --github-token ghp_YOUR_TOKEN_HERE

# Run only the auth flow
npx copilot-api@latest auth

# Run the auth flow with verbose logging
npx copilot-api@latest auth --verbose
```
The project can be run from source in several ways:
```sh
# Development mode
bun run dev

# Production mode
bun run start
```
The API supports all GitHub Copilot models including:
- OpenAI Models: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
- Anthropic Models: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
- Other Models: As available through GitHub Copilot (see the query after this list)
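To check which models your account actually exposes, you can ask the running server. A sketch, assuming the wrapper implements the OpenAI-style `/v1/models` route:

```sh
# List the models the proxy currently offers (endpoint path is an assumption)
curl http://localhost:4141/v1/models
```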
- Automatic Detection: The API automatically detects whether incoming requests are in OpenAI or Anthropic format
- Universal Support: Works with tools that send either format (Cursor, Continue, etc.)
- No Conversion Needed: Requests are processed in their original format for maximum compatibility (the sketch after this list shows both shapes)
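For reference, these are the two body shapes a client might send. They follow the public OpenAI and Anthropic API specs; sending both to the same endpoint path is an assumption here, not documented behavior:

```sh
# OpenAI-style body: chat messages, no required top-level max_tokens
curl http://localhost:4141/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-3-5-sonnet", "messages": [{"role": "user", "content": "Hi"}]}'

# Anthropic-style body: top-level max_tokens is required and system is a separate field
curl http://localhost:4141/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-3-5-sonnet", "max_tokens": 1024, "system": "Be brief.", "messages": [{"role": "user", "content": "Hi"}]}'
```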
- Consider using free models (e.g., Gemini, Mistral, OpenRouter) as the `weak-model`
- Use architect mode sparingly
- Disable `yes-always` in your aider configuration (a sample aider setup follows this list)
- Be mindful that Claude 3.7 thinking mode consumes more tokens
- Enable the `--manual` flag to review and approve each request before processing
- If you have a GitHub business or enterprise plan account with Copilot, use the `--business` or `--enterprise` flag respectively
- For Claude models, the API maintains full compatibility with both OpenAI and Anthropic request formats
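Pointing aider at a locally running instance typically looks like the following sketch; the base URL, placeholder key, and model names are illustrative assumptions, not values mandated by this project:

```sh
# aider reads standard OpenAI-compatible settings from the environment
export OPENAI_API_BASE=http://localhost:4141/v1
export OPENAI_API_KEY=dummy  # the proxy authenticates via GitHub, so a placeholder key is assumed to suffice

# Stronger model for edits, cheaper model as the weak-model
aider --model openai/gpt-4 --weak-model openai/gpt-3.5-turbo
```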
When using the `--manual` flag, the server will prompt you to approve each incoming request:

```
? Accept incoming request? > (y/N)
```
This helps you control usage and monitor requests in real-time.
- Better Error Handling: Improved stream interruption handling and connection management
- Optimized Performance: Reduced latency and better resource utilization
- Connection Resilience: Automatic cleanup of broken connections (a streaming request example follows this list)
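To exercise the streaming path from the command line, a client can opt into server-sent events with the standard OpenAI `stream` flag (the endpoint path is an assumption, as above):

```sh
# -N disables curl's buffering so SSE chunks print as they arrive
curl -N http://localhost:4141/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hi"}], "stream": true}'
```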
- Smart Detection: Automatically identifies Anthropic vs OpenAI request formats
- No Conversion Overhead: Processes requests in their native format
- Universal Compatibility: Works with any client that sends either format
- Multi-Architecture: Native support for AMD64 and ARM64
- Optimized Images: Smaller image sizes with better caching
- Production Ready: Multi-stage builds for optimal performance
- Tool Calling Issues with Claude Models
  - Some Claude models may have limited tool calling support
  - Try using different model variants if tool calls fail
- Format Detection Problems
  - The API automatically detects request formats
  - If you encounter issues, check that the request structure matches the OpenAI or Anthropic spec
- Docker Issues
  - Use the appropriate architecture image for your platform (see the inspect commands below)
  - Ensure the GitHub token is properly set via the environment variable
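If you suspect an architecture mismatch, the standard docker CLI can confirm what you pulled versus what the host runs:

```sh
# Architecture baked into the pulled image
docker image inspect --format '{{.Architecture}}' nghyane/copilot-api:latest

# Architecture of the local Docker host
docker info --format '{{.Architecture}}'
```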
Enable verbose logging for troubleshooting:
```sh
npx copilot-api@latest start --verbose
```
| Image | Architecture | Size | Use Case |
|---|---|---|---|
| `nghyane/copilot-api:latest` | AMD64 | ~215 MB | Standard x86_64 systems |
| `nghyane/copilot-api:latest-multiarch` | AMD64 + ARM64 | ~274 MB | Universal compatibility |
| `nghyane/copilot-api:v1.0.0` | AMD64 | ~215 MB | Specific pinned version |
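To pin a deployment to a fixed release rather than a moving tag, pull the versioned image and run it the same way as `latest`:

```sh
docker pull nghyane/copilot-api:v1.0.0
docker run -p 4141:4141 -e GH_TOKEN=your_github_token nghyane/copilot-api:v1.0.0
```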
- ✨ Added smart format detection for Anthropic vs OpenAI requests
- 🚀 Enhanced streaming response handling with better error management
- 🤖 Improved support for all GitHub Copilot models, including Claude
- 🐳 Added multi-architecture Docker support (AMD64 + ARM64)
- ⚡ Optimized request processing and removed unnecessary format conversion
- 🛠️ Better error handling and debugging capabilities
- 🔧 Simplified codebase with improved maintainability
- 🎉 Initial stable release
- 📦 NPM package publication
- 🐳 Docker support
- 🔐 GitHub authentication flow
- 🚦 Rate limiting and manual approval features
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
- Clone the repository
- Install dependencies: `bun install`
- Run in development mode: `bun run dev`
- Build for production: `bun run build`
This project is for educational purposes only. Please respect GitHub's terms of service and use responsibly.
If you find this project helpful, please consider:
- ⭐ Starring the repository
- 🐛 Reporting issues
- 💡 Suggesting improvements
- ☕ Supporting the developer