Copilot API

⚠️ EDUCATIONAL PURPOSE ONLY ⚠️ This project is a reverse-engineered implementation of the GitHub Copilot API created for educational purposes only. It is not officially supported by GitHub and should not be used in production environments.


Project Overview

A wrapper around the GitHub Copilot API that exposes an OpenAI-compatible interface, making Copilot usable from other tools such as AI assistants, local chat interfaces, and development utilities.
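
Because the server speaks the OpenAI wire format, any OpenAI-style client can target it. The sketch below builds such a request in TypeScript; the `/v1/chat/completions` path and the `http://localhost:4141` base URL are assumptions based on the OpenAI spec and this server's default port, not verified against the project's router.

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build a fetch-ready OpenAI-format chat completion request aimed at a
// locally running copilot-api server. Hypothetical helper for illustration.
function buildChatRequest(
  model: string,
  messages: ChatMessage[],
  baseUrl = "http://localhost:4141",
) {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// Usage:
//   const { url, init } = buildChatRequest("gpt-4", [{ role: "user", content: "Hello" }]);
//   const response = await fetch(url, init);
```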

✨ Latest Features (v1.0.1-beta.1)

  • πŸ” Smart Format Detection: Automatically detects Anthropic vs OpenAI request formats
  • πŸš€ Enhanced Streaming: Improved streaming response handling with better error management
  • πŸ€– Universal Model Support: Support for all GitHub Copilot models including Claude (Anthropic) models
  • 🐳 Multi-Architecture Docker: Docker images support both AMD64 and ARM64 architectures
  • ⚑ Optimized Performance: Simplified request processing with better compatibility
  • πŸ› οΈ Improved Error Handling: Better error messages and debugging capabilities

Demo

copilot-api-demo.mp4

Prerequisites

  • Bun (>= 1.2.x)
  • GitHub account with Copilot subscription (Individual or Business)

Installation

To install dependencies, run:

```sh
bun install
```

Using with Docker

Pre-built Images (Recommended)

Pull and run the latest image from Docker Hub:

```sh
# Pull the latest image
docker pull nghyane/copilot-api:latest

# Run with GitHub token
docker run -p 4141:4141 -e GH_TOKEN=your_github_token nghyane/copilot-api:latest
```

Multi-Architecture Support

The Docker images support both AMD64 and ARM64 architectures:

```sh
# For ARM64 (Apple Silicon, ARM servers)
docker pull nghyane/copilot-api:latest-multiarch

# Run multi-arch image
docker run -p 4141:4141 -e GH_TOKEN=your_github_token nghyane/copilot-api:latest-multiarch
```

Build from Source

Build your own image:

```sh
docker build -t copilot-api .
```

Run the container:

```sh
docker run -p 4141:4141 -e GH_TOKEN=your_github_token copilot-api
```

Using with npx

You can run the project directly using npx:

```sh
npx copilot-api@latest start
```

With options:

```sh
npx copilot-api@latest start --port 8080
```

For authentication only:

```sh
npx copilot-api@latest auth
```

Command Structure

Copilot API now uses a subcommand structure with two main commands:

  • start: Start the Copilot API server (default command). This command will also handle authentication if needed.
  • auth: Run GitHub authentication flow without starting the server. This is typically used if you need to generate a token for use with the --github-token option, especially in non-interactive environments.

Command Line Options

Start Command Options

The following command line options are available for the start command:

| Option | Description | Default | Alias |
| --- | --- | --- | --- |
| `--port` | Port to listen on | 4141 | `-p` |
| `--verbose` | Enable verbose logging | false | `-v` |
| `--business` | Use a business plan GitHub account | false | none |
| `--enterprise` | Use an enterprise plan GitHub account | false | none |
| `--manual` | Enable manual request approval | false | none |
| `--rate-limit` | Rate limit in seconds between requests | none | `-r` |
| `--wait` | Wait instead of erroring when the rate limit is hit | false | `-w` |
| `--github-token` | Provide a GitHub token directly (generate it with the `auth` subcommand) | none | `-g` |

Auth Command Options

| Option | Description | Default | Alias |
| --- | --- | --- | --- |
| `--verbose` | Enable verbose logging | false | `-v` |

Example Usage

Using with npx:

```sh
# Basic usage with start command
npx copilot-api@latest start

# Run on custom port with verbose logging
npx copilot-api@latest start --port 8080 --verbose

# Use with a business plan GitHub account
npx copilot-api@latest start --business

# Use with an enterprise plan GitHub account
npx copilot-api@latest start --enterprise

# Enable manual approval for each request
npx copilot-api@latest start --manual

# Set rate limit to 30 seconds between requests
npx copilot-api@latest start --rate-limit 30

# Wait instead of error when rate limit is hit
npx copilot-api@latest start --rate-limit 30 --wait

# Provide GitHub token directly
npx copilot-api@latest start --github-token ghp_YOUR_TOKEN_HERE

# Run only the auth flow
npx copilot-api@latest auth

# Run auth flow with verbose logging
npx copilot-api@latest auth --verbose
```

Running from Source

The project can be run from source in several ways:

Development Mode

```sh
bun run dev
```

Production Mode

```sh
bun run start
```

πŸ”§ Model Support

Supported Models

The API supports all GitHub Copilot models including:

  • OpenAI Models: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
  • Anthropic Models: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
  • Other Models: As available through GitHub Copilot

Format Compatibility

  • Automatic Detection: The API automatically detects whether incoming requests are in OpenAI or Anthropic format
  • Universal Support: Works with tools that send either format (Cursor, Continue, etc.)
  • No Conversion Needed: Requests are processed in their original format for maximum compatibility
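
As an illustration of the idea (not the project's actual detection code), one way to tell the two formats apart by request shape: Anthropic's Messages API puts the system prompt in a top-level `system` field, while OpenAI-style requests carry it as a message with role `"system"`.

```typescript
type Format = "anthropic" | "openai";

// Illustrative format sniffer. Real detection logic may weigh more fields
// (e.g. Anthropic's required top-level max_tokens).
function detectFormat(body: Record<string, unknown>): Format {
  // Anthropic Messages API: system prompt is a top-level string field.
  if (typeof body.system === "string") return "anthropic";
  // OpenAI chat format: system prompt travels as a message role.
  const messages = body.messages as Array<{ role?: string }> | undefined;
  if (messages?.some((m) => m.role === "system")) return "openai";
  // Ambiguous shapes default to OpenAI, the more common client format.
  return "openai";
}
```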

πŸ’‘ Usage Tips

  • Consider using free models (e.g., Gemini, Mistral, OpenRouter) as aider's weak model
  • Use architect mode sparingly
  • Disable yes-always in your aider configuration
  • Be mindful that Claude 3.7 thinking mode consumes more tokens
  • Enable the --manual flag to review and approve each request before processing
  • If you have a GitHub business or enterprise plan account with Copilot, use the --business or --enterprise flag respectively
  • For Claude models, the API maintains full compatibility with both OpenAI and Anthropic request formats

Manual Request Approval

When using the --manual flag, the server will prompt you to approve each incoming request:

```
? Accept incoming request? > (y/N)
```

This helps you control usage and monitor requests in real-time.

πŸš€ Technical Improvements

Enhanced Streaming

  • Better Error Handling: Improved stream interruption handling and connection management
  • Optimized Performance: Reduced latency and better resource utilization
  • Connection Resilience: Automatic cleanup of broken connections
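
For context, OpenAI-compatible endpoints stream as Server-Sent Events: `data: {...}` lines terminated by a `data: [DONE]` sentinel. The sketch below parses one such chunk; it is a simplified illustration, not the project's internal stream handler.

```typescript
// Parse the complete SSE events contained in one text chunk.
function parseSseChunk(chunk: string): Array<Record<string, unknown>> {
  const events: Array<Record<string, unknown>> = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const data = trimmed.slice("data:".length).trim();
    if (data === "[DONE]") break; // end-of-stream sentinel
    try {
      events.push(JSON.parse(data));
    } catch {
      // A chunk boundary can split an event mid-JSON; a real
      // implementation would buffer the fragment until the next chunk.
    }
  }
  return events;
}
```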

Format Detection

  • Smart Detection: Automatically identifies Anthropic vs OpenAI request formats
  • No Conversion Overhead: Processes requests in their native format
  • Universal Compatibility: Works with any client that sends either format

Docker Enhancements

  • Multi-Architecture: Native support for AMD64 and ARM64
  • Optimized Images: Smaller image sizes with better caching
  • Production Ready: Multi-stage builds for optimal performance

πŸ” Troubleshooting

Common Issues

  1. Tool Calling Issues with Claude Models

    • Some Claude models may have limited tool calling support
    • Try using different model variants if tool calls fail
  2. Format Detection Problems

    • The API automatically detects request formats
    • If you encounter issues, check the request structure matches OpenAI or Anthropic specs
  3. Docker Issues

    • Use the appropriate architecture image for your platform
    • Ensure GitHub token is properly set via environment variable

Debug Mode

Enable verbose logging for troubleshooting:

```sh
npx copilot-api@latest start --verbose
```

πŸ“¦ Available Images

| Image | Architecture | Size | Use Case |
| --- | --- | --- | --- |
| `nghyane/copilot-api:latest` | AMD64 | ~215MB | Standard x86_64 systems |
| `nghyane/copilot-api:latest-multiarch` | AMD64 + ARM64 | ~274MB | Universal compatibility |
| `nghyane/copilot-api:v1.0.0` | AMD64 | ~215MB | Specific version |

πŸ“ Changelog

v1.0.1-beta.1 (Latest)

  • ✨ Added smart format detection for Anthropic vs OpenAI requests
  • πŸš€ Enhanced streaming response handling with better error management
  • πŸ€– Improved support for all GitHub Copilot models including Claude
  • 🐳 Added multi-architecture Docker support (AMD64 + ARM64)
  • ⚑ Optimized request processing and removed unnecessary format conversion
  • πŸ› οΈ Better error handling and debugging capabilities
  • πŸ”§ Simplified codebase with improved maintainability

v1.0.0

  • πŸŽ‰ Initial stable release
  • πŸ“¦ NPM package publication
  • 🐳 Docker support
  • πŸ” GitHub authentication flow
  • πŸ“Š Rate limiting and manual approval features

🀝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

Development Setup

  1. Clone the repository
  2. Install dependencies: bun install
  3. Run in development mode: bun run dev
  4. Build for production: bun run build

πŸ“„ License

This project is for educational purposes only. Please respect GitHub's terms of service and use responsibly.

⭐ Support

If you find this project helpful, please consider:

  • ⭐ Starring the repository
  • πŸ› Reporting issues
  • πŸ’‘ Suggesting improvements
  • β˜• Supporting the developer
