Update

It looks like GitHub Copilot Chat now uses GPT-4 by default. However, I will leave this repo up in case someone wants to reuse it for a different purpose, such as pointing Copilot Chat at a local LLM.

Custom Proxy for GitHub Copilot Chat

This custom proxy forwards HTTP requests to their original destination, except for requests to the Copilot chat endpoints. When it detects such a request, it rewrites it to use GPT-4 via the official OpenAI API endpoint.
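As a rough illustration, proxy.py is a mitmproxy addon along these lines. This is only a sketch: the Copilot hostname matched below is an assumed placeholder, not necessarily the exact endpoint the real script checks for.

# Minimal sketch of a mitmproxy addon that performs the rewrite.
# COPILOT_HOST is a hypothetical placeholder, not necessarily the
# endpoint the actual proxy.py matches on.
import json
import os

from dotenv import load_dotenv
from mitmproxy import http

load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

COPILOT_HOST = "copilot-proxy.githubusercontent.com"  # hypothetical

def request(flow: http.HTTPFlow) -> None:
    # Forward every request except Copilot chat calls untouched.
    if COPILOT_HOST not in flow.request.pretty_host:
        return

    # Redirect the call to the official OpenAI chat completions endpoint.
    flow.request.scheme = "https"
    flow.request.host = "api.openai.com"
    flow.request.port = 443
    flow.request.path = "/v1/chat/completions"

    # Authenticate with the OpenAI API key from the .env file.
    flow.request.headers["Authorization"] = f"Bearer {OPENAI_API_KEY}"

    # Force the GPT-4 model in the JSON request body.
    try:
        body = json.loads(flow.request.get_text())
    except (ValueError, TypeError):
        return
    body["model"] = "gpt-4"
    flow.request.set_text(json.dumps(body))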

Requirements

  • Python 3.6 or higher
  • mitmproxy
  • python-dotenv

Installation

  1. Clone the repository:
git clone https://github.com/yourusername/custom-proxy.git
cd custom-proxy
  2. Install the required packages:
pip install -r requirements.txt
  3. Create a .env file in the project directory and add your OpenAI API key:
OPENAI_API_KEY=your_openai_api_key

Replace your_openai_api_key with your actual API key.

Usage

  1. Start the proxy server:
mitmdump -s proxy.py -p 8090

This command starts the proxy server on port 8090.

  2. Configure your application to use the proxy server by setting the HTTP_PROXY and HTTPS_PROXY environment variables to http://localhost:8090 (see the example after these steps).

  3. Run your application, and the proxy will intercept and modify the matching Copilot requests as described above.
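For example, in a Unix shell you could set the variables like this before launching the application (the port matches the mitmdump command above):

export HTTP_PROXY=http://localhost:8090
export HTTPS_PROXY=http://localhost:8090

Note that because mitmproxy intercepts HTTPS traffic, the client machine also needs to trust the mitmproxy CA certificate, which mitmproxy generates under ~/.mitmproxy on first run.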

License

This project is licensed under the MIT License.
