It looks like GitHub Copilot Chat now uses GPT-4 by default; however, I will leave this repo up in case someone wants to reuse it for a different use case, such as pointing it at a local LLM.
This custom proxy forwards HTTP requests to their original destination, except when the traffic is aimed at the Copilot chat endpoints. When it sees such a request, it rewrites it to use GPT-4 via the main OpenAI endpoint.
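The rewrite step can be sketched as a small mitmproxy addon. The Copilot hostname and the shape of the request body below are assumptions for illustration; the `proxy.py` shipped in this repo is the authoritative version.

```python
import json
import os

# Hypothetical Copilot chat host -- inspect the real traffic in mitmproxy
# before relying on this value.
COPILOT_HOST = "copilot-proxy.githubusercontent.com"

def rewrite_body(raw: str) -> str:
    """Force the "model" field to "gpt-4" in a JSON chat request body."""
    body = json.loads(raw or "{}")
    body["model"] = "gpt-4"
    return json.dumps(body)

def request(flow):
    """mitmproxy calls this hook once per outgoing request."""
    # Forward everything that is not Copilot chat traffic untouched.
    if flow.request.pretty_host != COPILOT_HOST:
        return
    # Re-target the request at the main OpenAI chat completions endpoint.
    flow.request.host = "api.openai.com"
    flow.request.path = "/v1/chat/completions"
    flow.request.headers["Authorization"] = "Bearer " + os.environ.get("OPENAI_API_KEY", "")
    flow.request.set_text(rewrite_body(flow.request.get_text()))
```

Keeping `rewrite_body` as a pure function makes the JSON-rewriting logic testable without running mitmproxy.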
- Python 3.6 or higher
- mitmproxy
- python-dotenv
- Clone the repository:

  ```sh
  git clone https://github.com/yourusername/custom-proxy.git
  cd custom-proxy
  ```
- Install the required packages:

  ```sh
  pip install -r requirements.txt
  ```
- Create a `.env` file in the project directory and add your OpenAI API key:

  ```
  OPENAI_API_KEY=your_openai_api_key
  ```

  Replace `your_openai_api_key` with your actual API key.
- Start the proxy server:

  ```sh
  mitmdump -s proxy.py -p 8090
  ```

  This command starts the proxy server on port 8090.
- Configure your application to use the proxy server by setting the `HTTP_PROXY` and `HTTPS_PROXY` environment variables to `http://localhost:8090`.
- Run your application, and the proxy will intercept and modify the specified requests as described.
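In a POSIX shell, the proxy-configuration step might look like this (the values match the port used above):

```shell
# Route the application's HTTP(S) traffic through the local mitmproxy
# instance started on port 8090.
export HTTP_PROXY=http://localhost:8090
export HTTPS_PROXY=http://localhost:8090
# Launch your application from this same shell so it inherits both variables.
```

Note that for HTTPS traffic the client must also trust mitmproxy's CA certificate; see the mitmproxy documentation on certificate installation.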
This project is licensed under the MIT License.