Read this in other languages: 中文.
A local implementation of OpenAI's ChatGPT Code Interpreter (Advanced Data Analysis).
OpenAI's Code Interpreter for ChatGPT (now renamed Advanced Data Analysis) is a revolutionary feature that allows the execution of Python code within the AI model. However, it executes code within an online sandbox and has certain limitations. In this project, we present Local Code Interpreter, which enables code execution on your local device, offering enhanced flexibility, security, and convenience.
- Custom Environment: Execute code in a customized environment of your choice, ensuring you have the right packages and settings.
- Seamless Experience: Say goodbye to file size restrictions and internet issues while uploading. With Local Code Interpreter, you're in full control.
- GPT-3.5 Availability: While the official Code Interpreter is only available for the GPT-4 model, Local Code Interpreter offers the flexibility to switch between GPT-3.5 and GPT-4.
- Enhanced Data Security: Keep your data more secure by running code locally, minimizing data transfer over the internet.
Executing AI-generated code without human review on your own device is not safe. You are responsible for taking measures to protect the security of your device and data (such as using a virtual machine) before launching this program. All consequences caused by using this program shall be borne by yourself.
- Clone this repository to your local device
  ```shell
  git clone https://github.com/MrGreyfun/Local-Code-Interpreter.git
  cd Local-Code-Interpreter
  ```
- Install the necessary dependencies. The program has been tested on Windows 10 and CentOS Linux 7.8, with Python 3.9.16. Required packages include:
  ```
  Jupyter Notebook 6.5.4
  gradio 3.39.0
  openai 0.27.8
  ```
  Other systems or package versions may also work. You can use the following command to directly install the required packages:
  ```shell
  pip install -r requirements.txt
  ```
  For newcomers to Python, we offer a convenient command that installs additional packages commonly used for data processing and analysis:
  ```shell
  pip install -r requirements_full.txt
  ```
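If you want to confirm that your environment roughly matches the tested versions, a quick check such as the following sketch can help (exact version matches are not strictly required):

```python
# Optional sanity check: print the installed versions of the core dependencies.
from importlib.metadata import version

for pkg in ("notebook", "gradio", "openai"):
    print(pkg, version(pkg))
```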
- Create a `config.json` file in the `src` directory, following the examples provided in the `config_example` directory.
- Configure your API key in the `config.json` file.
Please Note:
- Set the `model_name` correctly. This program relies on the function calling capability of the `0613` version of the models:
  - `gpt-3.5-turbo-0613` (and its 16K version)
  - `gpt-4-0613` (and its 32K version)

  Older versions of the models will not work. (A sketch of the kind of function-calling request involved appears after these notes.)
  For Azure OpenAI service users:
  - Set the `model_name` as your deployment name.
  - Confirm that the deployed model corresponds to the `0613` version.
- API Version Settings: If you're using the Azure OpenAI service, set the `API_VERSION` to `2023-07-01-preview` in the `config.json` file. Note that other API versions do not support the necessary function calls for this program.
- Alternate API Key Handling: If you prefer not to store your API key in the `config.json` file, you can opt for an alternate approach (see the sketch after these notes):
  - Leave the `API_KEY` field in `config.json` as an empty string: `"API_KEY": ""`
  - Set the environment variable `OPENAI_API_KEY` with your API key before running the program:
    - On Windows: `set OPENAI_API_KEY=<YOUR-API-KEY>`
    - On Linux: `export OPENAI_API_KEY=<YOUR-API-KEY>`
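For context on the `0613` requirement above: the program depends on the OpenAI function calling API. The snippet below is a minimal, hypothetical sketch of such a request using the `openai` 0.27-style API; the `execute_code` function name and its schema are illustrative only, not this project's actual definition.

```python
import openai  # openai==0.27.x style API; reads OPENAI_API_KEY from the environment

# Hypothetical function-calling request. Only the 0613 model snapshots accept
# the `functions` parameter, which is why older model versions will not work.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Plot y = x**2 for x from 0 to 10."}],
    functions=[{
        "name": "execute_code",  # illustrative name, not the project's real schema
        "description": "Run Python code and return its output",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    }],
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model asks to call the function, passing the generated code as arguments.
    print(message["function_call"]["arguments"])
```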
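The alternate API key handling described above can be pictured roughly as follows. This is a simplified sketch of the fallback behavior, not the project's actual implementation:

```python
import json
import os

# Simplified sketch: prefer API_KEY from config.json; if it is left empty,
# fall back to the OPENAI_API_KEY environment variable.
with open("config.json", encoding="utf-8") as f:
    config = json.load(f)

api_key = config.get("API_KEY") or os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("Set API_KEY in config.json or the OPENAI_API_KEY environment variable.")
```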
- Navigate to the `src` directory.
  ```shell
  cd src
  ```
- Run the command:
  ```shell
  python web_ui.py
  ```
- Access the generated link in your browser to start using the Local Code Interpreter.
Imagine uploading a data file and requesting the model to perform linear regression and visualize the data. See how Local Code Interpreter provides a seamless experience:
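For such a request, the model typically generates and runs code along these lines on your local device. This is only a sketch of what that generated code might look like; the file name `data.csv` and the column names `x`/`y` are placeholders for whatever you actually upload:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder file and column names; the real ones come from the uploaded file.
df = pd.read_csv("data.csv")

# Fit a straight line y = slope * x + intercept to the data.
slope, intercept = np.polyfit(df["x"], df["y"], deg=1)

plt.scatter(df["x"], df["y"], label="data")
plt.plot(df["x"], slope * df["x"] + intercept, color="red",
         label=f"fit: y = {slope:.2f}x + {intercept:.2f}")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```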