
Commit

Added new Config Builder scripts
haseeb-heaven committed Mar 17, 2024
1 parent eb2b4e4 commit d044d4f
Showing 4 changed files with 72 additions and 6 deletions.
10 changes: 5 additions & 5 deletions .config
@@ -1,17 +1,17 @@
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1

# The maximum number of new tokens that the model can generate.
max_tokens = 1024

# The start separator for the generated code.
start_sep = ```

# The end separator for the generated code.
end_sep = ```

# If True, the first line of the generated text will be skipped.
skip_first_line = False

# The model used for generating the code.
HF_MODEL = 'gpt-3.5-turbo'
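The key/value layout above can be read with standard tools. A minimal sketch under assumptions: `demo.config` is a hypothetical stand-in path (not part of the repository, whose actual loader is Python code), and the `sed` extraction is illustrative:

```shell
#!/bin/sh
# Recreate the key/value layout shown above in a stand-in file.
cat > demo.config << 'EOF'
temperature = 0.1
max_tokens = 1024
skip_first_line = False
HF_MODEL = 'gpt-3.5-turbo'
EOF

# Pull one value out: match the key, keep what sits between the quotes.
model=$(sed -n "s/^HF_MODEL = '\(.*\)'\$/\1/p" demo.config)
echo "$model"
```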
9 changes: 8 additions & 1 deletion README.md
@@ -248,14 +248,21 @@ To integrate your own API server for OpenAI instead of the default server, follo
4. Save and close the file.
Now, whenever you select the `gpt-3.5-turbo` or `gpt-4` model, the system will automatically use your custom server.

## **Steps to add new Hugging Face model**

### **Manual Method**
1. 📋 Copy the `.config` file and rename it to `configs/hf-model-new.config`.
2. 🛠️ Modify the model's parameters, such as `start_sep`, `end_sep`, and `skip_first_line`.
3. 📝 Set `HF_MODEL` to the model's name on Hugging Face: `HF_MODEL = 'Model name here'`.
4. 🚀 Now you can use it like this: `python interpreter.py -m 'hf-model-new' -md 'code' -e`.
5. 📁 Make sure the name passed to `-m` matches the config file name (without the `.config` extension) inside the `configs` folder.
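The manual steps above can be sketched as a shell session. This is a sketch under assumptions: the template contents below are abbreviated, and `mistralai/Mistral-7B-v0.1` stands in for whichever Hugging Face model you want to add:

```shell
#!/bin/sh
# Stand-in for the repository's .config template (abbreviated).
mkdir -p configs
printf '%s\n' "start_sep = \`\`\`" "end_sep = \`\`\`" \
  "skip_first_line = False" "HF_MODEL = 'gpt-3.5-turbo'" > .config

# Step 1: copy the template under the new config name.
cp .config configs/hf-model-new.config

# Steps 2-3: point HF_MODEL at the new Hugging Face model (portable sed:
# write to a temp file, then move it into place).
sed "s|^HF_MODEL = .*|HF_MODEL = 'mistralai/Mistral-7B-v0.1'|" \
  configs/hf-model-new.config > configs/tmp.config \
  && mv configs/tmp.config configs/hf-model-new.config

# Step 4: run the interpreter against it (requires the repository):
#   python interpreter.py -m 'hf-model-new' -md 'code' -e
```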

### **Automatic Method**
1. 🚀 Go to the `scripts` directory and run the `config_builder` script.
2. 🔧 On Linux/macOS, run `config_builder.sh`; on Windows, run `config_builder.bat`.
3. 📝 Follow the instructions and enter the model name and parameters.
4. 📋 The script will automatically create the `.config` file for you.
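Because the script reads its answers from standard input, it can also be driven non-interactively, one line per prompt. A sketch under assumptions: `config_builder_stub.sh` is a hypothetical stand-in that mirrors the real script's prompt order (config name, start separator, end separator, skip_first_line, model name), and `my-org/my-model` is a placeholder model name:

```shell
#!/bin/bash
# Stand-in that mirrors config_builder.sh's prompt order and output file.
mkdir -p scripts configs
cat > scripts/config_builder_stub.sh << 'SCRIPT'
#!/bin/bash
read -p "Enter the config file name: " config_name
read -p "Enter the start separator: " start_sep
read -p "Enter the end separator: " end_sep
read -p "Enter skip_first_line (True/False): " skip_first_line
read -p "Enter the model name: " model_name
cat > configs/"$config_name".config << EOF
start_sep = $start_sep
end_sep = $end_sep
skip_first_line = $skip_first_line
HF_MODEL = '$model_name'
EOF
SCRIPT

# One answer per prompt, in order; bash suppresses the -p prompts when
# stdin is not a terminal.
printf '%s\n' hf-model-auto '```' '```' False 'my-org/my-model' \
  | bash scripts/config_builder_stub.sh

cat configs/hf-model-auto.config
```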

## Star History

<a href="https://star-history.com/#haseeb-heaven/open-code-interpreter&Date">
26 changes: 26 additions & 0 deletions scripts/config_builder.bat
@@ -0,0 +1,26 @@
@echo off
set /p config_name="Enter the config file name: "
set /p start_sep="Enter the start separator: "
set /p end_sep="Enter the end separator: "
set /p skip_first_line="Enter skip_first_line (True/False): "
set /p model_name="Enter the model name: "

if "%model_name%"=="" (
echo Error: Model name is required.
exit /b 1
)

(
echo # The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
echo temperature = 0.1
echo # The maximum number of new tokens that the model can generate.
echo max_tokens = 1024
echo # The start separator for the generated code.
echo start_sep = %start_sep%
echo # The end separator for the generated code.
echo end_sep = %end_sep%
echo # If True, the first line of the generated text will be skipped.
echo skip_first_line = %skip_first_line%
echo # The model used for generating the code.
echo HF_MODEL = '%model_name%'
) > configs\%config_name%.config
33 changes: 33 additions & 0 deletions scripts/config_builder.sh
@@ -0,0 +1,33 @@
#!/bin/bash

read -p "Enter the config file name: " config_name
read -p "Enter the start separator: " start_sep
read -p "Enter the end separator: " end_sep
read -p "Enter skip_first_line (True/False): " skip_first_line
read -p "Enter the model name: " model_name

if [ -z "$model_name" ]; then
echo "Error: Model name is required."
exit 1
fi

cat > configs/"$config_name".config << EOF
# The temperature parameter controls the randomness of the model's output. Lower values make the output more deterministic.
temperature = 0.1
# The maximum number of new tokens that the model can generate.
max_tokens = 1024
# The start separator for the generated code.
start_sep = $start_sep
# The end separator for the generated code.
end_sep = $end_sep
# If True, the first line of the generated text will be skipped.
skip_first_line = $skip_first_line
# The model used for generating the code.
HF_MODEL = '$model_name'
EOF
