Here is the repo for running your prompts locally.
As it is now, for each new skill you have to copy the whole prompt and just replace the skill.
Create new directories under prompts and responses as needed. The file and directory arguments to both generate.py and generate_all.py are relative to these two directories (see the examples below).
- Directory structure
- Setup
- Running a single prompt (`generate.py`)
- Running all prompts in a directory (`generate_all.py`)
- `prompts` contains all prompts
  - `prompts/examples` contains example prompts
  - Create additional directories here as needed
- `responses` contains all responses
  - `responses/examples` contains example responses
  - Create additional directories here as needed
- `src` contains all Python files
  - `generate_all.py` generates responses for all prompts in a given directory
  - `generate.py` generates a response for a single given prompt
  - `prompting.py` contains functions for interacting with the Chat Completions API
  - `util.py` contains utility functions for reading from and writing to files
Store your OpenAI API key in an environment variable named OPENAI_API_KEY_KTH. It is retrieved in the scripts using get_api_key() in prompting.py.
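A minimal sketch of what that lookup might look like (the actual get_api_key() in prompting.py may differ in details; the error message here is illustrative):

```python
import os

def get_api_key():
    # Read the API key from the OPENAI_API_KEY_KTH environment variable,
    # mirroring what prompting.py's get_api_key() is described as doing.
    # The real implementation may differ.
    key = os.environ.get("OPENAI_API_KEY_KTH")
    if not key:
        raise RuntimeError("Set the OPENAI_API_KEY_KTH environment variable")
    return key
```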
To run the scripts, you need to set up a Python virtual environment in the .venv directory. From the root qbl-generate/ directory, use the following command to create the environment:
```
python3 -m venv .venv
```

Then activate it:
```
source .venv/bin/activate
```

You should then see a little `(.venv)` to the left in the CLI prompt, like so:
```
(.venv) [orn:~/Documents/kth/da150x-kex/question-generation/src]%
```

The final step is to install the dependencies in requirements.txt. Do this by running:
```
pip install -r requirements.txt
```

After that you should be all set to run the scripts in the virtual environment!
To deactivate the virtual environment after you are done, simply use:
```
deactivate
```

Make sure to complete the setup first.
To generate a response for a single prompt, use:
```
python generate.py <file-path-without-extension> [<number-of-questions>]
```

This will generate a response from the prompt in prompts/<file-path-without-extension>.in and put the result in responses/<file-path-without-extension>.out.
If not specified, the default number of questions is 3.
NOTE: The file name is given without a file extension. Input files (prompts) end in .in and output files (generated questions) end in .out for clarity; otherwise their names are identical.
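The naming convention can be sketched as a tiny helper (hypothetical, shown only to illustrate the convention; the scripts build these paths internally):

```python
def prompt_and_response_paths(name):
    # Map a file path given without extension to the corresponding
    # prompt (.in) and response (.out) files.
    # Hypothetical helper, for illustration only.
    return f"prompts/{name}.in", f"responses/{name}.out"
```

For example, `examples/using_waitgroups` maps to prompts/examples/using_waitgroups.in and responses/examples/using_waitgroups.out.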
An example for prompts/examples/using_waitgroups.in:
```
# 3 questions by default
python generate.py examples/using_waitgroups

# 5 questions
python generate.py examples/using_waitgroups 5
```

Make sure to complete the setup first.
To generate responses for all prompts in a directory, use:
```
python generate_all.py <directory-path> [<number-of-questions>]
```

This will generate responses for all prompts in prompts/<directory-path> and put the results in responses/<directory-path>.
If not specified, the default number of questions per response is 3.
NOTE: The directory path must end with a slash; otherwise it is treated as a file.
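The trailing-slash requirement suggests a simple check along these lines (an assumption about the scripts' logic, not the actual implementation):

```python
def is_directory_argument(path):
    # Arguments ending in "/" are treated as directories;
    # everything else is treated as a single prompt file.
    # Assumed logic, for illustration only.
    return path.endswith("/")
```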
An example for prompts/examples/:
```
# 3 questions by default
python generate_all.py examples/

# 5 questions
python generate_all.py examples/ 5
```