A script for generating pure question-based learning (pQBL) quiz questions using ChatGPT.


QBL Question Generation

This is the repository for running your prompts locally.

Currently, for each new skill you have to copy the whole prompt and replace only the skill.

Create new directories as needed in prompts and responses. The file and directory arguments to both generate.py and generate_all.py are relative to these two directories (see the examples below).

Directory structure

  • prompts contains all prompts
    • prompts/examples contains example prompts
    • Create additional directories here as needed
  • responses contains all responses
    • responses/examples contains example responses
    • Create additional directories here as needed
  • src contains all Python files
    • generate_all.py generates responses for all prompts in a given directory
    • generate.py generates a response for a single given prompt
    • prompting.py contains functions for interacting with the Chat Completions API
    • util.py contains utility functions for reading and writing to files
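To give an idea of how prompting.py talks to the Chat Completions API, here is a minimal sketch. The function names (build_messages, generate_response), the system prompt, and the model choice are illustrative assumptions, not the repository's actual code:

```python
def build_messages(prompt_text: str, num_questions: int) -> list:
    """Wrap the prompt text in a chat message list, asking for N questions.

    Illustrative sketch; the real prompting.py may structure this differently.
    """
    return [
        {"role": "system", "content": "You generate pQBL quiz questions."},
        {
            "role": "user",
            "content": f"{prompt_text}\n\nGenerate {num_questions} questions.",
        },
    ]


def generate_response(api_key: str, prompt_text: str, num_questions: int = 3) -> str:
    """Send the prompt to the Chat Completions API and return the reply text."""
    # Imported here so the sketch loads even without the openai package installed.
    from openai import OpenAI

    client = OpenAI(api_key=api_key)
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=build_messages(prompt_text, num_questions),
    )
    return completion.choices[0].message.content
```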

Setup

OpenAI API key

Store your OpenAI API key in an environment variable named OPENAI_API_KEY_KTH. It is retrieved in the script using the get_api_key() function in prompting.py.
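As a rough sketch, get_api_key() presumably reads that environment variable along these lines (the error handling shown here is an assumption, not necessarily what prompting.py does):

```python
import os


def get_api_key() -> str:
    """Read the OpenAI API key from the OPENAI_API_KEY_KTH environment variable.

    Sketch only; the real prompting.py may handle a missing key differently.
    """
    key = os.environ.get("OPENAI_API_KEY_KTH")
    if key is None:
        raise RuntimeError(
            "Set the OPENAI_API_KEY_KTH environment variable before running."
        )
    return key
```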

Virtual environment

To run the scripts, you need to set up a Python virtual environment in the .venv directory. From the root qbl-generate/ directory, use the following command to create the environment:

python3 -m venv .venv

Then activate it:

source .venv/bin/activate

You should then see a (.venv) prefix at the left of your CLI prompt, like so:

(.venv) [orn:~/Documents/kth/da150x-kex/question-generation/src]%

The final step is to install the dependencies in requirements.txt. Do this by running:

pip install -r requirements.txt

After that you should be all set to run the scripts in the virtual environment!

To deactivate the virtual environment after you are done, simply use:

deactivate

Running a single prompt (generate.py)

Make sure to complete the setup first.

To generate a response for a single prompt, use:

python generate.py <file-path-without-extension> [<number-of-questions>]

This will generate a response from the prompt in prompts/<file-path-without-extension>.in and put the result in responses/<file-path-without-extension>.out.

If not specified, the default number of questions is 3.

NOTE: The file name is given without a file extension. Input files (prompts) end in .in and output files (generated questions) end in .out for clarity; otherwise their names are identical.
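The naming convention above can be sketched as a small path-mapping helper. The function name resolve_paths is hypothetical (util.py may do this differently), but the prompts/…​.in → responses/…​.out mapping is exactly what the script describes:

```python
def resolve_paths(name: str) -> tuple:
    """Map an extension-free name like 'examples/using_waitgroups' to its
    prompt (.in) and response (.out) file paths.

    Hypothetical helper illustrating the convention; util.py may differ.
    """
    return f"prompts/{name}.in", f"responses/{name}.out"
```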

Example for prompt file

An example for prompts/examples/using_waitgroups.in:

# 3 questions by default
python generate.py examples/using_waitgroups

# 5 questions
python generate.py examples/using_waitgroups 5

Running all prompts in a directory (generate_all.py)

Make sure to complete the setup first.

To generate responses for all prompts in a directory, use:

python generate_all.py <directory-path> [<number-of-questions>]

This will generate responses for all prompts in prompts/<directory-path> and put the results in responses/<directory-path>.

If not specified, the default number of questions per response is 3.

NOTE: The directory path must end with a slash; otherwise it is treated as a file.
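The trailing-slash rule suggests the script distinguishes directories from files by string inspection alone, roughly like this (a sketch of the convention; the actual check in generate_all.py may differ):

```python
def is_directory_argument(path: str) -> bool:
    """Treat the argument as a directory only when it ends with a slash.

    Sketch of the convention described above, not the repository's real code.
    """
    return path.endswith("/")
```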

Example for prompt directory

An example for prompts/examples/:

# 3 questions by default
python generate_all.py examples/

# 5 questions
python generate_all.py examples/ 5
