A client for OpenAI's GPT-3 API, intended for ad hoc testing of prompts without using the web interface. This approach allows more customization of the resulting output, which is ideal for livestreaming organic, non-cherry-picked outputs.
In addition to generating text via the API, the client:
- Streams text generation as soon as it's generated (via httpx)
- Prints the generated text to the console, bolding the prompt and coloring the text by prediction quality (via rich)
- Automatically saves each generated text to a file
- Automatically creates a sharable, social-media-optimized image (via imgmaker)
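For context on the streaming feature: the OpenAI API can stream completions back as server-sent events, one JSON chunk per line. A minimal sketch of pulling the generated text out of each streamed line might look like the following (the helper name and parsing details here are assumptions for illustration, not the client's actual code):

```python
import json


def parse_stream_line(line: str):
    """Extract the generated text from one server-sent-event line.

    Returns None for non-data lines and the final [DONE] sentinel.
    Hypothetical sketch; field names follow the Completions API format.
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        return None
    data = json.loads(payload)
    return data["choices"][0]["text"]
```

In practice, each parsed chunk would be printed immediately as it arrives, which is what makes the console output feel live.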
NB: This project is intended more for personal use, and as a result may not be as actively supported as my other projects intended for productionization.
First, install the Python requirements:
pip3 install httpx rich imgmaker
Then download this repository and cd into the downloaded folder.
Optionally, if you want to generate images of the generated text, you'll also need to have the latest version of Chrome installed and download the corresponding chromedriver (see the imgmaker repo for more information):
imgmaker chromedriver
You will also need to export your OpenAI API key to the OPENAI_API_SECRET_KEY
environment variable (this is more secure than putting it in a plaintext file). In bash/zsh, you can do this by:
export OPENAI_API_SECRET_KEY=<key>
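A minimal sketch of how a client can read the key from that environment variable and fail loudly if it is unset (the helper name is hypothetical):

```python
import os


def get_api_key(env=os.environ) -> str:
    """Fetch the OpenAI API key from the environment.

    Raises instead of silently sending unauthenticated requests.
    """
    key = env.get("OPENAI_API_SECRET_KEY")
    if not key:
        raise RuntimeError(
            "Set the OPENAI_API_SECRET_KEY environment variable before running."
        )
    return key
```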
This GPT-3 client is intended to be interacted with entirely via the command line. (Due to how rich works, it will not work well in a notebook.)
You can invoke the client via:
python3 gpt3.py
The generation output will be stored in a text file in the txt_output folder, corresponding to a hash of the prompt and the temperature (e.g. e286222c__0_7.txt).
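The example filename suggests an 8-character prompt hash joined to the temperature with the dot replaced by an underscore. One way such a name could be derived (the exact hash function the client uses is an assumption here):

```python
import hashlib


def output_filename(prompt: str, temperature: float) -> str:
    """Build a name like e286222c__0_7.txt from a prompt and temperature.

    Assumes an 8-hex-character digest prefix; the client's actual hash
    function may differ.
    """
    prompt_hash = hashlib.md5(prompt.encode("utf-8")).hexdigest()[:8]
    temp_part = str(temperature).replace(".", "_")
    return f"{prompt_hash}__{temp_part}.txt"
```

Hashing the prompt keeps outputs for the same prompt/temperature pair in one file while avoiding filesystem-unsafe characters.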
Passing --image will render the output into a social-media-sharable image.
python3 gpt3.py --image --max_tokens 256 --include_coloring True
By default, it will generate from the prompt "Once upon a time". To specify a custom prompt, you can do:
python3 gpt3.py --prompt "I am a pony"
For longer prompts, you can put the prompt in a prompt.txt file instead:
python3 gpt3.py --prompt "prompt.txt" --max_tokens 256
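Since --prompt accepts either literal text or a file path, the client has to decide which it was given. A sketch of that dispatch (the function name and the exact check are assumptions):

```python
import os


def resolve_prompt(prompt: str) -> str:
    """Return the prompt text, reading it from disk if a file path was given."""
    if prompt.endswith(".txt") and os.path.isfile(prompt):
        with open(prompt, "r", encoding="utf-8") as f:
            return f.read().strip()
    return prompt
```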
The full list of parameters:

- image: Whether to render an image of the generated text (requires imgmaker) [Default: False]
- prompt: Prompt for GPT-3; either text or a path to a file [Default: "Once upon a time"]
- temperature: Generation creativity; the higher, the crazier [Default: 0.7]
- max_tokens: Number of tokens generated [Default: 32]
- stop: Token used to stop generation [Default: ""]
- bg: RGB tuple representing the base for coloration; you should set it to the background color of your terminal [Default: (31, 36, 40)]
- accent: Accent color blended with the bg to indicate prediction strength [Default: (0, 64, 0)]
- interactive: Enters an "interactive" mode where you can provide prompts repeatedly [Default: False]
- pngquant: If using image, uses pngquant to massively compress it; requires pngquant installed on your system [Default: False]
- output_txt: Specify the output text file, overriding the default behavior if not None [Default: None]
- output_img: Specify the output image file, overriding the default behavior if not None [Default: None]
- include_prompt: Include the prompt in the generation output [Default: False]
- include_coloring: Use probability coloring [Default: True]
- watermark: Specify watermark text on the generated image; supports Markdown formatting [Default: "Generated using GPT-3 via OpenAI's API"]
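To illustrate how the bg/accent pair can indicate prediction strength, here is a sketch that additively blends the accent into the background color in proportion to a token's probability (the blend formula is an assumption for illustration, not the client's exact math):

```python
def blend(bg, accent, probability):
    """Blend the accent color into the background, scaled by token probability.

    probability is in [0, 1]; each channel is clamped to the 0-255 byte range.
    """
    return tuple(
        min(255, round(b + a * probability)) for b, a in zip(bg, accent)
    )


# The documented defaults: dark terminal background, green accent.
BG = (31, 36, 40)
ACCENT = (0, 64, 0)
```

With these defaults, a high-probability token shifts toward green while a low-probability token stays near the terminal background, which is why bg should match your terminal's actual background color.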
Max Woolf (@minimaxir)
Max's open-source projects are supported by his Patreon and GitHub Sponsors. If you found this project helpful, any monetary contributions to the Patreon are appreciated and will be put to good creative use.
MIT
This repo has no affiliation with OpenAI.