This repository contains the code for the blog post originally published on the Stream blog here.
All the background can be found there.
To get the project up and running, follow the post. In addition, you need two things:
- a Python setup with the required packages installed
- an OpenAI API key
A more thorough explanation can be found in the blog post, but here is the quick way to set up Python (we tested with version 3.9). Run the following commands in your project folder:
python -m venv llm-env
source llm-env/bin/activate # this is for macOS/Linux, on Windows run llm-env\Scripts\activate
pip install -r requirements.txt
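If you want to double-check that the virtual environment and its dependencies installed cleanly before moving on, a quick sanity check like the sketch below can help. It assumes the openai package is one of the project's requirements; adjust the import to whatever requirements.txt actually lists.

```python
# env_check.py: a quick sanity check for the freshly created environment.
# The openai import is an assumption; swap it for any package from requirements.txt.
import sys

print(f"Python {sys.version.split()[0]} at {sys.executable}")

try:
    import openai  # assumed to be listed in requirements.txt
    print(f"openai package version: {openai.__version__}")
except ImportError:
    print("openai not found - activate llm-env and run pip install -r requirements.txt first")
```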
Then, get your OpenAI API key from their site. Once you have it, open up .env.local and replace <add-openai-api-key-here> with your key (no quotation marks needed).
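To confirm the key is actually picked up before running the full project, here is a minimal sketch. It assumes the key lives in .env.local under a variable named OPENAI_API_KEY and that the python-dotenv and openai packages are installed; the variable name, packages, and model below are illustrative assumptions, so match them to what the repo actually uses.

```python
# key_check.py: verify the OpenAI key in .env.local works end to end.
# OPENAI_API_KEY, python-dotenv, and the model name are assumptions for this sketch;
# adjust them to match the project's actual configuration.
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv(".env.local")  # load the key you just added
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# a tiny request just to confirm the key is valid
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; use whatever the project expects
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(response.choices[0].message.content)
```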
With that, you have the project ready to run. Enjoy using it and let us know what you build with it.
Stream allows developers to rapidly deploy scalable feeds, chat messaging, and video with an industry-leading 99.999% uptime SLA guarantee.
Stream provides UI components and state handling that make it easy to build video calling for your app. All calls run on Stream’s network of edge servers around the world, ensuring optimal latency and reliability.
Stream is free for most side and hobby projects. To qualify, your project/company needs to have < 5 team members and < $10k in monthly revenue. Makers get $100 in monthly credit for video for free. For more details, check out the Maker Account.