For a better user experience, it is best to send the text generated by the OpenAI completion model to the user as soon as it is generated. When the model's response is long, returning the whole response at once can take some time.
In this project, we will see how to stream the response to the client as it is generated.
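As a rough illustration of the idea (not necessarily how this repo implements it), the sketch below assumes the official `openai` Node SDK (v4) and a Next.js App Router route handler at a hypothetical `app/api/completion/route.ts`; the route path, model name, and request shape are all assumptions:

```ts
// app/api/completion/route.ts (hypothetical path)
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Ask OpenAI for a streamed completion instead of waiting for the full text.
  const stream = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }],
    stream: true,
  });

  const encoder = new TextEncoder();

  // Forward each chunk to the client as soon as it arrives.
  const readable = new ReadableStream({
    async start(controller) {
      for await (const chunk of stream) {
        const text = chunk.choices[0]?.delta?.content ?? "";
        controller.enqueue(encoder.encode(text));
      }
      controller.close();
    },
  });

  return new Response(readable, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```

On the client, the stream can then be read incrementally with the Fetch API, for example:

```ts
// Read the streamed chunks as they arrive and append them to the UI.
const res = await fetch("/api/completion", {
  method: "POST",
  body: JSON.stringify({ prompt: "Write a long story" }),
});
const reader = res.body!.getReader();
const decoder = new TextDecoder();
let text = "";
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  text += decoder.decode(value);
  // e.g. setOutput(text) in a React component
}
```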
- OpenAI API Key
- Clone the repo
- Run `npm install` to install all the dependencies
- Create a `.env.local` from the `env-example` file: `cp env-example .env.local`
- Update the OpenAI key in the `.env.local` file (see the example after this list)
- Run `npm run dev` to start the project
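The `.env.local` file is where the OpenAI key is stored. The exact variable name is defined in `env-example`; the name below is only a hypothetical placeholder:

```
# Hypothetical variable name; check env-example for the exact key the project expects
OPENAI_API_KEY=sk-...
```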
- NextJS
- TailwindCSS
- OpenAI