
An opinionated hybrid boilerplate with python backend and react-ts frontend, dockerized for deployment. Uses language model chaining to sequentially generate multi-modal (images and text) content from micro prompts.


sinhaGuild/storyboard-ai


Python-Typescript LLM Application

Boilerplate

Foundationally this is a boilerplate project, intended to help bootstrap a fullstack hybrid application with a Python-based backend and a TypeScript-React front end. For context, we're going to build an auto-storyboarding AI using large language model chaining techniques. It takes in a short-form directional prompt and generates long-form, full-length stories.
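The chaining idea can be sketched in plain Python. This is an illustration only: `call_llm` is a stub standing in for a real model call, and the three chain steps (outline, story, scenes) are assumed stages, not the project's exact prompts (the real backend uses LangChain).

```python
# Minimal sketch of sequential prompt chaining: each step's output
# feeds into the next step's prompt.

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return f"<completion for: {prompt}>"

def chain(micro_prompt: str) -> dict:
    # Step 1: expand the micro prompt into an outline.
    outline = call_llm(f"Write a story outline for: {micro_prompt}")
    # Step 2: expand the outline into a full story.
    story = call_llm(f"Expand this outline into a full story: {outline}")
    # Step 3: derive scene descriptions suitable for image generation.
    scenes = call_llm(f"Describe key scenes to illustrate: {story}")
    return {"outline": outline, "story": story, "scenes": scenes}

result = chain("a lighthouse keeper who collects storms")
```

Swapping the stub for real API calls (and the scene descriptions into an image model) gives the text-plus-images pipeline described above.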

Inference performance will vary depending on the service, but here are some rough benchmarks:

  1. Azure OpenAI private endpoint (what I'm using) ~ 24 secs
  2. OpenAI (GPT, DALL·E) API endpoints ~ 92 secs
  3. With Stable Diffusion (instead of DALL·E) ~ 232 secs

Demo

Init

Response

Stack

Backend

  • FastAPI server with pydantic typed models for routing requests
  • LangChain LLM library for sequential prompt chaining
  • gpt-3.5-turbo – reference text transformer
  • DALL·E – reference ViT (image generation)
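The typed-model idea is sketched below with stdlib dataclasses so it runs anywhere; the real server declares equivalent pydantic models so FastAPI can validate and serialize them automatically. Field names here are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative request/response shapes for a storyboard endpoint.
# Hypothetical fields; the real backend's schema may differ.

@dataclass
class StoryRequest:
    prompt: str          # short-form "micro prompt" from the UI
    max_scenes: int = 4  # how many illustrated scenes to generate

@dataclass
class StoryResponse:
    story: str  # long-form generated text
    image_urls: list = field(default_factory=list)  # one image per scene

def handle_request(req: StoryRequest) -> StoryResponse:
    # Placeholder: the real handler runs the LLM chain and image model.
    return StoryResponse(story=f"Story for: {req.prompt}")
```

With pydantic, the same shapes get request validation and JSON serialization for free when bound to a FastAPI route.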

Frontend

  • Vite – React bootstrapping with TypeScript support, plus Tailwind CSS.

  • Radix UI Primitives – scalable, high-performance UI primitives.

  • Redux Toolkit – global state management for input and reusable components.

Deployment [optional]

Deployed as a set of Docker containers on Heroku cloud.

  • Docker Compose – containerizes FastAPI (served by uvicorn) and React.

  • Heroku CLI – allows deployment of containerized apps through git.
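A compose file for this two-service setup typically looks like the following sketch. Service names, ports, and build paths are assumptions for illustration, not copied from the repo's actual `docker-compose.yml`.

```yaml
services:
  backend:
    build: ./backend          # FastAPI app served by uvicorn
    env_file: ./backend/.env  # holds OPENAI_API_KEY
    ports:
      - "8000:8000"
  frontend:
    build: ./frontend         # Vite/React app
    ports:
      - "5173:5173"
    depends_on:
      - backend
```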

How to use

  1. Clone this repo.

```sh
git clone https://github.com/sinhaguild/storyboard-ai
```

  2. Create a .env file.

```sh
# .env
mv backend/.env.example backend/.env
```

  3. Set the OpenAI API key.

```sh
OPENAI_API_KEY='your-api-key-here'
```

  4. Run (with Docker).

```sh
docker compose up -d
```

  5. Clean up.

```sh
docker compose down -v
```

Running without docker

  1. Clone the repo and set environment variables.

  2. Run the server.

```sh
cd backend
python3 -m venv .venv
source .venv/bin/activate
uvicorn app.main:app --reload
```

  3. Run the client.

```sh
cd frontend
npm run dev
```
