Video generator for the Lysterfield Lake project.
🚨 Please note: 🚨 This repository is largely spaghetti code, intended purely as a resource for anyone looking to dive into aspects of how Lysterfield Lake was created. It represents the tasks completed on a Mac, while the AI heavy lifting was done using Cog on a PC with an RTX 3060 GPU. Files specific to that process are in /pc-settings/. That said, it should paint some of the picture (see what I did there?) of how the project works.
The client for the app is available at superhighfives/lysterfield-lake
✋ You can learn more about how the project works here.
The pipeline for Lysterfield Lake is made up of a collection of bash scripts. They run a mixture of Python scripts and Unix applications (like ffmpeg) to output videos and images.
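As a rough illustration of how one pipeline stage might look, here's a minimal sketch of a bash script that prepares an ffmpeg frame-extraction command. The file names, frame rate, and directory layout are assumptions for the example (not taken from the repo), and the command is printed rather than executed so the sketch stands on its own:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of one pipeline stage: extracting frames from a
# source video with ffmpeg before handing them to a Python script.
# INPUT, FRAMES_DIR, and the fps value are assumptions for illustration.
set -euo pipefail

INPUT="source.mp4"
FRAMES_DIR="frames"

mkdir -p "$FRAMES_DIR"

# Build (rather than run) the ffmpeg invocation, so the sketch works
# even without the input video or ffmpeg installed.
build_extract_cmd() {
  echo "ffmpeg -i $1 -vf fps=24 $2/frame_%05d.png"
}

build_extract_cmd "$INPUT" "$FRAMES_DIR"
```

In the real pipeline, stages like this are chained together, with each script's output directory feeding the next script's input.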
On the Python side, some data is passed to and from the AI models using Cog. The pipeline runs three models, which it expects at the following locations on your network:
| Model | Replicate | GitHub | Location |
|---|---|---|---|
| DiffusionCLIP | gwang-kim/diffusionclip | gwang-kim/DiffusionCLIP | http://localhost:5000 |
| ZoeDepth | cjwbw/zoedepth | chenxwh/ZoeDepth | http://localhost:5005 |
| Real-ESRGAN | cjwbw/real-esrgan | xinntao/Real-ESRGAN | http://localhost:5010 |
The other models should be added to the root folder, at the paths referenced in the source files.
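Each Cog server exposes an HTTP `POST /predictions` endpoint that accepts a JSON body of the form `{"input": {...}}`. The exact input fields vary per model, so the `image` field below is an assumption for illustration; the payload-building helper keeps the sketch runnable without a live server:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of calling one of the Cog model servers over HTTP.
# Cog's prediction endpoint takes {"input": {...}}; the "image" field
# name here is an assumption and will differ per model.
set -euo pipefail

# Build the JSON payload for a prediction request.
payload_for() {
  printf '{"input": {"image": "%s"}}' "$1"
}

# Example (not executed here): upscale a frame with Real-ESRGAN on port 5010.
# curl -s -X POST http://localhost:5010/predictions \
#   -H "Content-Type: application/json" \
#   -d "$(payload_for "data:image/png;base64,$(base64 < frame.png)")"

payload_for "frame.png"
```

The same pattern works against ports 5000 (DiffusionCLIP) and 5005 (ZoeDepth), with each model's own input schema.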