
ez-text2vid

A Streamlit app to easily run the ModelScope text-to-video diffusion model with customized video length, fps, and dimensions. It can run on 4 GB video cards, as well as on CPU and Apple M-series chips.


Installation

Before installing, make sure you have working git and conda installations. If you have an Nvidia graphics card, you should also install CUDA.

Install Steps:

  1. Open a terminal on your machine. On Windows, you should use the Anaconda Prompt terminal.

  2. Clone this repo using git:

    git clone https://github.com/kpthedev/ez-text2video.git
    
  3. Open the folder:

    cd ez-text2video
    
  4. Create the conda environment:

    conda env create -f environment.yaml
    

Running

To run the app, make sure you are in the ez-text2video folder in your terminal. Then run these two commands to activate the conda environment and start the Streamlit app:

conda activate t2v
streamlit run app.py

This should open the webUI in your browser automatically.

The very first time you run the app, it will automatically download the models from Huggingface. This may take a few minutes (~5 mins).
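Under the hood, the app drives the ModelScope model through the diffusers library. As a rough illustration of what a generation call looks like, here is a minimal standalone sketch; the checkpoint name (`damo-vilab/text-to-video-ms-1.7b`), parameter defaults, and the `frames_for`/`generate` helpers are assumptions for this example, not code taken from the app:

```python
def frames_for(seconds: float, fps: int) -> int:
    """Total frames for the requested clip duration (hypothetical helper)."""
    return max(1, int(round(seconds * fps)))

def generate(prompt: str, seconds: float = 2.0, fps: int = 8,
             height: int = 256, width: int = 256, out: str = "out.mp4") -> None:
    # Heavy imports are local so the helper above stays importable without them.
    import torch
    from diffusers import DiffusionPipeline
    from diffusers.utils import export_to_video

    pipe = DiffusionPipeline.from_pretrained(
        "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
    )
    pipe.enable_model_cpu_offload()  # offloading is what lets ~4 GB cards cope

    result = pipe(
        prompt,
        num_frames=frames_for(seconds, fps),
        height=height,
        width=width,
    )
    export_to_video(result.frames[0], out, fps=fps)

# Example (downloads the model on first use):
# generate("a corgi running on the beach", seconds=2.0, fps=8)
```

The Streamlit UI essentially exposes the same length, fps, and dimension knobs as form controls.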

License

All the original code that I have written is licensed under the GPL. For the text-to-video model's license and conditions, please refer to the model card.

Changelog

  • Mar 31, 2023 - Initial release
  • Apr 1, 2023 - Switch to conda install
  • Jun 2, 2023 - Move to stable version of diffusers
