---
title: Real-Time Latent Consistency Model Image-to-Image
emoji: 🖼️🖼️
colorFrom: gray
colorTo: indigo
sdk: docker
pinned: false
suggested_hardware: a10g-small
---
This demo showcases the Latent Consistency Model (LCM) using Diffusers with an MJPEG stream server. You need a webcam to run this demo. 🤗

You need CUDA and Python 3.10, or a Mac with an M1/M2/M3 chip.
You can set environment variables to configure the app:

- `TIMEOUT`: limit the user session timeout
- `SAFETY_CHECKER`: set to `False` to disable the NSFW filter
- `MAX_QUEUE_SIZE`: limit the number of users on the current app instance
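A sketch of how these variables might be read inside the app (the defaults shown here are assumptions, not necessarily the demo's actual values):

```python
import os

# Assumed defaults: no timeout, safety checker on, unlimited queue.
TIMEOUT = float(os.environ.get("TIMEOUT", 0))               # 0 = no timeout
SAFETY_CHECKER = os.environ.get("SAFETY_CHECKER", "True") == "True"
MAX_QUEUE_SIZE = int(os.environ.get("MAX_QUEUE_SIZE", 0))   # 0 = unlimited
```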
## Running Locally

### Image to Image

```bash
python -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload
```
### Text to Image

```bash
python -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
uvicorn "app-txt2img:app" --host 0.0.0.0 --port 7860 --reload
```
Or with environment variables:

```bash
TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload
```
If you're running locally and want to test it on Mobile Safari, the web server needs to be served over HTTPS. Generate a self-signed certificate and pass it to uvicorn:

```bash
openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload --log-level info --ssl-certfile=certificate.pem --ssl-keyfile=key.pem
```
## Docker

You need the NVIDIA Container Toolkit for Docker.

```bash
docker build -t lcm-live .
docker run -ti -p 7860:7860 --gpus all lcm-live
```

Or with environment variables:

```bash
docker run -ti -e TIMEOUT=0 -e SAFETY_CHECKER=False -p 7860:7860 --gpus all lcm-live
```
## Demo on Hugging Face

[Real-Time Latent Consistency Model](https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model)