This is a minimalist container running the Invoke AI web UI. No nonsense included. It just starts the Invoke AI web UI and exposes the port.
- Run this Invoke AI container on runpod
- See https://www.invoke.com/ for Invoke AI details
- This container is not created by or endorsed by Invoke.
- It is essentially the installation manual at https://invoke-ai.github.io/InvokeAI/installation/manual/#walkthrough turned into a Dockerfile.
- Install and use starter models, models from Hugging Face, or any model via URL, e.g. from civitai
- Benchmarks on runpod on an A40:
  - SD1: image generation wall clock time (512x512, 30 steps) <3s
  - SDXL/Pony: image generation wall clock time (1024x1024, 30 steps) <10s
  - FLUX models: image generation wall clock time (1024x1024, 30 steps) <30s
- Models, the database, outputs, ... in short everything you might want to persist is stored in the volume mounted under
/workspace
- /invoke/ contains e.g. the Invoke AI config generated at build time
- Invoke AI web service exposed on port 8080 (no login)
- Exposes port 22 for scp (through runpod port forwarding)
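Since container port 22 is forwarded by runpod to a public high port, files can be pulled out of the pod with plain scp. A minimal sketch, assuming hypothetical host and port values (the real ones appear in the pod's Connect dialog) and an illustrative output path:

```shell
# Hypothetical values: copy the actual host and forwarded port from your
# runpod pod's "Connect" dialog; container port 22 maps to a high public port.
RUNPOD_HOST="${RUNPOD_HOST:-203.0.113.10}"   # placeholder IP
RUNPOD_SSH_PORT="${RUNPOD_SSH_PORT:-40022}"  # placeholder forwarded port

# Pull a generated image out of the persistent /workspace volume
# (the output path below is illustrative; check your pod for the real layout).
scp_cmd="scp -P $RUNPOD_SSH_PORT root@$RUNPOD_HOST:/workspace/outputs/example.png ."
echo "$scp_cmd"
```

The sketch echoes the command instead of running it, so it stays side-effect free; run the printed command once the placeholders are replaced with your pod's values.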
- This repository contains a script version of the manual found at https://invoke-ai.github.io/InvokeAI/installation/manual/ that installs Invoke AI into a fresh Ubuntu image.
- This script was transformed into a Dockerfile.
Run locally:
- Make sure the NVIDIA Container Toolkit is installed: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
docker run --gpus all --rm -it --name invoke -p 8080:8080 -v YOUR_LOCAL_MODEL_DIR:/workspace ghcr.io/echsecutor/gen_ai_container/invoke:main
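The run command above can be wrapped in a small launcher script. A minimal sketch, assuming a hypothetical local directory (`$HOME/invoke-data`) for the persistent volume:

```shell
#!/bin/sh
# Hypothetical local directory for everything persisted under /workspace;
# override via the MODEL_DIR environment variable.
MODEL_DIR="${MODEL_DIR:-$HOME/invoke-data}"
IMAGE="ghcr.io/echsecutor/gen_ai_container/invoke:main"

mkdir -p "$MODEL_DIR"

# Assemble the run command; it is echoed here so the sketch stays
# side-effect free. Run the printed command to actually start the container.
cmd="docker run --gpus all --rm -it --name invoke -p 8080:8080 -v $MODEL_DIR:/workspace $IMAGE"
echo "$cmd"
```

Once the container is up, the web UI is reachable at http://localhost:8080.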
- For the ComfyUI container: models, custom nodes, outputs, ... in short everything you might want to persist is stored in the volume mounted under
/workspace
Run locally:
- Make sure the NVIDIA Container Toolkit is installed: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
docker run --gpus all --rm -it --name invoke -p 8080:8080 -v YOUR_LOCAL_MODEL_DIR:/workspace ghcr.io/echsecutor/gen_ai_container/comfy:main
Or just use the provided script.
Copyright 2025 Sebastian Schmittner
The Dockerfiles/scripts in this repository are not endorsed by anyone but the author.
The Invoke AI container bundles InvokeAI, which ships under the Apache 2.0 License. All credit for Invoke AI goes to the Invoke AI team.
The ComfyUI container bundles ComfyUI, which ships under GPLv3. All credit goes to the ComfyUI team.
All code in this repository is distributed under the MIT License.