
Running Containerized (Docker)

Benjamin Paine edited this page Jan 26, 2024 · 5 revisions

Since Enfugue requires a GPU for effective operation, your host machine must have a GPU and be able to run GPU-accelerated Docker containers. At present, this is only possible using the NVIDIA Container Toolkit, which is available for Linux machines. You must install this toolkit and then restart the Docker daemon before you can launch Enfugue with GPU acceleration.
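As a sketch, on a Debian/Ubuntu host with systemd, the toolkit can be installed and the daemon restarted like this (this assumes NVIDIA's apt repository is already configured; see NVIDIA's documentation for other distributions):

```shell
# Install the NVIDIA Container Toolkit (assumes NVIDIA's apt
# repository has been added per their installation guide).
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker, then restart the daemon
# so the new runtime is picked up.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```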

The containerized version includes TensorRT support.

Pulling the Container

The container is available directly from the GitHub Container Registry and can be pulled like this:

docker pull ghcr.io/painebenjamin/app.enfugue.ai:latest
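If you prefer a reproducible setup, you can pull a specific release tag instead of latest (the tag shown here is illustrative; check the repository's releases for valid tags):

```shell
# Pin a specific release instead of the moving `latest` tag.
docker pull ghcr.io/painebenjamin/app.enfugue.ai:0.2.1
```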

Testing Capabilities

To check if the container is working and can communicate with your GPU, you can run the version command in the container.

docker run --rm --gpus all --runtime nvidia ghcr.io/painebenjamin/app.enfugue.ai:latest version                         

This is the expected result:

Enfugue v.0.2.1
Torch v.1.13.1+cu117

AI/ML Capabilities:
---------------------
Device type: cuda
CUDA: Ready
TensorRT: Ready
DirectML: Unavailable
MPS: Unavailable
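If you want to script this check, a small helper can scan the command's output for the CUDA line. This is a sketch; the check_cuda_ready function is our own illustration, not part of Enfugue:

```shell
# check_cuda_ready: read the `version` output on stdin and succeed
# only if CUDA support is reported as ready.
check_cuda_ready() {
    grep -q '^CUDA: Ready$'
}

# Usage (requires the container and a working GPU runtime):
# docker run --rm --gpus all --runtime nvidia \
#     ghcr.io/painebenjamin/app.enfugue.ai:latest version \
#     | check_cuda_ready && echo "GPU OK" || echo "GPU not available"
```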

Running

The basic run command is:

docker run --rm --gpus all --runtime nvidia -v ${{ YOUR_CACHE_DIRECTORY }}:/home/enfugue/.cache -p 45555:45555 ghcr.io/painebenjamin/app.enfugue.ai:latest run

What does this command do?

  1. Passes the run command to the docker command line tool.
  2. Includes --rm to automatically remove the container when it exits, so stopped containers don't accumulate.
  3. Includes --gpus all to let the docker container see attached graphics cards.
  4. Includes --runtime nvidia to force Docker to use the Nvidia (GPU-capable) runtime.
  5. Includes -v ${{ YOUR_CACHE_DIRECTORY }}:/home/enfugue/.cache to mount the passed directory as ENFUGUE's cache directory, which is the parent directory where Enfugue looks for files, downloads checkpoints, and so on. These directories can also be changed in the UI as needed. This flag is not required, but is recommended if you intend to use many different AI models.
  6. Includes -p 45555:45555 to bind the local port 45555 to the container port 45555, ENFUGUE's HTTP listening port.
  7. Uses the image ghcr.io/painebenjamin/app.enfugue.ai:latest
  8. Issues the run command to the image's executable entrypoint.
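The whole invocation can be wrapped in a small launch script so the cache directory is created and passed consistently. This is a sketch; the CACHE_DIR default is our choice, not something Enfugue mandates:

```shell
#!/bin/sh
# Launch script sketch: CACHE_DIR defaults to ~/.cache/enfugue if unset.
CACHE_DIR="${CACHE_DIR:-$HOME/.cache/enfugue}"
mkdir -p "$CACHE_DIR"

exec docker run --rm --gpus all --runtime nvidia \
    -v "$CACHE_DIR:/home/enfugue/.cache" \
    -p 45555:45555 \
    ghcr.io/painebenjamin/app.enfugue.ai:latest run
```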

You'll then be able to access the UI at http://localhost:45555. See below for information on changing the port or domain.
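Once the container is up, you can confirm the UI is reachable from the host (this assumes curl is installed):

```shell
# Print the HTTP status code; expect 200 once the server has
# finished starting up.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:45555
```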

Networking and Configuration

See Configuration for Advanced Users on how to use configuration files. You will need to ensure any configuration file passed can be read by the Docker container - this guide uses environment variables for ease-of-use.

Use a combination of ENFUGUE_SERVER_SECURE, ENFUGUE_SERVER_DOMAIN, and ENFUGUE_SERVER_PORT to control how Enfugue assembles URLs and listens for requests.

  • Set ENFUGUE_SERVER_DOMAIN to your desired domain name or IP address.
  • Set ENFUGUE_SERVER_PORT to the desired port to listen on; it defaults to 45555 for HTTP.
  • Set ENFUGUE_SERVER_SECURE to 1/True/Yes to use HTTPS, or anything else for HTTP.
    • If you want to use HTTPS and aren't using app.enfugue.ai, you'll need to provide your own certificate with ENFUGUE_SERVER_CERT and key with ENFUGUE_SERVER_KEY. You can also provide your own chain with ENFUGUE_SERVER_CHAIN.
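These variables can be passed straight into the container with -e flags, for example (the domain and port shown here are placeholders):

```shell
# Serve over plain HTTP on port 8080 under a custom domain.
# Remember to publish the same port you configure.
docker run --rm --gpus all --runtime nvidia \
    -e ENFUGUE_SERVER_DOMAIN=enfugue.example.com \
    -e ENFUGUE_SERVER_PORT=8080 \
    -e ENFUGUE_SERVER_SECURE=0 \
    -p 8080:8080 \
    ghcr.io/painebenjamin/app.enfugue.ai:latest run
```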