Self-host Diffusion Models with BentoML

This repository contains a series of BentoML example projects demonstrating how to deploy models in the Stable Diffusion (SD) family, which specializes in generating and manipulating images and video clips from text prompts.

See here for a full list of BentoML example projects.

The following guide uses SDXL Turbo as an example.

Prerequisites

If you want to test the Service locally, we recommend an Nvidia GPU with at least 12 GB of VRAM.
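
You can check which GPUs are visible and how much memory they have with nvidia-smi:

nvidia-smi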

Install dependencies

git clone https://github.com/bentoml/BentoDiffusion.git
cd BentoDiffusion/sdxl-turbo

# Python 3.11 is recommended
pip install -r requirements.txt

Run the BentoML Service

We have defined a BentoML Service in service.py. Run bentoml serve in your project directory to start the Service.
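
For reference, here is a minimal sketch of what such a Service can look like. The decorator options, model ID, and defaults below are illustrative and may differ from the actual service.py in this repository.

import bentoml
import torch
from diffusers import AutoPipelineForText2Image
from PIL.Image import Image

MODEL_ID = "stabilityai/sdxl-turbo"  # assumed Hugging Face model ID

@bentoml.service(
    traffic={"timeout": 300},  # illustrative settings
    resources={"gpu": 1},
)
class SDXLTurboService:
    def __init__(self) -> None:
        # Load the pipeline once at startup and move it to the GPU.
        self.pipe = AutoPipelineForText2Image.from_pretrained(
            MODEL_ID, torch_dtype=torch.float16, variant="fp16"
        )
        self.pipe.to("cuda")

    @bentoml.api
    def txt2img(
        self,
        prompt: str,
        num_inference_steps: int = 1,
        guidance_scale: float = 0.0,
    ) -> Image:
        # SDXL Turbo is distilled for few-step inference, so a single
        # step with guidance disabled is the typical configuration.
        return self.pipe(
            prompt=prompt,
            num_inference_steps=num_inference_steps,
            guidance_scale=guidance_scale,
        ).images[0]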

$ bentoml serve .

2024-01-18T18:31:49+0800 [INFO] [cli] Starting production HTTP BentoServer from "service:SDXLTurboService" listening on http://localhost:3000 (Press CTRL+C to quit)
Loading pipeline components...: 100%

The server is now active at http://localhost:3000. You can interact with it using the Swagger UI or in other ways, such as the following.

cURL

curl -X 'POST' \
  'http://localhost:3000/txt2img' \
  -H 'accept: image/*' \
  -H 'Content-Type: application/json' \
  -d '{
  "prompt": "A cinematic shot of a baby raccoon wearing an intricate Italian priest robe.",
  "num_inference_steps": 1,
  "guidance_scale": 0
}' \
  -o output.png

Python client

import bentoml

with bentoml.SyncHTTPClient("http://localhost:3000") as client:
    result = client.txt2img(
        prompt="A cinematic shot of a baby raccoon wearing an intricate Italian priest robe.",
        num_inference_steps=1,
        guidance_scale=0.0,
    )
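
Depending on your BentoML version, result is typically a PIL image object, which you can save locally (for example, result.save("output.png")); if your client returns a file path instead, open that file directly.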

For detailed explanations of the Service code, see Stable Diffusion XL Turbo.

Deploy to BentoCloud

After the Service is ready, you can deploy the application to BentoCloud for better management and scalability. Sign up for a BentoCloud account if you don't already have one.
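
To log in from the command line, you can use the BentoML CLI (this assumes you have created an API token in the BentoCloud console):

bentoml cloud login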

Once you are logged in, run the following command to deploy the project.

bentoml deploy .

Once the application is up and running on BentoCloud, you can access it via the exposed URL.

Note: For custom deployment in your own infrastructure, use BentoML to generate an OCI-compliant image.
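
A typical flow, sketched below, is to build a Bento and then containerize it with Docker; the Bento tag here is illustrative, as bentoml build prints the actual tag to use.

bentoml build
bentoml containerize sdxl_turbo:latest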

Choose another diffusion model

To deploy a different diffusion model, go to its corresponding subdirectory in this repository and follow the same steps as above; see the example below.
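
For example, assuming the model you want lives in a subdirectory named sdxl-lightning (check the repository for the exact directory names):

cd BentoDiffusion/sdxl-lightning
pip install -r requirements.txt
bentoml serve .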