
docs: update diffusers multi-GPU documentation to mention tensor_parallel_size configuration #8621

Merged
mudler merged 2 commits into mudler:master from
localai-bot:feature/update-docs-diffusers-multi-gpu
Feb 22, 2026

Conversation

@localai-bot
Contributor

This PR updates the documentation for diffusers multi-GPU support to clearly explain the tensor_parallel_size configuration parameter.

Changes

backend/python/diffusers/README.md

  • Added a new section explaining multi-GPU configuration with tensor_parallel_size
  • Documented the YAML configuration format and gRPC proto integration
  • Included notes about GPU memory requirements and optimal setup

docs/content/features/GPU-acceleration.md

  • Reorganized the multi-GPU section to be more prominent
  • Fixed YAML formatting for multi-GPU configuration
  • Clarified that tensor_parallel_size is set in the gRPC proto configuration (field 55 in ModelOptions)
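
The proto change referenced above can be pictured as a single field in the backend's model options message. The sketch below is hypothetical: only the message name (ModelOptions) and the field number (55) come from this PR's description; the field name and type are assumptions.

```proto
syntax = "proto3";

message ModelOptions {
  // ... existing fields 1-54 elided ...

  // Hypothetical field name/type; the PR only states that
  // tensor_parallel_size occupies field 55 in ModelOptions.
  int32 TensorParallelSize = 55;  // number of GPUs to shard the model across
}
```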

@netlify

netlify bot commented Feb 22, 2026

Deploy Preview for localai ready!

🔨 Latest commit: 1d6f18e
🔍 Latest deploy log: https://app.netlify.com/projects/localai/deploys/699b39d00b16b200080b3a9d
😎 Deploy Preview: https://deploy-preview-8621--localai.netlify.app

@localai-bot localai-bot marked this pull request as ready for review February 22, 2026 13:19
@github-actions github-actions bot added the enhancement New feature or request label Feb 22, 2026
@github-actions github-actions bot enabled auto-merge (squash) February 22, 2026 13:20
github-actions[bot]
github-actions bot previously approved these changes Feb 22, 2026
auto-merge was automatically disabled February 22, 2026 13:23

Head branch was pushed to by a user without write access

@localai-bot localai-bot force-pushed the feature/update-docs-diffusers-multi-gpu branch from 3685bc4 to e24d4bf February 22, 2026 13:23
@github-actions github-actions bot enabled auto-merge (squash) February 22, 2026 13:23
github-actions[bot]
github-actions bot previously approved these changes Feb 22, 2026
auto-merge was automatically disabled February 22, 2026 13:31

Head branch was pushed to by a user without write access

@localai-bot localai-bot force-pushed the feature/update-docs-diffusers-multi-gpu branch from e24d4bf to 6bb1064 February 22, 2026 13:31
github-actions[bot]
github-actions bot previously approved these changes Feb 22, 2026
@github-actions github-actions bot enabled auto-merge (squash) February 22, 2026 13:31
auto-merge was automatically disabled February 22, 2026 13:40

Head branch was pushed to by a user without write access

@localai-bot localai-bot force-pushed the feature/update-docs-diffusers-multi-gpu branch from 6bb1064 to e47f9d8 February 22, 2026 13:40
@github-actions github-actions bot enabled auto-merge (squash) February 22, 2026 13:41
@localai-bot
Contributor Author

I've updated the documentation to clearly explain the multi-GPU configuration for diffusers. The changes include:

Summary of Changes

1. backend/python/diffusers/README.md

Added a new section explaining multi-GPU configuration with tensor_parallel_size:

  • YAML configuration example
  • gRPC proto integration details (field 55 in ModelOptions)
  • Notes about GPU memory requirements and optimal setup

2. docs/content/features/GPU-acceleration.md

Reorganized the multi-GPU section to be more prominent:

  • Fixed YAML formatting for multi-GPU configuration
  • Clarified that tensor_parallel_size is set in the gRPC proto configuration
  • Added detailed notes about GPU memory requirements and optimal setup

How to Use Multi-GPU with Diffusers

To use diffusers with multiple GPUs, configure your model with tensor_parallel_size set to the number of GPUs you want to use:
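
A minimal sketch of such a model config is shown below. Only the tensor_parallel_size key is taken from this PR; the model name, file name, and other values are illustrative placeholders.

```yaml
# Hypothetical LocalAI model definition; only tensor_parallel_size
# comes from the PR description -- other fields are illustrative.
name: sdxl
backend: diffusers
parameters:
  model: stabilityai/stable-diffusion-xl-base-1.0
# Shard the model across 2 GPUs via tensor parallelism
tensor_parallel_size: 2
```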

When tensor_parallel_size is greater than 1, the backend automatically enables tensor parallelism to distribute the model across multiple GPUs. This works with all diffusers pipelines (text-to-image, image-to-image, inpainting, etc.).

For more details, see the updated backend/python/diffusers/README.md and docs/content/features/GPU-acceleration.md.

auto-merge was automatically disabled February 22, 2026 17:15

Head branch was pushed to by a user without write access

@github-actions github-actions bot enabled auto-merge (squash) February 22, 2026 17:16
@mudler mudler disabled auto-merge February 22, 2026 17:17
@mudler mudler merged commit 559ab99 into mudler:master Feb 22, 2026
37 of 38 checks passed
@mudler mudler added kind/documentation Improvements or additions to documentation and removed enhancement New feature or request labels Feb 22, 2026