Merged

59 commits
0c7cb9a
Flax: Ignore PyTorch, ONNX files when they coexist with Flax weights …
pcuenca Oct 2, 2023
907fd91
Fixed constants.py not using hugging face hub environment variable (#…
Zanz2 Oct 2, 2023
bbe8d3a
Compile test fixes (#5235)
DN6 Oct 2, 2023
4f74a5e
[PEFT warnings] Only sure deprecation warnings in the future (#5240)
patrickvonplaten Oct 2, 2023
2a62aad
Add docstrings in forward methods of adapter model (#5253)
Nandika-A Oct 2, 2023
db91e71
make style
patrickvonplaten Oct 2, 2023
cd1b8d7
[WIP] Refactor UniDiffuser Pipeline and Tests (#4948)
dg845 Oct 2, 2023
d56825e
fix: how print training resume logs. (#5117)
sayakpaul Oct 2, 2023
37a787a
Add docstring for the AutoencoderKL's decode (#5242)
freespirit Oct 2, 2023
7a4324c
Add a docstring for the AutoencoderKL's encode (#5239)
freespirit Oct 2, 2023
c8b0f0e
Update UniPC to support 1D diffusion. (#5199)
leng-yue Oct 2, 2023
bdd1611
[Schedulers] Fix callback steps (#5261)
patrickvonplaten Oct 2, 2023
2457599
make fix copies
patrickvonplaten Oct 2, 2023
dfcce3c
[Research folder] Add SDXL example (#5275)
patrickvonplaten Oct 3, 2023
7271f8b
Fix UniPC scheduler for 1D (#5276)
patrickvonplaten Oct 3, 2023
dd5a362
New Pipeline Slow Test runners (#5131)
DN6 Oct 4, 2023
c7e0895
handle case when controlnet is list or tuple (#5179)
noskill Oct 4, 2023
25c177a
make style
patrickvonplaten Oct 4, 2023
e46ec5f
Zh doc (#4807)
WADreaming Oct 4, 2023
8425cd4
I added a new doc string to the class. This is more flexible to under…
hisushanta Oct 5, 2023
8a0e77d
Merge branch 'main' into doc_string
hisushanta Oct 5, 2023
84b82a6
✨ [Core] Add FreeU mechanism (#5164)
kadirnar Oct 5, 2023
d8d8b2a
pin torch version (#5297)
DN6 Oct 5, 2023
e6faf60
add: entry for DDPO support. (#5250)
sayakpaul Oct 5, 2023
cf3a816
Merge branch 'main' into doc_string
hisushanta Oct 5, 2023
02a8d66
Min-SNR Gamma: correct the fix for SNR weighted loss in v-prediction …
bghira Oct 5, 2023
0922210
Update bug-report.yml
patrickvonplaten Oct 6, 2023
6ce01bd
Bump tolerance on shape test (#5289)
DN6 Oct 6, 2023
872ae1d
Add from single file to StableDiffusionUpscalePipeline and StableDiff…
DN6 Oct 6, 2023
7eaae83
[LoRA] fix: torch.compile() for lora conv (#5298)
sayakpaul Oct 6, 2023
f0a2c63
[docs] Improved inpaint docs (#5210)
stevhliu Oct 6, 2023
0168667
Minor fixes (#5309)
TimothyAlexisVass Oct 6, 2023
dd25ef5
[Hacktoberfest]Fixing issues #5241 (#5255)
jgyfutub Oct 6, 2023
306dc6e
Update README.md (#5267)
ShubhamJagtap2000 Oct 6, 2023
a0cd96f
Merge branch 'main' into doc_string
hisushanta Oct 7, 2023
746a8e8
Update src/diffusers/models/unet_2d_blocks.py
hisushanta Oct 7, 2023
6e56886
Update src/diffusers/models/unet_2d_blocks.py
hisushanta Oct 7, 2023
627fd9f
Update unet_2d_blocks.py
hisushanta Oct 7, 2023
0513a8c
fix typo in train dreambooth lora description (#5332)
themez Oct 8, 2023
ae4f7f2
Update unet_2d_blocks.py
hisushanta Oct 8, 2023
f0bea43
Update unet_2d_blocks.py
hisushanta Oct 8, 2023
872a4a5
Merge branch 'main' into doc_string
hisushanta Oct 8, 2023
6bd55b5
Fix [core/GLIGEN]: TypeError when iterating over 0-d tensor with In-p…
chuzhdontcode Oct 9, 2023
cc2c4ae
fix inference in custom diffusion (#5329)
caopulan Oct 9, 2023
2ed7e05
Improve performance of fast test by reducing down blocks (#5290)
sepal Oct 9, 2023
c4d6620
make-fast-test-for-StableDiffusionControlNetPipeline-faster (#5292)
m0saan Oct 9, 2023
3546f6d
I run the black command to reformat style in the code
hisushanta Oct 9, 2023
12534f4
Merge branch 'main' into doc_string
hisushanta Oct 9, 2023
bd72927
Improve typehints and docs in `diffusers/models` (#5299)
a-r-r-o-w Oct 9, 2023
e2c0208
Add py.typed for PEP 561 compliance (#5326)
byarbrough Oct 9, 2023
8d314c9
[HacktoberFest] Add missing docstrings to diffusers/models (#5248)
a-r-r-o-w Oct 9, 2023
d199bc6
make style
patrickvonplaten Oct 9, 2023
35952e6
Fix links in docs to adapter code (#5323)
johnowhitaker Oct 9, 2023
a844065
replace references to deprecated KeyArray & PRNGKeyArray (#5324)
jakevdp Oct 9, 2023
ed2f956
Fix loading broken LoRAs that could give NaN (#5316)
patrickvonplaten Oct 9, 2023
4ac205e
[JAX] Replace uses of `jnp.array` in types with `jnp.ndarray`. (#4719)
hvaara Oct 9, 2023
d3e0750
Add missing dependency in requirements file (#5345)
juliensimon Oct 10, 2023
9c82b68
fix problem of 'accelerator.is_main_process' to run in mutiple GPUs (…
jiaqiw09 Oct 10, 2023
48afb4b
Merge branch 'main' into doc_string
hisushanta Oct 10, 2023
45 changes: 35 additions & 10 deletions .github/ISSUE_TEMPLATE/bug-report.yml
@@ -13,7 +13,7 @@ body:
*Give your issue a fitting title. Assume that someone with very limited knowledge of diffusers can understand your issue. Add links to the source code, documentation, other issues, pull requests, etc...*
- 2. If your issue is about something not working, **always** provide a reproducible code snippet. The reader should be able to reproduce your issue by **only copy-pasting your code snippet into a Python shell**.
*The community cannot solve your issue if it cannot reproduce it. If your bug is related to training, add your training script and make everything needed to train public. Otherwise, just add a simple Python code snippet.*
- 3. Add the **minimum amount of code / context that is needed to understand, reproduce your issue**.
- 3. Add the **minimum** amount of code / context that is needed to understand, reproduce your issue.
*Make the life of maintainers easy. `diffusers` is getting many issues every day. Make sure your issue is about one bug and one bug only. Make sure you add only the context, code needed to understand your issues - nothing more. Generally, every issue is a way of documenting this library, try to make it a good documentation entry.*
- 4. For issues related to community pipelines (i.e., the pipelines located in the `examples/community` folder), please tag the author of the pipeline in your issue thread as those pipelines are not maintained.
- type: markdown
@@ -61,21 +61,46 @@ body:
All issues are read by one of the core maintainers, so if you don't know who to tag, just leave this blank and
a core maintainer will ping the right person.

Please tag fewer than 3 people.

General library related questions: @patrickvonplaten and @sayakpaul
Please tag a maximum of 2 people.

Questions on DiffusionPipeline (Saving, Loading, From pretrained, ...):

Questions on pipelines:
- Stable Diffusion @yiyixuxu @DN6 @patrickvonplaten @sayakpaul
- Stable Diffusion XL @yiyixuxu @sayakpaul @DN6 @patrickvonplaten
- Kandinsky @yiyixuxu @patrickvonplaten
- ControlNet @sayakpaul @yiyixuxu @DN6 @patrickvonplaten
- T2I Adapter @sayakpaul @yiyixuxu @DN6 @patrickvonplaten
- IF @DN6 @patrickvonplaten
- Text-to-Video / Video-to-Video @DN6 @sayakpaul @patrickvonplaten
- Wuerstchen @DN6 @patrickvonplaten
- Other: @yiyixuxu @DN6

Questions on models:
- UNet @DN6 @yiyixuxu @sayakpaul @patrickvonplaten
- VAE @sayakpaul @DN6 @yiyixuxu @patrickvonplaten
- Transformers/Attention @DN6 @yiyixuxu @sayakpaul @patrickvonplaten

Questions on the training examples: @williamberman, @sayakpaul, @yiyixuxu
Questions on Schedulers: @yiyixuxu @patrickvonplaten

Questions on memory optimizations, LoRA, float16, etc.: @williamberman, @patrickvonplaten, and @sayakpaul
Questions on LoRA: @sayakpaul @patrickvonplaten

Questions on schedulers: @patrickvonplaten and @williamberman
Questions on Textual Inversion: @sayakpaul @patrickvonplaten

Questions on models and pipelines: @patrickvonplaten, @sayakpaul, and @williamberman (for community pipelines, please tag the original author of the pipeline)
Questions on Training:
- DreamBooth @sayakpaul @patrickvonplaten
- Text-to-Image Fine-tuning @sayakpaul @patrickvonplaten
- Textual Inversion @sayakpaul @patrickvonplaten
- ControlNet @sayakpaul @patrickvonplaten

Questions on Tests: @DN6 @sayakpaul @yiyixuxu

Questions on Documentation: @stevhliu

Questions on JAX- and MPS-related things: @pcuenca

Questions on audio pipelines: @patrickvonplaten, @kashif, and @sanchit-gandhi
Questions on audio pipelines: @DN6 @patrickvonplaten



Documentation: @stevhliu and @yiyixuxu
placeholder: "@Username ..."
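
The template's insistence on copy-pasteable reproductions deserves a concrete illustration. Below is a minimal sketch of the kind of snippet the issue form asks for; the model ID, prompt, and seed are hypothetical placeholders, not anything prescribed by this PR.

```python
# Hypothetical minimal reproduction: everything needed to hit the bug,
# nothing more. Assumes `diffusers` and `torch` are installed.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Fix the seed so maintainers see exactly what you see.
generator = torch.Generator("cuda").manual_seed(0)
image = pipe("a photo of an astronaut", generator=generator).images[0]
image.save("repro.png")
```

A snippet like this runs in a fresh Python shell with no external files, which is exactly the bar the template sets.
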
233 changes: 184 additions & 49 deletions .github/workflows/push_tests.yml
@@ -1,64 +1,127 @@
name: Slow tests on main
name: Slow Tests on main

on:
push:
branches:
- main


env:
DIFFUSERS_IS_CI: yes
HF_HOME: /mnt/cache
OMP_NUM_THREADS: 8
MKL_NUM_THREADS: 8
PYTEST_TIMEOUT: 600
RUN_SLOW: yes
PIPELINE_USAGE_CUTOFF: 50000

jobs:
run_slow_tests:
setup_torch_cuda_pipeline_matrix:
name: Setup Torch Pipelines CUDA Slow Tests Matrix
runs-on: docker-gpu
container:
image: diffusers/diffusers-pytorch-cpu # this is a CPU image, but we need it to fetch the matrix
options: --shm-size "16gb" --ipc host
outputs:
pipeline_test_matrix: ${{ steps.fetch_pipeline_matrix.outputs.pipeline_test_matrix }}
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate.git

- name: Environment
run: |
python utils/print_env.py

- name: Fetch Pipeline Matrix
id: fetch_pipeline_matrix
run: |
matrix=$(python utils/fetch_torch_cuda_pipeline_test_matrix.py)
echo $matrix
echo "pipeline_test_matrix=$matrix" >> $GITHUB_OUTPUT

- name: Pipeline Tests Artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: test-pipelines.json
path: reports
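
The setup job above delegates matrix construction to `utils/fetch_torch_cuda_pipeline_test_matrix.py`, whose internals are not part of this diff. What the workflow relies on is a narrow contract: the script prints a JSON array of pipeline module names to stdout, plausibly filtered by the `PIPELINE_USAGE_CUTOFF` download threshold set in the env block. A rough sketch of that contract, with the filtering logic and numbers invented for illustration:

```python
# Hypothetical sketch of the matrix script's output contract. The workflow
# only needs stdout to be one line of JSON it can hand to fromJson().
import json
import os
from typing import List

PIPELINE_USAGE_CUTOFF = int(os.getenv("PIPELINE_USAGE_CUTOFF", "50000"))


def fetch_pipeline_modules() -> List[str]:
    # The real script presumably derives this from Hub download stats;
    # hard-coded here purely for illustration.
    usage = {
        "stable_diffusion": 1_000_000,
        "controlnet": 250_000,
        "rarely_used_pipeline": 10,
    }
    return [name for name, count in usage.items() if count >= PIPELINE_USAGE_CUTOFF]


if __name__ == "__main__":
    # Compact JSON keeps `echo "pipeline_test_matrix=$matrix"` on a single
    # line of $GITHUB_OUTPUT, which is what the workflow step expects.
    print(json.dumps(fetch_pipeline_modules()))
```

Each entry then surfaces as `matrix.module` in the `torch_pipelines_cuda_tests` job below and is interpolated into the test path `tests/pipelines/${{ matrix.module }}`.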

torch_pipelines_cuda_tests:
name: Torch Pipelines CUDA Slow Tests
needs: setup_torch_cuda_pipeline_matrix
strategy:
fail-fast: false
max-parallel: 1
matrix:
config:
- name: Slow PyTorch CUDA tests on Ubuntu
framework: pytorch
runner: docker-gpu
image: diffusers/diffusers-pytorch-cuda
report: torch_cuda
- name: Slow Flax TPU tests on Ubuntu
framework: flax
runner: docker-tpu
image: diffusers/diffusers-flax-tpu
report: flax_tpu
- name: Slow ONNXRuntime CUDA tests on Ubuntu
framework: onnxruntime
runner: docker-gpu
image: diffusers/diffusers-onnxruntime-cuda
report: onnx_cuda

name: ${{ matrix.config.name }}

runs-on: ${{ matrix.config.runner }}

module: ${{ fromJson(needs.setup_torch_cuda_pipeline_matrix.outputs.pipeline_test_matrix) }}
runs-on: docker-gpu
container:
image: ${{ matrix.config.image }}
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ ${{ matrix.config.runner == 'docker-tpu' && '--privileged' || '--gpus 0'}}

image: diffusers/diffusers-pytorch-cuda
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: NVIDIA-SMI
run: |
nvidia-smi
- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate.git
- name: Environment
run: |
python utils/print_env.py
- name: Slow PyTorch CUDA checkpoint tests on Ubuntu
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
CUBLAS_WORKSPACE_CONFIG: :16:8
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_pipeline_${{ matrix.module }}_cuda \
tests/pipelines/${{ matrix.module }}
- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_pipeline_${{ matrix.module }}_cuda_stats.txt
cat reports/tests_pipeline_${{ matrix.module }}_cuda_failures_short.txt

- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: pipeline_${{ matrix.module }}_test_reports
path: reports
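
The `CUBLAS_WORKSPACE_CONFIG: :16:8` setting in the step above follows the linked PyTorch determinism notes: cuBLAS only behaves deterministically when pinned to a fixed workspace size, and PyTorch enforces this once deterministic algorithms are enabled. A minimal sketch of the behavior the tests lean on (illustrative, not code from this PR):

```python
# cuBLAS reads this variable at CUDA initialization, so it must be set
# before the first CUDA call; the workflow therefore sets it as a step env var.
import os
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8"

import torch

torch.use_deterministic_algorithms(True)
# Without the env var above, the first CUDA matmul in deterministic mode
# would raise a RuntimeError naming CUBLAS_WORKSPACE_CONFIG.
```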

torch_cuda_tests:
name: Torch CUDA Tests
runs-on: docker-gpu
container:
image: diffusers/diffusers-pytorch-cuda
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
defaults:
run:
shell: bash

strategy:
matrix:
module: [models, schedulers, lora, others]
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2

- name: NVIDIA-SMI
if : ${{ matrix.config.runner == 'docker-gpu' }}
run: |
nvidia-smi

- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
@@ -70,47 +133,121 @@ jobs:
python utils/print_env.py

- name: Run slow PyTorch CUDA tests
if: ${{ matrix.config.framework == 'pytorch' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
CUBLAS_WORKSPACE_CONFIG: :16:8

run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx and not compile" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_torch_cuda \
tests/${{ matrix.module }}

- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_torch_cuda_stats.txt
cat reports/tests_torch_cuda_failures_short.txt

- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: torch_cuda_test_reports
path: reports

flax_tpu_tests:
name: Flax TPU Tests
runs-on: docker-tpu
container:
image: diffusers/diffusers-flax-tpu
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --privileged
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2

- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate.git

- name: Environment
run: |
python utils/print_env.py

- name: Run slow Flax TPU tests
if: ${{ matrix.config.framework == 'flax' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 0 \
-s -v -k "Flax" \
--make-reports=tests_${{ matrix.config.report }} \
--make-reports=tests_flax_tpu \
tests/

- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_flax_tpu_stats.txt
cat reports/tests_flax_tpu_failures_short.txt

- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: flax_tpu_test_reports
path: reports

onnx_cuda_tests:
name: ONNX CUDA Tests
runs-on: docker-gpu
container:
image: diffusers/diffusers-onnxruntime-cuda
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2

- name: Install dependencies
run: |
apt-get update && apt-get install libsndfile1-dev libgl1 -y
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate.git

- name: Environment
run: |
python utils/print_env.py

- name: Run slow ONNXRuntime CUDA tests
if: ${{ matrix.config.framework == 'onnxruntime' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
--make-reports=tests_onnx_cuda \
tests/

- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt
run: |
cat reports/tests_onnx_cuda_stats.txt
cat reports/tests_onnx_cuda_failures_short.txt

- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: ${{ matrix.config.report }}_test_reports
name: onnx_cuda_test_reports
path: reports

run_torch_compile_tests:
@@ -131,21 +268,17 @@ jobs:
- name: NVIDIA-SMI
run: |
nvidia-smi

- name: Install dependencies
run: |
python -m pip install -e .[quality,test,training]

- name: Environment
run: |
python utils/print_env.py

- name: Run example tests on GPU
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "compile" --make-reports=tests_torch_compile_cuda tests/

- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_torch_compile_cuda_failures_short.txt
@@ -192,11 +325,13 @@ jobs:

- name: Failure short reports
if: ${{ failure() }}
run: cat reports/examples_torch_cuda_failures_short.txt
run: |
cat reports/examples_torch_cuda_stats.txt
cat reports/examples_torch_cuda_failures_short.txt

- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: examples_test_reports
path: reports
path: reports