Changes from all commits (281 commits)
2d35f67
fix a small typo in pipeline_ddpm.py (#948)
chenguolin Oct 24, 2022
2c82e0c
Reorganize pipeline tests (#963)
anton-l Oct 24, 2022
8aac1f9
v1-5 docs updates (#921)
apolinario Oct 24, 2022
2fb8faf
add community pipeline docs; add minimal text to some empty doc pages…
Oct 24, 2022
8204415
Fix typo: `torch_type` -> `torch_dtype` (#972)
pcuenca Oct 25, 2022
6e099e2
add num_inference_steps arg to DDPM (#935)
tmabraham Oct 25, 2022
38ae5a2
Add Composable diffusion to community pipeline examples (#951)
MarkRich Oct 25, 2022
240abdd
[Flax] added broadcast_to_shape_from_left helper and Scheduler tests …
kashif Oct 25, 2022
28b134e
[Tests] Fix `mps` reproducibility issue when running with pytest-xdis…
anton-l Oct 25, 2022
3d02c92
mps changes for PyTorch 1.13 (#926)
pcuenca Oct 25, 2022
0b42b07
[Onnx] support half-precision and fix bugs for onnx pipelines (#932)
SkyTNT Oct 25, 2022
88fa6b7
[Dance Diffusion] Add dance diffusion (#803)
patrickvonplaten Oct 25, 2022
365ff8f
[Dance Diffusion] FP16 (#980)
patrickvonplaten Oct 25, 2022
59f0ce8
[Dance Diffusion] Better naming (#981)
patrickvonplaten Oct 25, 2022
e2243de
Fix typo in documentation title (#975)
echarlaix Oct 25, 2022
4b9f589
Add --pretrained_model_name_revision option to train_dreambooth.py (#…
shirayu Oct 25, 2022
0343d8f
Do not use torch.float64 on the mps device (#942)
pcuenca Oct 26, 2022
d9cfe32
CompVis -> diffusers script - allow converting from merged checkpoint…
patrickvonplaten Oct 26, 2022
d7d6841
fix a bug in the new version (#957)
xiaohu2015 Oct 26, 2022
cc43608
Fix typos (#978)
shirayu Oct 26, 2022
2f0fcf4
Add missing import (#979)
juliensimon Oct 26, 2022
b2e2d14
minimal stable diffusion GPU memory usage with accelerate hooks (#850)
piEsposito Oct 26, 2022
bd06dd0
[inpaint pipeline] fix bug for multiple prompts inputs (#959)
xiaohu2015 Oct 26, 2022
8332c1a
Enable multi-process DataLoader for dreambooth (#950)
skirsten Oct 26, 2022
d3d22ce
Small modification to enable usage by external scripts (#956)
briancw Oct 26, 2022
a23ad87
[Flax] Add Textual Inversion (#880)
duongna21 Oct 26, 2022
1d04e1b
Continuation of #942: additional float64 failure (#996)
pcuenca Oct 27, 2022
e92a603
fix dreambooth script. (#1017)
patil-suraj Oct 27, 2022
3be9fa9
[Accelerate model loading] Fix meta device and super low memory usage…
patrickvonplaten Oct 27, 2022
abe0582
[Flax] Add finetune Stable Diffusion (#999)
duongna21 Oct 27, 2022
4623f09
[DreamBooth] Set train mode for text encoder (#1012)
duongna21 Oct 27, 2022
90f91ad
[Flax] Add DreamBooth (#1001)
duongna21 Oct 27, 2022
fbcc383
Deprecate `init_git_repo`, refactor `train_unconditional.py` (#1022)
anton-l Oct 27, 2022
52f2128
update readme for flax examples (#1026)
patil-suraj Oct 27, 2022
eceeebd
Update train_dreambooth.py
patil-suraj Oct 27, 2022
939ec17
Probably nicer to specify dependency on tensorboard in the training e…
lukovnikov Oct 27, 2022
a6314a8
Add `--dataloader_num_workers` to the DDPM training example (#1027)
anton-l Oct 27, 2022
de00c63
Document sequential CPU offload method on Stable Diffusion pipeline (…
piEsposito Oct 27, 2022
fb38bb1
Support grayscale images in `numpy_to_pil` (#1025)
anton-l Oct 27, 2022
1e07b6b
[Flax SD finetune] Fix dtype (#1038)
duongna21 Oct 28, 2022
ab079f2
fix `F.interpolate()` for large batch sizes (#1006)
NouamaneTazi Oct 28, 2022
a80480f
[Tests] Improve unet / vae tests (#1018)
patrickvonplaten Oct 28, 2022
d2d9764
[Tests] Speed up slow tests (#1040)
patrickvonplaten Oct 28, 2022
8d6487f
Fix some failing tests (#1041)
patrickvonplaten Oct 28, 2022
c4ef1ef
[Tests] Better prints (#1043)
patrickvonplaten Oct 28, 2022
d37f08d
[Tests] no random latents anymore (#1045)
patrickvonplaten Oct 28, 2022
cbbb293
hot fix
patrickvonplaten Oct 28, 2022
ea01a4c
fix
patrickvonplaten Oct 28, 2022
a7ae808
increase tolerance
patrickvonplaten Oct 28, 2022
81b6fbf
higher precision for vae
patrickvonplaten Oct 28, 2022
6b185b6
Update training and fine-tuning docs (#1020)
pcuenca Oct 28, 2022
fc0ca47
Fix speedup ratio in fp16.mdx (#837)
mwbyeon Oct 29, 2022
12fd073
clean incomplete pages (#1008)
Oct 29, 2022
1fc2088
Add seed resizing to community pipelines (#1011)
MarkRich Oct 29, 2022
a59f999
Tests: upgrade PyTorch cuda to 11.7 to fix examples tests. (#1048)
pcuenca Oct 29, 2022
95414bd
Experimental: allow fp16 in `mps` (#961)
pcuenca Oct 29, 2022
8e4fd68
Move safety detection to model call in Flax safety checker (#1023)
jonatanklosko Oct 30, 2022
707b868
fix slow test
patrickvonplaten Oct 31, 2022
82d56cf
Merge branch 'main' of https://github.com/huggingface/diffusers into …
patrickvonplaten Oct 31, 2022
1606eb9
Fix pipelines user_agent, ignore CI requests (#1058)
anton-l Oct 31, 2022
e4d264e
[GitBot] Automatically close issues after inactivitiy (#1079)
patrickvonplaten Oct 31, 2022
bf7b0bc
Allow `safety_checker` to be `None` when using CPU offload (#1078)
pcuenca Oct 31, 2022
a1ea8c0
k-diffusion-euler (#1019)
hlky Oct 31, 2022
c18941b
[Better scheduler docs] Improve usage examples of schedulers (#890)
patrickvonplaten Oct 31, 2022
010bc4e
incorrect model id
patrickvonplaten Oct 31, 2022
17c2c06
[Tests] Fix slow tests (#1087)
patrickvonplaten Oct 31, 2022
888468d
Remove nn sequential (#1086)
patrickvonplaten Oct 31, 2022
7fb4b88
Remove some unused parameter in CrossAttnUpBlock2D (#1034)
LaurentMazare Oct 31, 2022
a793b1f
Add imagic to community pipelines (#958)
MarkRich Nov 1, 2022
98c4213
Up to 2x speedup on GPUs using memory efficient attention (#532)
MatthieuToulemont Nov 2, 2022
8608795
[docs] add euler scheduler in docs, how to use differnet schedulers …
patil-suraj Nov 2, 2022
8ee2191
Integration tests precision improvement for inpainting (#1052)
Lewington-pitsos Nov 2, 2022
bdbcaa9
lpw_stable_diffusion: Add is_cancelled_callback (#1053)
irgolic Nov 2, 2022
d53ffbb
Rename latent (#1102)
patrickvonplaten Nov 2, 2022
0025626
fix typo in examples dreambooth README.md (#1073)
jorahn Nov 2, 2022
b1ec61e
fix model card url in text inversion readme. (#1103)
patil-suraj Nov 2, 2022
4e59bcc
[CI] Framework and hardware-specific CI tests (#997)
anton-l Nov 2, 2022
1216a3b
Fix a small typo of a variable name (#1063)
omihub777 Nov 2, 2022
5cd29d6
Fix tests for equivalence of DDIM and DDPM pipelines (#1069)
sgrigory Nov 2, 2022
33c4874
Fix padding in dreambooth (#1030)
shirayu Nov 2, 2022
0b61cea
[Flax] time embedding (#1081)
kashif Nov 2, 2022
cbcd051
Training to predict x0 in training example (#1031)
lukovnikov Nov 2, 2022
c39a511
[Loading] Ignore unneeded files (#1107)
patrickvonplaten Nov 2, 2022
0edf9ca
Fix hub-dependent tests for PRs (#1119)
anton-l Nov 3, 2022
4a38166
Allow saving `None` pipeline components (#1118)
anton-l Nov 3, 2022
d38c804
feat: add repaint (#974)
Revist Nov 3, 2022
269109d
Continuation of #1035 (#1120)
pcuenca Nov 3, 2022
ef2ea33
VQ-diffusion (#658)
williamberman Nov 3, 2022
7482178
default fast model loading 🔥 (#1115)
patil-suraj Nov 3, 2022
988c822
fix copies
patrickvonplaten Nov 3, 2022
42bb459
[Low cpu memory] Correct naming and improve default usage (#1122)
patrickvonplaten Nov 3, 2022
7b030a7
handle device for randn in euler step (#1124)
patil-suraj Nov 3, 2022
118c5be
Docs: Do not require PyTorch nightlies (#1123)
pcuenca Nov 3, 2022
1578679
Release: v0.7.0
anton-l Nov 3, 2022
33108bf
Correct VQDiffusion Pipeline import
patrickvonplaten Nov 3, 2022
9eb389f
Merge branch 'main' of https://github.com/huggingface/diffusers into …
patrickvonplaten Nov 3, 2022
a24862c
Correct VQDiffusion Pipeline import
patrickvonplaten Nov 3, 2022
bde4880
make style
patrickvonplaten Nov 3, 2022
c62b3a2
[Flax] Fix sample batch size DreamBooth (#1129)
duongna21 Nov 4, 2022
1d0f3c2
Move accelerate to a soft-dependency (#1134)
patrickvonplaten Nov 4, 2022
af7b1c3
fix 404 link in example/README.mb (#1136)
webbigdata-jp Nov 4, 2022
2c10869
Test precision increases (#1113)
Lewington-pitsos Nov 4, 2022
5b20d3b
fix the parameter naming in `self.downsamplers` (#1108)
chenguolin Nov 4, 2022
a480229
[Community Pipeline] lpw_stable_diffusion: add xformers_memory_effici…
SkyTNT Nov 4, 2022
2fcae69
Bump to 0.8.0.dev0 (#1131)
anton-l Nov 4, 2022
1172c96
add enable sequential cpu offloading to other stable diffusion pipeli…
piEsposito Nov 4, 2022
9d8943b
Add CycleDiffusion pipeline using Stable Diffusion (#888)
ChenWu98 Nov 4, 2022
08a6dc8
Flax: Flip sin to cos in time embeddings (#1149)
pcuenca Nov 5, 2022
b4a1ed8
Add multistep DPM-Solver discrete scheduler (#1132)
LuChengTHU Nov 6, 2022
e86a280
Remove warning about half precision on MPS (#1163)
pcuenca Nov 7, 2022
cd502b2
Fix typo latens -> latents (#1171)
duongna21 Nov 7, 2022
0dd8c6b
Fix community pipeline links (#1162)
pcuenca Nov 7, 2022
b500df1
[Docs] Add loading script (#1174)
patrickvonplaten Nov 7, 2022
de75362
fix image docs
patrickvonplaten Nov 7, 2022
72eae64
Fix dtype safety checker inpaint legacy (#1137)
patrickvonplaten Nov 7, 2022
bcdb3d5
Community pipeline img2img inpainting (#1114)
vvvm23 Nov 7, 2022
0173323
[Community Pipeline] Add multilingual stable diffusion to community p…
juancopi81 Nov 7, 2022
ac4c695
[Flax examples] Load text encoder from subfolder (#1147)
duongna21 Nov 7, 2022
fa6e520
Link to Dreambooth blog post instead of W&B report (#1180)
pcuenca Nov 7, 2022
c3dcb67
Update config.yml
patrickvonplaten Nov 8, 2022
20a05d6
Fix small typo (#1178)
pcuenca Nov 8, 2022
5a8b356
[DDIMScheduler] fix noise device in ddim step (#1189)
patil-suraj Nov 8, 2022
813744e
MPS schedulers: don't use float64 (#1169)
pcuenca Nov 8, 2022
555203e
Warning for invalid options without "--with_prior_preservation" (#1065)
shirayu Nov 8, 2022
11f7d6f
[ONNX] Improve ONNXPipeline scheduler compatibility, fix safety_check…
anton-l Nov 8, 2022
614c182
Restore compatibility with deprecated `StableDiffusionOnnxPipeline` (…
pcuenca Nov 8, 2022
32b0736
Update pr docs actions (#1194)
Nov 8, 2022
5786b0e
handle dtype xformers attention (#1196)
patil-suraj Nov 8, 2022
249d9bc
[Scheduler] Move predict epsilon to init (#1155)
patrickvonplaten Nov 8, 2022
598ff76
add licenses to pipelines (#1201)
Nov 9, 2022
24895a1
Fix cpu offloading (#1177)
anton-l Nov 9, 2022
6cf72a9
Fix slow tests (#1210)
patrickvonplaten Nov 9, 2022
663f0c1
[Flax] fix extra copy pasta 🍝 (#1187)
camenduru Nov 9, 2022
cd77a03
[CLIPGuidedStableDiffusion] support DDIM scheduler (#1190)
patil-suraj Nov 9, 2022
3f7edc5
Fix layer names convert LDM script (#1206)
duongna21 Nov 9, 2022
b93fe08
[Loading] Make sure loading edge cases work (#1192)
patrickvonplaten Nov 9, 2022
5a59f9b
Add LDM Super Resolution pipeline (#1116)
duongna21 Nov 9, 2022
0248541
[Conversion] Improve conversion script (#1218)
patrickvonplaten Nov 9, 2022
6c0335c
DDIM docs (#1219)
patrickvonplaten Nov 9, 2022
4969f46
apply `repeat_interleave` fix for `mps` to stable diffusion image2ima…
jncasey Nov 9, 2022
af27943
Flax tests: don't hardcode number of devices (#1175)
pcuenca Nov 9, 2022
13f388e
Improve documentation for the LPW pipeline (#1182)
exo-pla-net Nov 9, 2022
3d98dc7
Factor out encode text with Copied from (#1224)
patrickvonplaten Nov 9, 2022
7d0c272
Match the generator device to the pipeline for DDPM and DDIM (#1222)
anton-l Nov 9, 2022
187de44
Fix device on save/load tests
patrickvonplaten Nov 9, 2022
0feb21a
[Tests] Fix mps+generator fast tests (#1230)
anton-l Nov 9, 2022
2e980ac
[Tests] Adjust TPU test values (#1233)
anton-l Nov 9, 2022
a09d475
Add a reference to the name 'Sampler' (#1172)
apolinario Nov 10, 2022
045157a
Fix Flax usage comments (#1211)
pcuenca Nov 10, 2022
8171566
[Docs] improve img2img example (#1193)
ruanrz Nov 11, 2022
4c660d1
[Stable Diffusion] Fix padding / truncation (#1226)
patrickvonplaten Nov 13, 2022
b3c5e08
Finalize stable diffusion refactor (#1269)
patrickvonplaten Nov 13, 2022
33d7e89
Edited attention.py for older xformers (#1270)
Lime-Cakes Nov 14, 2022
c9b3463
Fix wrong link in text2img fine-tuning documentation (#1282)
daspartho Nov 14, 2022
ec7c8d3
add conversion script for vae
patrickvonplaten Nov 14, 2022
e4ffadc
Merge branch 'main' of https://github.com/huggingface/diffusers into …
patrickvonplaten Nov 14, 2022
a8d0977
[StableDiffusionInpaintPipeline] fix batch_size for mask and masked l…
patil-suraj Nov 14, 2022
d5ab55e
Merge branch 'main' of https://github.com/huggingface/diffusers into …
patrickvonplaten Nov 14, 2022
7c5fef8
Add UNet 1d for RL model for planning + colab (#105)
Nov 14, 2022
57525bb
Fix documentation typo for `UNet2DModel` and `UNet2DConditionModel` (…
xenova Nov 14, 2022
07f9e56
add source link to composable diffusion model (#1293)
nanliu1 Nov 15, 2022
610e2a6
Fix incorrect link to Stable Diffusion notebook (#1291)
dhruvrnaik Nov 15, 2022
db1cb0b
[dreambooth] link to bitsandbytes readme for installation (#1229)
0xdevalias Nov 15, 2022
a052019
Add Scheduler.from_pretrained and better scheduler changing (#1286)
patrickvonplaten Nov 15, 2022
554b374
Merge branch 'main' of https://github.com/huggingface/diffusers into …
patrickvonplaten Nov 15, 2022
4625f04
remove bogus files
patrickvonplaten Nov 15, 2022
8a73064
Add AltDiffusion (#1299)
patrickvonplaten Nov 15, 2022
af9ee87
Better error message for transformers dummy (#1306)
patrickvonplaten Nov 16, 2022
327ddc8
Revert "Update pr docs actions" (#1307)
Nov 16, 2022
46893ad
[AltDiffusion] add tests (#1311)
patil-suraj Nov 16, 2022
65d136e
Add improved handling of pil (#1309)
patrickvonplaten Nov 16, 2022
09d0546
cpu offloading: mutli GPU support (#1143)
dblunk88 Nov 16, 2022
f1fcfde
vq diffusion classifier free sampling (#1294)
williamberman Nov 16, 2022
aa5c4c2
doc string args shape fix (#1243)
kamalkraj Nov 16, 2022
afdd7bb
[Community Pipeline] CLIPSeg + StableDiffusionInpainting (#1250)
unography Nov 16, 2022
1138d63
Temporary local test for PIL_INTERPOLATION (#1317)
pcuenca Nov 16, 2022
245e9cc
fix make style
patrickvonplaten Nov 17, 2022
b3911f8
make fix copies
patrickvonplaten Nov 17, 2022
61719bf
Fix gpu_id (#1326)
anton-l Nov 17, 2022
3346ec3
integrate ort (#1110)
prathikr Nov 17, 2022
2dd12e3
make fix copies again
patrickvonplaten Nov 17, 2022
3fb28c4
xMerge branch 'main' of https://github.com/huggingface/diffusers
patrickvonplaten Nov 17, 2022
632dace
[Custom pipeline] Easier loading of local pipelines (#1327)
patrickvonplaten Nov 17, 2022
3b48620
Merge branch 'main' of https://github.com/huggingface/diffusers
patrickvonplaten Nov 17, 2022
e05ca84
[ONNX] Support Euler schedulers (#1328)
anton-l Nov 17, 2022
b21a463
rg Merge branch 'main' of https://github.com/huggingface/diffusers
patrickvonplaten Nov 17, 2022
63b3419
Fix typo
patrickvonplaten Nov 17, 2022
b9b7039
img2text Typo (#1329)
patrickvonplaten Nov 17, 2022
0cfbb51
add docs for multi-modal examples (#1227)
Nov 17, 2022
5dcef13
[Flax] Fix loading scheduler from subfolder (#1319)
skirsten Nov 18, 2022
fcfdd95
Fix/Enable all schedulers for in-painting (#1331)
patrickvonplaten Nov 18, 2022
195e437
Correct path to schedlure (#1322)
patrickvonplaten Nov 18, 2022
81fa2d6
Avoid nested fix-copies (#1332)
anton-l Nov 18, 2022
aa2ce41
Fix img2img speed with LMS-Discrete Scheduler (#896)
NotNANtoN Nov 18, 2022
7240318
Fix the order of casts for onnx inpainting (#1338)
anton-l Nov 18, 2022
3022090
Legacy Inpainting Pipeline for Onnx Models (#1237)
ctsims Nov 18, 2022
7bbbfbf
Jax infer support negative prompt (#1337)
entrpn Nov 19, 2022
44efcbd
Update README.md: IMAGIC example code snippet misspelling (#1346)
ki-arie Nov 20, 2022
eb2425b
Update README.md: Minor change to Imagic code snippet, missing dir er…
ki-arie Nov 20, 2022
3bec90f
Handle batches and Tensors in `pipeline_stable_diffusion_inpaint.py:p…
vict0rsch Nov 20, 2022
2b31740
Merge branch 'main' of https://github.com/huggingface/diffusers
patrickvonplaten Nov 20, 2022
ab1f01e
make style
patrickvonplaten Nov 20, 2022
94b27fb
change the sample model (#1352)
shunxing1234 Nov 21, 2022
78a6eed
Add bit diffusion [WIP] (#971)
kingstut Nov 21, 2022
ad93593
perf: prefer batched matmuls for attention (#1203)
Birch-san Nov 21, 2022
182eb95
[Community Pipelines] K-Diffusion Pipeline (#1360)
patrickvonplaten Nov 21, 2022
e50c25d
Add Safe Stable Diffusion Pipeline (#1244)
manuelbrack Nov 22, 2022
8b84f85
[examples] fix mixed_precision arg (#1359)
patil-suraj Nov 22, 2022
2d6d4ed
use memory_efficient_attention by default (#1354)
patil-suraj Nov 22, 2022
44e56de
Replace logger.warn by logger.warning (#1366)
regisss Nov 22, 2022
8fd3a74
Fix using non-square images with UNet2DModel and DDIM/DDPM pipelines …
jenkspt Nov 23, 2022
9e234d8
handle fp16 in `UNet2DModel` (#1216)
patil-suraj Nov 23, 2022
0eb507f
StableDiffusionImageVariationPipeline (#1365)
patil-suraj Nov 23, 2022
2625fb5
[Versatile Diffusion] Add versatile diffusion model (#1283)
patrickvonplaten Nov 23, 2022
16a32c9
Release: v0.8.0
anton-l Nov 23, 2022
f07a16e
update unet2d (#1376)
patil-suraj Nov 23, 2022
1524122
[Transformer2DModel] don't norm twice (#1381)
patil-suraj Nov 23, 2022
35d8186
[Bad dependencies] Fix imports (#1382)
patrickvonplaten Nov 23, 2022
9479052
fix trailing . dep object
patrickvonplaten Nov 23, 2022
9f47638
trailing . fix
patrickvonplaten Nov 23, 2022
30f6f44
add v prediction (#1386)
patil-suraj Nov 24, 2022
cecdd8b
Adapt UNet2D for supre-resolution (#1385)
patil-suraj Nov 24, 2022
81d8f4a
Version 0.9.0.dev0 (#1394)
anton-l Nov 24, 2022
e0e86b7
Make height and width optional (#1401)
patrickvonplaten Nov 24, 2022
cbfed0c
[Config] Add optional arguments (#1395)
patrickvonplaten Nov 24, 2022
05a36d5
Upscaling fixed (#1402)
patrickvonplaten Nov 24, 2022
bb2c64a
Add the new SD2 attention params to the VD text unet (#1400)
anton-l Nov 24, 2022
8e2c4cd
Deprecate sample size (#1406)
patrickvonplaten Nov 24, 2022
d50e321
Support SD2 attention slicing (#1397)
anton-l Nov 24, 2022
5c10e68
Add SD2 inpainting integration tests (#1412)
anton-l Nov 25, 2022
9f10c54
Fix sample size conversion script (#1408)
patrickvonplaten Nov 25, 2022
f26cde3
fix clip guided (#1414)
patrickvonplaten Nov 25, 2022
2902109
Fix all stable diffusion (#1415)
patrickvonplaten Nov 25, 2022
2c6bc0f
small fix
patrickvonplaten Nov 25, 2022
35099b2
[Versatile Diffusion] Fix remaining tests (#1418)
patrickvonplaten Nov 25, 2022
babfb8a
[MPS] call contiguous after permute (#1411)
kashif Nov 25, 2022
d52388f
Deprecate `predict_epsilon` (#1393)
pcuenca Nov 25, 2022
86aa747
Fix ONNX conversion and inference (#1416)
anton-l Nov 25, 2022
8faa822
Allow to set config params directly in init (#1419)
patrickvonplaten Nov 25, 2022
02aa4ef
Add tests for Stable Diffusion 2 V-prediction 768x768 (#1420)
anton-l Nov 25, 2022
9ec5084
StableDiffusionUpscalePipeline (#1396)
patil-suraj Nov 25, 2022
520bb08
fixes tests
patrickvonplaten Nov 25, 2022
7684518
Merge branch 'main' of https://github.com/huggingface/diffusers into …
patrickvonplaten Nov 25, 2022
b9e921f
added initial v-pred support to DPM-solver (#1421)
kashif Nov 25, 2022
6883294
SD2 docs (#1424)
patrickvonplaten Nov 25, 2022
462a79d
[Docs] fixed some typos (#1425)
kashif Nov 25, 2022
6b02323
Release: v0.9.0
anton-l Nov 25, 2022
6bb7749
Merge tag 'v0.9.0' into sync/hf_diffusers/0.9
xzyaoi Dec 1, 2022
b73e8b5
sync
xzyaoi Dec 1, 2022
7f12ed8
minor
xzyaoi Dec 1, 2022
5 changes: 1 addition & 4 deletions .github/ISSUE_TEMPLATE/config.yml
@@ -1,7 +1,4 @@
contact_links:
- name: Forum
url: https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63
about: General usage questions and community discussions
- name: Blank issue
url: https://github.com/huggingface/diffusers/issues/new
about: Please note that the Forum is in most places the right place for discussions
about: General usage questions and community discussions
50 changes: 50 additions & 0 deletions .github/workflows/build_docker_images.yml
@@ -0,0 +1,50 @@
name: Build Docker images (nightly)

on:
  workflow_dispatch:
  schedule:
    - cron: "0 0 * * *" # every day at midnight

concurrency:
  group: docker-image-builds
  cancel-in-progress: false

env:
  REGISTRY: diffusers

jobs:
  build-docker-images:
    runs-on: ubuntu-latest

    permissions:
      contents: read
      packages: write

    strategy:
      fail-fast: false
      matrix:
        image-name:
          - diffusers-pytorch-cpu
          - diffusers-pytorch-cuda
          - diffusers-flax-cpu
          - diffusers-flax-tpu
          - diffusers-onnxruntime-cpu
          - diffusers-onnxruntime-cuda

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ env.REGISTRY }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v3
        with:
          no-cache: true
          context: ./docker/${{ matrix.image-name }}
          push: true
          tags: ${{ env.REGISTRY }}/${{ matrix.image-name }}:latest
17 changes: 17 additions & 0 deletions .github/workflows/pr_quality.yml
@@ -31,3 +31,20 @@ jobs:
isort --check-only examples tests src utils scripts
flake8 examples tests src utils scripts
doc-builder style src/diffusers docs/source --max_len 119 --check_only --path_to_docs docs/source

check_repository_consistency:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.7"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install .[quality]
- name: Check quality
run: |
python utils/check_copies.py
python utils/check_dummies.py
81 changes: 65 additions & 16 deletions .github/workflows/pr_tests.yml
@@ -10,19 +10,46 @@ concurrency:
cancel-in-progress: true

env:
OMP_NUM_THREADS: 8
MKL_NUM_THREADS: 8
DIFFUSERS_IS_CI: yes
OMP_NUM_THREADS: 4
MKL_NUM_THREADS: 4
PYTEST_TIMEOUT: 60
MPS_TORCH_VERSION: 1.13.0

jobs:
run_tests_cpu:
name: CPU tests on Ubuntu
runs-on: [ self-hosted, docker-gpu ]
run_fast_tests:
strategy:
fail-fast: false
matrix:
config:
- name: Fast PyTorch CPU tests on Ubuntu
framework: pytorch
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_cpu
- name: Fast Flax CPU tests on Ubuntu
framework: flax
runner: docker-cpu
image: diffusers/diffusers-flax-cpu
report: flax_cpu
- name: Fast ONNXRuntime CPU tests on Ubuntu
framework: onnxruntime
runner: docker-cpu
image: diffusers/diffusers-onnxruntime-cpu
report: onnx_cpu

name: ${{ matrix.config.name }}

runs-on: ${{ matrix.config.runner }}

container:
image: python:3.7
image: ${{ matrix.config.image }}
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/

defaults:
run:
shell: bash

steps:
- name: Checkout diffusers
uses: actions/checkout@v3
@@ -31,31 +58,51 @@ jobs:

- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install torch --extra-index-url https://download.pytorch.org/whl/cpu
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate
python -m pip install -U git+https://github.com/huggingface/transformers

- name: Environment
run: |
python utils/print_env.py

- name: Run all fast tests on CPU
- name: Run fast PyTorch CPU tests
if: ${{ matrix.config.framework == 'pytorch' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/

- name: Run fast Flax TPU tests
if: ${{ matrix.config.framework == 'flax' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Flax" \
--make-reports=tests_${{ matrix.config.report }} \
tests/

- name: Run fast ONNXRuntime CPU tests
if: ${{ matrix.config.framework == 'onnxruntime' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=tests_torch_cpu tests/
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/

- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_torch_cpu_failures_short.txt
run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt

- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: pr_torch_cpu_test_reports
name: pr_${{ matrix.config.report }}_test_reports
path: reports

run_tests_apple_m1:
name: MPS tests on Apple M1
run_fast_tests_apple_m1:
name: Fast PyTorch MPS tests on MacOS
runs-on: [ self-hosted, apple-m1 ]

steps:
@@ -80,16 +127,18 @@ jobs:
${CONDA_RUN} python -m pip install --upgrade pip
${CONDA_RUN} python -m pip install -e .[quality,test]
${CONDA_RUN} python -m pip install --pre torch==${MPS_TORCH_VERSION} --extra-index-url https://download.pytorch.org/whl/test/cpu
${CONDA_RUN} python -m pip install git+https://github.com/huggingface/accelerate
${CONDA_RUN} python -m pip install -U git+https://github.com/huggingface/transformers

- name: Environment
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python utils/print_env.py

- name: Run all fast tests on MPS
- name: Run fast PyTorch tests on M1 (MPS)
shell: arch -arch arm64 bash {0}
run: |
${CONDA_RUN} python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=tests_torch_mps tests/
${CONDA_RUN} python -m pytest -n 0 -s -v --make-reports=tests_torch_mps tests/

- name: Failure short reports
if: ${{ failure() }}
94 changes: 72 additions & 22 deletions .github/workflows/push_tests.yml
@@ -6,19 +6,46 @@ on:
- main

env:
DIFFUSERS_IS_CI: yes
HF_HOME: /mnt/cache
OMP_NUM_THREADS: 8
MKL_NUM_THREADS: 8
PYTEST_TIMEOUT: 1000
RUN_SLOW: yes

jobs:
run_tests_single_gpu:
name: Diffusers tests
runs-on: [ self-hosted, docker-gpu, single-gpu ]
run_slow_tests:
strategy:
fail-fast: false
matrix:
config:
- name: Slow PyTorch CUDA tests on Ubuntu
framework: pytorch
runner: docker-gpu
image: diffusers/diffusers-pytorch-cuda
report: torch_cuda
- name: Slow Flax TPU tests on Ubuntu
framework: flax
runner: docker-tpu
image: diffusers/diffusers-flax-tpu
report: flax_tpu
- name: Slow ONNXRuntime CUDA tests on Ubuntu
framework: onnxruntime
runner: docker-gpu
image: diffusers/diffusers-onnxruntime-cuda
report: onnx_cuda

name: ${{ matrix.config.name }}

runs-on: ${{ matrix.config.runner }}

container:
image: nvcr.io/nvidia/pytorch:22.07-py3
options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache
image: ${{ matrix.config.image }}
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ ${{ matrix.config.runner == 'docker-tpu' && '--privileged' || '--gpus 0'}}

defaults:
run:
shell: bash

steps:
- name: Checkout diffusers
@@ -27,45 +54,69 @@ jobs:
fetch-depth: 2

- name: NVIDIA-SMI
if : ${{ matrix.config.runner == 'docker-gpu' }}
run: |
nvidia-smi

- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip uninstall -y torch torchvision torchtext
python -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu116
python -m pip install -e .[quality,test]
python -m pip install git+https://github.com/huggingface/accelerate
python -m pip install -U git+https://github.com/huggingface/transformers

- name: Environment
run: |
python utils/print_env.py

- name: Run all (incl. slow) tests on GPU
- name: Run slow PyTorch CUDA tests
if: ${{ matrix.config.framework == 'pytorch' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/

- name: Run slow Flax TPU tests
if: ${{ matrix.config.framework == 'flax' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 0 \
-s -v -k "Flax" \
--make-reports=tests_${{ matrix.config.report }} \
tests/

- name: Run slow ONNXRuntime CUDA tests
if: ${{ matrix.config.framework == 'onnxruntime' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=tests_torch_gpu tests/
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/

- name: Failure short reports
if: ${{ failure() }}
run: cat reports/tests_torch_gpu_failures_short.txt
run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt

- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: torch_test_reports
name: ${{ matrix.config.report }}_test_reports
path: reports

run_examples_tests:
name: Examples PyTorch CUDA tests on Ubuntu

runs-on: docker-gpu

run_examples_single_gpu:
name: Examples tests
runs-on: [ self-hosted, docker-gpu, single-gpu ]
container:
image: nvcr.io/nvidia/pytorch:22.07-py3
options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache
image: diffusers/diffusers-pytorch-cuda
options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/

steps:
- name: Checkout diffusers
@@ -79,10 +130,9 @@ jobs:

- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip uninstall -y torch torchvision torchtext
python -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu116
python -m pip install -e .[quality,test,training]
python -m pip install git+https://github.com/huggingface/accelerate
python -m pip install -U git+https://github.com/huggingface/transformers

- name: Environment
run: |
@@ -92,11 +142,11 @@ jobs:
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=examples_torch_gpu examples/
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=examples_torch_cuda examples/

- name: Failure short reports
if: ${{ failure() }}
run: cat reports/examples_torch_gpu_failures_short.txt
run: cat reports/examples_torch_cuda_failures_short.txt

- name: Test suite reports artifacts
if: ${{ always() }}
4 changes: 3 additions & 1 deletion .gitignore
@@ -163,4 +163,6 @@ tags
*.lock

# DS_Store (MacOS)
.DS_Store
.DS_Store
# RL pipelines may produce mp4 outputs
*.mp4
1 change: 1 addition & 0 deletions Makefile
@@ -67,6 +67,7 @@ fixup: modified_only_fixup extra_style_checks autogenerate_code repo-consistency
# Make marked copies of snippets of codes conform to the original

fix-copies:
python utils/check_copies.py --fix_and_overwrite
python utils/check_dummies.py --fix_and_overwrite

# Run tests for the library