Fix some little typos #17
Merged
Conversation
patil-suraj (Contributor) approved these changes on Jun 16, 2022 and left a comment:
Thanks for the fix!
Dango233 pushed a commit to Dango233/diffusers that referenced this pull request on Dec 5, 2022.
lawrence-cj added a commit to lawrence-cj/diffusers that referenced this pull request on Dec 23, 2024:
* init vila caption
* 111 (#2)
* Feat/enze (#3)
* 111
* 222
* test test
* update vila caption
* add vila code
* update caption code
* update vila stuff
* update
* gemma related
* update
* add time vae
* update train
* unrelated commit
* code update
* 1. add RMSNorm code; 2. add qk norm for cross attention; 3. add RMSNorm for y_embedder; 4. code update; 5. config update for y_norm
* tmp update for train.py
* fix t5 loading
* del unrelated files
* tmp code for norm y & model
* update (×2)
* revert model structure (some unrelated nn.Identity Norm)
* fix epoch_eta bug (cherry picked from commit 48a2c16)
* update
* add gemma config
* update
* add ldm ae
* update
* add junyu vae
* update
* get_vae code
* remove debug in train
* add config.vae_latent_dim in train.py
* commit
* tqdm optimize in [infer]
* update [infer]
* update vae store code
* update (×6)
* add readme
* update
* re-add ldm_ae
* [important] fix the serious `glumbonv` bug: change `glumbonv` to `glumbconv`
* make the model structure code more robust
* update (×7)
* 1
* set TOKENIZERS_PARALLELISM false
* update (×2)
* optimize cache log
* add parallel linear attn
* add parallel attn ref comments
* update (×2)
* update parallel attn
* update (×2)
* update text encoder system prompt
* update
* add sys prompt hashid
* update (×2)
* add test edit speed code
* add torch.sync code
* add inference for qat
* add 2k config and fix dataset bug
* update (×2)
* push 4k config
* add 4k timeshift=5 config
* add feature: dilate conv
* add flux sbatch test scripts
* update (×2)
* tmp code
* [CI-Lint] Fix code style issues with pre-commit 9fc4580380895194e461754b35cb9c904559e4e5
* clean code; mv slurm script into a folder
* [CI-Lint] Fix code style issues with pre-commit 9f1aeef955f2b1c23363fc7a00a9cef82bb6091f
* fix bug caused by merging enze's code
* mv unused model-block to other scripts
* [CI-Lint] Fix code style issues with pre-commit de3e66f6f8df2c056571387b2ad864e528bfc926
* mv unused model-block to other scripts
* code update (×2)
* [CI-Lint] Fix code style issues with pre-commit 5b2bac2e501cc6952f5c35fe4ce8fe1b98e6add8

Co-authored-by: xieenze <Johnny_ez@163.com>
Co-authored-by: GitHub Action <action@github.com>
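One item in the commit message above adds RMSNorm and qk-norm for cross attention. For readers unfamiliar with the trick, here is a minimal PyTorch sketch of RMSNorm applied to queries and keys; the module and the dimensions are illustrative assumptions, not the code from that commit:

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Root-mean-square normalization: no mean subtraction, no bias."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale by the reciprocal RMS of the last dimension, then rescale.
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * rms * self.weight

# qk-norm: normalize queries and keys before the attention dot product.
head_dim = 64  # hypothetical head size
q_norm, k_norm = RMSNorm(head_dim), RMSNorm(head_dim)
q = q_norm(torch.randn(2, 8, 16, head_dim))  # (batch, heads, seq, head_dim)
k = k_norm(torch.randn(2, 8, 16, head_dim))
```

Normalizing q and k before the dot product keeps attention logits bounded, which is why qk-norm is a common fix for attention-entropy instabilities when training large models.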
yuyanpeng-google pushed a commit to yuyanpeng-google/diffusers that referenced this pull request on Oct 30, 2025:
Add vae sharding
sayakpaul added a commit that referenced this pull request on Nov 25, 2025:
* add vae
* Initial commit for Flux 2 Transformer implementation
* add pipeline part
* small edits to the pipeline and conversion
* update conversion script
* fix
* up up
* finish pipeline
* Remove Flux IP Adapter logic for now
* Remove deprecated 3D id logic
* Remove ControlNet logic for now
* Add link to ViT-22B paper as reference for parallel transformer blocks such as the Flux 2 single stream block
* update pipeline
* Don't use biases for input projs and output AdaNorm
* up
* Remove bias for double stream block text QKV projections
* Add script to convert Flux 2 transformer to diffusers
* make style and make quality
* fix a few things
* allow sft files to go
* fix image processor
* fix batch
* style a bit
* Fix some bugs in Flux 2 transformer implementation
* Fix dummy input preparation and fix some test bugs
* fix dtype casting in timestep guidance module
* resolve conflicts
* remove ip adapter stuff
* Fix Flux 2 transformer consistency test
* Fix bug in Flux2TransformerBlock (double stream block)
* Get remaining Flux 2 transformer tests passing
* make style; make quality; make fix-copies
* remove stuff
* fix type annotation
* remove unneeded stuff from tests
* tests
* up (×2)
* add sf support
* Remove unused IP Adapter and ControlNet logic from transformer (#9)
* copied from
* Apply suggestions from code review (co-authored by YiYi Xu and apolinário)
* up (×5)
* Refactor Flux2Attention into separate classes for double stream and single stream attention
* Add _supports_qkv_fusion to AttentionModuleMixin to allow subclasses to disable QKV fusion
* Have Flux2ParallelSelfAttention inherit from AttentionModuleMixin with _supports_qkv_fusion=False
* Log debug message when calling fuse_projections on an AttentionModuleMixin subclass that does not support QKV fusion
* Address review comments
* Update src/diffusers/pipelines/flux2/pipeline_flux2.py (co-authored by YiYi Xu)
* up
* Remove maybe_allow_in_graph decorators for Flux 2 transformer blocks (#12)
* up
* support ostris loras (#13)
* up
* update schedule
* up
* up (#17)
* add training scripts (#16): training scripts, model CPU offload in validation, Flux.2 readme, img2img and tests, CPU offload in log validation, review suggestions, fixes; removed i2i training tests for now (co-authored by Linoy Tsaban)
* up

Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Daniel Gu <dgu8957@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-10-53-87-203.ec2.internal>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
Co-authored-by: linoytsaban <linoy@huggingface.co>
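The ViT-22B reference in the commit message above points at parallel transformer blocks: the attention and MLP branches read the same pre-normalized input and their outputs are summed, rather than running one after the other. A minimal sketch of that structure, with illustrative dimensions and standard PyTorch layers rather than the actual Flux 2 single stream block:

```python
import torch
import torch.nn as nn

class ParallelTransformerBlock(nn.Module):
    """ViT-22B-style block: attention and MLP share one pre-norm input
    and their outputs are added in a single residual step."""
    def __init__(self, dim: int = 512, heads: int = 8, mlp_ratio: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)                   # one shared pre-norm
        attn_out, _ = self.attn(h, h, h)   # attention branch
        return x + attn_out + self.mlp(h)  # parallel residual sum

x = torch.randn(2, 16, 512)                  # (batch, seq, dim)
print(ParallelTransformerBlock()(x).shape)   # torch.Size([2, 16, 512])
```

The appeal of the parallel layout is that the attention and MLP projections can be fused or executed concurrently, which is also why the commit list above separates single stream (parallel) attention from the sequential double stream blocks.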
No description provided.