FasterDiT
Repositories
- xDiT (Python, fork of xdit-project/xDiT): a scalable inference engine for Diffusion Transformers (DiTs) with massive parallelism. A conceptual context-parallel sketch follows this list.
- onediff (Jupyter Notebook, fork of siliconflow/onediff): an out-of-the-box acceleration library for diffusion models. A usage sketch follows this list.
- ParaAttention (Python, fork of chengzeyi/ParaAttention): context-parallel attention that accelerates DiT model inference with dynamic caching (https://wavespeed.ai/). See the context-parallel sketch after this list.
- SageAttention (CUDA, fork of thu-ml/SageAttention): quantized attention that achieves speedups of 2.1-3.1x over FlashAttention2 and 2.7-5.1x over xformers, respectively, without losing end-to-end metrics across various models. A usage sketch follows this list.
- DiffSynth-Studio (Python, fork of modelscope/DiffSynth-Studio): enjoy the magic of diffusion models!
- ViDiT-Q (Python, fork of FasterProcess/ViDiT-Q): [ICLR'25] efficient and accurate quantization of diffusion transformers for image and video generation. An illustrative quantization sketch follows this list.
- PaddleMIX (fork of PaddlePaddle/PaddleMIX): Paddle multimodal integration and exploration, supporting mainstream multimodal tasks, including end-to-end large-scale multimodal pretraining models and a diffusion model toolbox, with high performance and flexibility.
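xDiT and ParaAttention both build on context (sequence) parallelism: the token sequence is sharded across devices so each one computes attention only for its shard. The sketch below is not either project's API; it is a minimal illustration in plain torch.distributed, where every rank all-gathers the K/V shards and attends with its local queries. All names and shapes are hypothetical; launch with, e.g., torchrun --nproc_per_node=2 cp_demo.py.

```python
# Illustrative only: neither xDiT's nor ParaAttention's actual API.
# Each rank owns a shard of the sequence, all-gathers K/V, and runs
# attention with its local queries; outputs stay sharded by rank.
import torch
import torch.distributed as dist
import torch.nn.functional as F

dist.init_process_group(backend="gloo")  # use "nccl" on GPUs
rank, world = dist.get_rank(), dist.get_world_size()

heads, local_len, dim = 8, 256, 64  # each rank holds local_len tokens
q = torch.randn(1, heads, local_len, dim)
k = torch.randn(1, heads, local_len, dim)
v = torch.randn(1, heads, local_len, dim)

# Gather every rank's K/V shard to reconstruct the full sequence.
k_parts = [torch.empty_like(k) for _ in range(world)]
v_parts = [torch.empty_like(v) for _ in range(world)]
dist.all_gather(k_parts, k)
dist.all_gather(v_parts, v)
k_full = torch.cat(k_parts, dim=2)
v_full = torch.cat(v_parts, dim=2)

# Local queries attend over the full key/value sequence.
out = F.scaled_dot_product_attention(q, k_full, v_full)
print(f"rank {rank}: output shard {tuple(out.shape)}")
dist.destroy_process_group()
```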
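For onediff, a minimal sketch of its one-call pipeline compilation, following the upstream siliconflow/onediff README; exact module paths and behavior may differ across versions, and the prompt and model ID are placeholders.

```python
# A minimal sketch of OneDiff acceleration (API per the upstream
# siliconflow/onediff README; may vary by release).
import torch
from diffusers import StableDiffusionXLPipeline
from onediffx import compile_pipe  # OneDiff's diffusers extension

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# One call wraps the pipeline's compute-heavy submodules with
# OneDiff's compiled backends; subsequent calls run faster.
pipe = compile_pipe(pipe)
image = pipe("a photo of a cat", num_inference_steps=30).images[0]
image.save("cat.png")
```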
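For SageAttention, a minimal sketch of its drop-in use in place of torch.nn.functional.scaled_dot_product_attention, following the upstream thu-ml/SageAttention README; signatures may vary by release, and the tensor shapes are hypothetical.

```python
# A minimal sketch of SageAttention as a drop-in replacement for
# F.scaled_dot_product_attention (per the thu-ml/SageAttention README).
import torch
from sageattention import sageattn

# Hypothetical shapes: (batch, heads, seq_len, head_dim), "HND" layout.
q = torch.randn(2, 8, 4096, 64, dtype=torch.float16, device="cuda")
k = torch.randn(2, 8, 4096, 64, dtype=torch.float16, device="cuda")
v = torch.randn(2, 8, 4096, 64, dtype=torch.float16, device="cuda")

# The kernel quantizes Q/K internally, so it runs faster than FP16
# attention while preserving end-to-end output quality.
out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
```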
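For ViDiT-Q, the sketch below is not the project's actual pipeline; it is a toy per-channel symmetric W8A8 scheme showing the basic kind of post-training quantization applied to a DiT linear layer, on top of which the paper adds finer-grained, timestep-aware techniques. All shapes and names are illustrative.

```python
# Illustrative only: a toy W8A8 linear-layer quantization, not ViDiT-Q.
import torch

def quantize_per_channel(w: torch.Tensor):
    """Symmetric int8 quantization with one scale per output channel."""
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

w = torch.randn(1024, 1024)      # a DiT linear weight, fp32
x = torch.randn(16, 1024)        # activations for one token batch

wq, w_scale = quantize_per_channel(w)
x_scale = x.abs().amax() / 127.0  # per-tensor scale for activations
xq = torch.clamp((x / x_scale).round(), -127, 127).to(torch.int8)

# Integer matmul, then dequantize with the product of the scales.
y_int = xq.to(torch.int32) @ wq.to(torch.int32).T
y = y_int.to(torch.float32) * x_scale * w_scale.T

ref = x @ w.T
print("max abs error:", (y - ref).abs().max().item())
```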
People
This organization has no public members.