
qwix quantize WAN transformer #226

Merged
coolkp merged 2 commits into main from sanbao_wan
Aug 12, 2025

Conversation

@susanbao
Collaborator

Use qwix to quantize the WAN transformer

Add parameters:

  • use_qwix_quantization
  • quantization_calibration_method

When applying qwix quantization, set the parameters as follows: use_qwix_quantization=True quantization="fp8_full"
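The two flags above can be passed on the command line when launching a run. A minimal launch sketch, assuming a typical MaxDiffusion-style entry point and config path (the script name and config file here are hypothetical; only the two quantization flags come from this PR's description):

```shell
# Hypothetical invocation sketch. The script and config paths are assumptions;
# use_qwix_quantization and quantization are the flags added/used by this PR.
python src/maxdiffusion/generate_wan.py \
  src/maxdiffusion/configs/base_wan_14b.yml \
  use_qwix_quantization=True \
  quantization="fp8_full"
```

The new `quantization_calibration_method` parameter can be left at its default unless a different calibration scheme is needed.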

@github-actions

Comment thread src/maxdiffusion/pipelines/wan/wan_pipeline.py Outdated
@coolkp
Collaborator

coolkp commented Aug 11, 2025

Do we need a requirements.txt update?

Comment thread src/maxdiffusion/pipelines/wan/wan_pipeline.py
Comment thread src/maxdiffusion/pipelines/wan/wan_pipeline.py
@entrpn
Collaborator

entrpn commented Aug 11, 2025

Can we also add a unit test to verify the quantization logic? It doesn't need to load the full model.

@github-actions

@susanbao
Collaborator Author

Do we need a requirements.txt update?

Yes. Just added it.

@susanbao
Collaborator Author

Can we also add a unit test to verify the quantization logic? It doesn't need to load the full model.

I have added three unit tests covering the qwix quantization logic.

Comment thread requirements_with_jax_ai_image.txt
@coolkp
Collaborator

coolkp commented Aug 12, 2025

I manually tested this on the runner.

@coolkp coolkp merged commit de60c6c into main Aug 12, 2025
1 of 3 checks passed

3 participants