Add flux example #2311
Conversation
Signed-off-by: Mengni Wang <mengni.wang@intel.com>
PR Reviewer Guide 🔍
Here are some key observations to aid the review process:

PR Code Suggestions ✨
examples/pytorch/diffusion_model/diffusers/flux/requirements.txt
Please add Flux to this table: https://github.com/intel/neural-compressor/tree/master/examples#quantization
User description
Type of Change
example
Description
detail description
Expected Behavior & Potential Risk
the expected behavior triggered by this PR
How has this PR been tested?
how to reproduce the test (including hardware information)
Dependency Change?
any library dependency introduced or removed
PR Type
Enhancement
Description
Added parameters for diffusion control in quantization
Updated initialization and conversion methods to include new parameters
Modified autoround quantize entry to handle new parameters
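The three description bullets above can be illustrated with a minimal sketch. Everything below is hypothetical: the class name, default values, and parameter handling are assumptions based on the PR description, not the actual neural-compressor implementation.

```python
# Hypothetical sketch of the diffusion-control parameters described above.
# Class name, defaults, and behavior are assumptions, not the real AutoRound code.

class DiffusionAwareQuantizer:
    """Toy quantizer holding the diffusion-control knobs this PR describes."""

    def __init__(self, guidance_scale=7.5, num_inference_steps=50, generator_seed=None):
        # New parameters controlling the diffusion run used for calibration
        self.guidance_scale = guidance_scale
        self.num_inference_steps = num_inference_steps
        self.generator_seed = generator_seed

    def convert(self, model, pipeline=None, **kwargs):
        # The real convert would drive `pipeline` (e.g. a diffusers pipeline)
        # during calibration; this stub only collects the effective settings.
        settings = {
            "guidance_scale": kwargs.get("guidance_scale", self.guidance_scale),
            "num_inference_steps": kwargs.get("num_inference_steps", self.num_inference_steps),
            "generator_seed": self.generator_seed,
        }
        return model, settings


q = DiffusionAwareQuantizer(guidance_scale=3.5, num_inference_steps=28, generator_seed=0)
model, settings = q.convert(model="fake-model")
```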
Diagram Walkthrough
File Walkthrough
autoround.py
Add diffusion parameters to autoround
neural_compressor/torch/algorithms/weight_only/autoround.py
- Added guidance_scale, num_inference_steps, and generator_seed to __init__
- Updated the convert method to accept a pipeline and use the new parameters

algorithm_entry.py
Update autoround entry for diffusion
neural_compressor/torch/quantization/algorithm_entry.py
- Added dataset, guidance_scale, num_inference_steps, and generator_seed to autoround_quantize_entry
- Updated the get_quantizer call to include the new parameters

quantize.py
Allow additional keyword arguments in convert
neural_compressor/torch/quantization/quantize.py
- Added **kwargs to the convert method
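As a rough sketch of how the walkthrough's three changes fit together: the entry function accepts the new arguments, forwards them when building the quantizer, and convert takes **kwargs so algorithm-specific options pass through without signature changes. All helper names below other than the file paths listed above are hypothetical stand-ins, not the actual neural-compressor APIs.

```python
# Hypothetical end-to-end wiring of the three changes in the walkthrough.
# None of these helpers are the real neural-compressor functions.

def get_quantizer(**params):
    # Stand-in factory; the real one resolves a registered quantizer class.
    class _Quantizer:
        def __init__(self, **p):
            self.params = p

        def convert(self, model, **kwargs):
            # Mirrors the quantize.py change: **kwargs lets callers pass
            # algorithm-specific options through without changing the signature.
            return {"model": model, **self.params, **kwargs}

    return _Quantizer(**params)


def autoround_quantize_entry(model, dataset=None, guidance_scale=7.5,
                             num_inference_steps=50, generator_seed=None):
    # Mirrors the algorithm_entry.py change: the new diffusion arguments are
    # accepted here and forwarded to the quantizer.
    quantizer = get_quantizer(
        guidance_scale=guidance_scale,
        num_inference_steps=num_inference_steps,
        generator_seed=generator_seed,
    )
    return quantizer.convert(model, dataset=dataset)


result = autoround_quantize_entry("fake-model", dataset="coco", guidance_scale=3.5)
```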