drop support for python < 3.11 #1805
Merged
Conversation
Qubitium pushed a commit that referenced this pull request on Sep 17, 2025:
* drop support for python < 3.11
* [CI] remove release actions for py < 3.11
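For context, a floor bump like this typically lands in the package metadata so that pip refuses to install on older interpreters. Below is a minimal sketch of that side of the change; `python_requires` and `sys.version_info` are standard setuptools/Python, but the surrounding file is illustrative, not the repository's actual setup.py:

```python
# Hypothetical sketch of raising the Python floor in a setuptools-based
# setup.py; GPTQModel's real file carries far more metadata.
import sys

from setuptools import setup

# Fail fast with a clear message instead of a late SyntaxError/ImportError
# when someone runs the build on Python 3.10 or older.
if sys.version_info < (3, 11):
    sys.exit("gptqmodel requires Python >= 3.11")

setup(
    name="gptqmodel",
    python_requires=">=3.11",  # pip refuses older interpreters at install time
    # ... remaining metadata elided ...
)
```

The CI half of the commit is the mirror image: release workflows stop building and publishing wheels for interpreters below 3.11.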
Qubitium added a commit that referenced this pull request on Sep 18, 2025:
* add awq code
* add awq code
* config: add "zero_point" field
* add awq kernels
* add AWQuantLinear
* add awq_processor.py
* loop_processor: added pre_quantize(self, module: Module, device: torch.device)
* fix init_quant()
* cleanup
* fixed the issue where _module_forward() was too slow to execute
* fix OOM
* fix saving AWQ quantized model
* AWQProcessor: add log stats
* AWQProcessor: add calculate_w_wq_diff
* cleanup
* cleanup
* added awq code
* added AWQ format
* select_quant_linear(): added "quant_method" argument
* added AWQuantLinear_EXLLAMA
* added AWQuantLinear_ExllamaV2
* added AWQuantLinear_IPEX
* added AWQuantLinear_GEMV and AWQuantLinear_GEMVFast
* cleanup
* fix setup.py
* remove gptqmodel_ext/awq/exllama and exllamav2
* add AWQuantLinear_Marlin
* move AWQ's llama model definition
* remove hf transformers version check; always true
* add comments
* add comments
* template for dynamic awq rules
* reset
* fix depth
* cleanup last_module
* fix last non-quantized module not stripped for !
* allow non-quantized modules to be part of a subset, for models that have executing but non-quantized modules within the same subset
* fix get_layers_for_scaling()
* BaseGPTQModel: add awq_get_modules_for_scaling()
* unify module declaration with new tree
* fix wrong tree passed
* fix ! skipped
* fix awq_get_modules_for_scaling()
* comment out assert_awq_linear()
* if the model uses GQA (Grouped Query Attention), attention out will be skipped
* refactor: move dynamic layer modules code inside base; expose `simple_layer_modules()` and `full_layer_modules()` api
* fix: need to use classmethod for helpers
* refactor: moe module list creation
* refactor: moe module list creation

  Conflicts:
    gptqmodel/models/base.py

* mod qwen3_moe
* use model_config
* fix missing parameter: fail_safe
* fix
* cleanup
* rename attention_out_module to shape_must_match_previous
* dedup: embed_modules, merged with base_modules
* fix moe
* fix qwen3 moe
* dedup: remove `layers_node` property; dynamically generate it from tree
* fix group
* qwen3-moe: support AWQ
* add filter_not_quantize_module()
* cleanup
* full_layer_modules() also needs to generate moe modules
* get the first layer to determine layer type
* qwen3_moe declares "shape_must_match_previous"
* fix moe modules
* rename BaseGPTQModel to BaseQModel
* rename model defs
* rename model defs
* remove static layer_type
* deprecate old api
* dynamically get base_modules
* use ugly long name for clearer meaning
* dedup llama defs
* missed prop removed but not from base
* build_moe_modules_if_need(): add "is_awq_quantize" argument
* awq_get_modules_for_scaling() needs to skip the "mlp.gate" module
* fix error: module2inspect is None
* fix model load
* only the first node needs kwargs
* rename `torch_dtype` to `dtype` to sync with hf transformers (#1804)
* drop support for python < 3.11 (#1805)
  * drop support for python < 3.11
  * [CI] remove release actions for py < 3.11
* hard-deprecate ipex; Intel has deprecated ipex in favor of torch fused kernels for pytorch >= 2.8 (#1807)

  Conflicts:
    gptqmodel/models/base.py
    gptqmodel/models/loader.py
    gptqmodel/utils/importer.py
    gptqmodel/utils/model.py

* clean
* rename
* rename
* fix group
* update _layers_modules_tree
* fixed awq_get_modules_for_scaling() error regarding "mlp.experts.{i}.down_proj"
* update
* fix inp shape error
* update
* fix
* fix module shape error
* fix
* fix
* fix
* clean
* update
* update
* update
* adjust the order of q/k/v and gate/up
* fix wrong quant_method
* add test_awq_moe.py
* cleanup
* cleanup
* fix deepseekv2/v3
* cleanup
* fix layer0
* fix FORMAT.GEMV
* fix norm
* fix norm
* rename
* format
* format
* rename
* fix FORMAT.GEMV_FAST
* cleanup

Signed-off-by: ZX-ModelCloud <zx@modelcloud.ai>
Signed-off-by: Qubitium <Qubitium@modelcloud.ai>
Co-authored-by: Qubitium <Qubitium@modelcloud.ai>
Co-authored-by: LRL-ModelCloud <lrl@lbx.dev>
Co-authored-by: CSY-ModelCloud <csy@modelcloud.ai>
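One recurring item in the commit above is the new "zero_point" config field. In AWQ-style formats this selects asymmetric quantization: each weight group stores an integer zero point alongside its scale, so the full unsigned integer range can represent weights that are not centered on zero. A minimal sketch of the arithmetic follows; it assumes nothing about GPTQModel's actual kernels or tensor packing and is illustrative only:

```python
# Minimal sketch of zero-point (asymmetric) per-group weight quantization,
# the scheme a "zero_point" config flag typically toggles. Illustrative
# numerics only; real AWQ kernels operate on packed tensors.
import torch

def quantize_asymmetric(w: torch.Tensor, bits: int = 4, group_size: int = 128):
    """Quantize a [out, in] weight matrix per group with a zero point."""
    out_features, in_features = w.shape
    wg = w.reshape(out_features, in_features // group_size, group_size)
    w_min = wg.amin(dim=-1, keepdim=True)
    w_max = wg.amax(dim=-1, keepdim=True)
    qmax = 2 ** bits - 1                       # 15 for 4-bit
    scale = (w_max - w_min).clamp(min=1e-5) / qmax
    zero = (-w_min / scale).round()            # integer zero point per group
    q = (wg / scale + zero).round().clamp(0, qmax)
    return q.to(torch.uint8), scale, zero

def dequantize(q, scale, zero):
    return (q.float() - zero) * scale

w = torch.randn(256, 256)
q, s, z = quantize_asymmetric(w)
w_hat = dequantize(q, s, z).reshape_as(w)
print((w - w_hat).abs().mean())  # small reconstruction error
```

Production kernels such as the GEMM/GEMV paths named in the commit pack the 4-bit values into int32 words and fuse dequantization into the matmul; the loop above only shows the numerics the packed formats encode.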