
Conversation

kylesayrs (Contributor) commented Sep 30, 2024

Purpose

  • Fix a bug where quantization_config is present but quant_method is missing, which causes HF transformers to fail to parse the config:
FAILED tests/llmcompressor/transformers/finetune/test_oneshot_then_finetune.py::TestOneshotThenFinetune::test_oneshot_then_finetune - ValueError: The model's quantization config from the arguments has no `quant_method` attribute. Make sure that the model has been correctly quantized
malformed_config.json
{
  "_name_or_path": "/home/ksayers/.cache/huggingface/hub/models--Xenova--llama2.c-stories15M/snapshots/ccdd47c2dc554aeecd2bb4e713e1c988f206a296",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "head_dim": 48,
  "hidden_act": "silu",
  "hidden_size": 288,
  "initializer_range": 0.02,
  "intermediate_size": 768,
  "max_position_embeddings": 256,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 6,
  "num_hidden_layers": 6,
  "num_key_value_heads": 6,
  "pretraining_tp": 1,
  "quantization_config": {
    "version": "0.6.0.20240928"
  },
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.45.0",
  "use_cache": true,
  "vocab_size": 32000
}
corrected_config.json
{
  "_name_or_path": "/home/ksayers/.cache/huggingface/hub/models--Xenova--llama2.c-stories15M/snapshots/ccdd47c2dc554aeecd2bb4e713e1c988f206a296",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "head_dim": 48,
  "hidden_act": "silu",
  "hidden_size": 288,
  "initializer_range": 0.02,
  "intermediate_size": 768,
  "max_position_embeddings": 256,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 6,
  "num_hidden_layers": 6,
  "num_key_value_heads": 6,
  "pretraining_tp": 1,
  "quantization_config": {
    "version": "0.6.0.20240928"
    "quant_method": "compressed-tensors"
  },
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.45.0",
  "use_cache": true,
  "vocab_size": 32000
}
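
For context, HF transformers validates that a quantization_config carries a quant_method key before dispatching to any quantizer. Below is a minimal sketch of that check, mirroring the error message above rather than the exact transformers source:

validate_quant_method.py (illustrative, not part of either codebase)
def validate_quantization_config(config: dict) -> str:
    quant_config = config.get("quantization_config")
    if quant_config is None:
        raise ValueError("config has no quantization_config")
    quant_method = quant_config.get("quant_method")
    if quant_method is None:
        raise ValueError(
            "The model's quantization config from the arguments has no "
            "`quant_method` attribute. Make sure that the model has been "
            "correctly quantized"
        )
    return quant_method

# The malformed config above fails this check; the corrected config passes.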

Changes

  • Always write the metadata fields (version, quant_method) if either the quantization config or the sparsity config is present
  • Unwrap these metadata fields when parsing (a minimal sketch follows this list)
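
A minimal sketch of both changes, assuming plain-dict configs. The helper names (inject_metadata, strip_metadata) and the version string are illustrative assumptions, not the actual compressed-tensors API:

metadata_roundtrip.py
from typing import Optional, Tuple

# Illustrative constants; the real values come from the package metadata
COMPRESSION_VERSION = "0.6.0.20240928"
QUANT_METHOD = "compressed-tensors"


def inject_metadata(
    quantization_config: Optional[dict],
    sparsity_config: Optional[dict],
) -> Optional[dict]:
    """Always attach version and quant_method when either config is present."""
    if quantization_config is None and sparsity_config is None:
        return None
    config = dict(quantization_config or {})
    config["version"] = COMPRESSION_VERSION
    config["quant_method"] = QUANT_METHOD
    if sparsity_config is not None:
        config["sparsity_config"] = sparsity_config
    return config


def strip_metadata(config: dict) -> Tuple[dict, Optional[str], Optional[str]]:
    """Unwrap the metadata fields before handing the rest to the parser."""
    config = dict(config)  # avoid mutating the caller's dict
    version = config.pop("version", None)
    quant_method = config.pop("quant_method", None)
    return config, version, quant_method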

Testing

  • tests/llmcompressor/transformers/finetune/test_oneshot_then_finetune.py no longer raises the missing quant_method ValueError (see the command below)
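
The failing case can be reproduced locally by running the test directly:

pytest tests/llmcompressor/transformers/finetune/test_oneshot_then_finetune.py::TestOneshotThenFinetune::test_oneshot_then_finetune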

mgoin merged commit 4a09744 into main on Oct 1, 2024 (1 check passed)
mgoin deleted the kylesayrs/require-quant_method branch on October 1, 2024 at 17:36