transformers requires torch >= 2.1.0 to run fp8 models, but I'm using 2.7.0 #38034

@O5-7

Description

System Info

python = 3.9
torch = 2.7.0+cu128
transformers = 4.51.3

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

  • Download Qwen3-1.7B-FP8
  • Run the quick start with the local model
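A minimal sketch of those two steps, assuming the checkpoint was downloaded to a local directory (the path below is an assumption, not from the report) and loaded with the standard quick-start API:

```python
# Hypothetical reproduction sketch: load a locally downloaded
# Qwen3-1.7B-FP8 checkpoint via the usual transformers quick start.
# The local path is an assumption for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./Qwen3-1.7B-FP8"  # assumed local download location

tokenizer = AutoTokenizer.from_pretrained(model_path)
# from_pretrained() calls hf_quantizer.validate_environment(),
# which is where the ImportError in the traceback below is raised.
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto")
```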

The result is:

  File "D:\anaconda3\lib\site-packages\transformers\models\auto\auto_factory.py", line 571, in from_pretrained
    return model_class.from_pretrained(
  File "D:\anaconda3\lib\site-packages\transformers\modeling_utils.py", line 279, in _wrapper
    return func(*args, **kwargs)
  File "D:\anaconda3\lib\site-packages\transformers\modeling_utils.py", line 4228, in from_pretrained
    hf_quantizer.validate_environment(
  File "D:\anaconda3\lib\site-packages\transformers\quantizers\quantizer_finegrained_fp8.py", line 36, in validate_environment
    raise ImportError(
ImportError: Using fp8 quantization requires torch >= 2.1.0Please install the latest version of torch ( pip install --upgrade torch )

I'm using torch 2.7.0; I will try a lower version later.
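The quantizer's guard is presumably a plain version comparison; a quick sanity check (a sketch, assuming PEP 440 semantics via `packaging.version`) confirms that the reported version string does satisfy the stated minimum, so the check should not have failed:

```python
# Sanity check: a torch version string like "2.7.0+cu128" compares
# as >= "2.1.0" under PEP 440, so a version guard ought to pass.
from packaging import version

installed = version.parse("2.7.0+cu128")  # version from this report
required = version.parse("2.1.0")         # minimum named in the error

print(installed >= required)  # → True
```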

Expected behavior

No error; the model loads, since torch 2.7.0 satisfies the stated >= 2.1.0 requirement.
