AutoencoderRAE loading error with older transformers #13279

@arijit-hub

Description

Describe the bug

Hey,

I am trying to use AutoencoderRAE, but loading it fails with the following error:

File ".venv/lib/python3.13/site-packages/transformers/models/dinov2_with_registers/modeling_dinov2_with_registers.py", line 529, in _init_weights
    ).to(module.weight.dtype)
      ^^
AttributeError: 'NoneType' object has no attribute 'to'

I think it's because of older tokenizers, transformers, and huggingface-hub versions.

I am on the following versions:

transformers==4.55.4
tokenizers==0.21.2
huggingface-hub==0.36.2
diffusers==0.37.0

Updating them to the following fixes the problem:

transformers==5.3.0
tokenizers==0.22.2
huggingface-hub==1.7.1
diffusers==0.37.0

However, updating breaks my metric script for calculating ImageReward! Is there a solution that does not require updating these packages?
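(Not a fix for the underlying bug, just a possible workaround sketch: if the RAE pipeline and the ImageReward metric script do not have to share one environment, each can keep its own compatible pins. The venv names below are hypothetical.)

```shell
# Hypothetical two-environment workaround (untested): one venv per set of pins.

# Environment for AutoencoderRAE, using the newer versions that load correctly:
python -m venv .venv-rae
.venv-rae/bin/pip install "diffusers==0.37.0" "transformers==5.3.0" \
    "tokenizers==0.22.2" "huggingface-hub==1.7.1"

# Environment for the ImageReward metric script, keeping the older versions:
python -m venv .venv-metrics
.venv-metrics/bin/pip install "diffusers==0.37.0" "transformers==4.55.4" \
    "tokenizers==0.21.2" "huggingface-hub==0.36.2"
```

Each script would then be run with the interpreter of its own venv (e.g. `.venv-rae/bin/python generate.py`).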

Reproduction

The package versions must be:

transformers==4.55.4
tokenizers==0.21.2
huggingface-hub==0.36.2
diffusers==0.37.0
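Something like the following should pin these exact versions for reproduction:

```shell
# Pin the versions under which the AutoencoderRAE load fails:
pip install "transformers==4.55.4" "tokenizers==0.21.2" \
    "huggingface-hub==0.36.2" "diffusers==0.37.0"
```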

Then simply loading the model fails:

from diffusers import AutoencoderRAE

model = AutoencoderRAE.from_pretrained(
    "nyu-visionx/RAE-dinov2-wReg-base-ViTXL-n08"
).to("cuda").eval()

Logs

File ".venv/lib/python3.13/site-packages/transformers/models/dinov2_with_registers/modeling_dinov2_with_registers.py", line 529, in _init_weights
    ).to(module.weight.dtype)
      ^^
AttributeError: 'NoneType' object has no attribute 'to'

System Info

  • 🤗 Diffusers version: 0.37.0
  • Platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.31
  • Running on Google Colab?: No
  • Python version: 3.13.5
  • PyTorch version (GPU?): 2.8.0+cu128 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.36.2
  • Transformers version: 4.55.4
  • Accelerate version: 1.13.0
  • PEFT version: not installed
  • Bitsandbytes version: not installed
  • Safetensors version: 0.7.0
  • xFormers version: not installed
  • Accelerator: NVIDIA RTX A6000, 49140 MiB
    NVIDIA RTX A6000, 49140 MiB
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No

Who can help?

@sayakpaul @kashif
