Description
Describe the bug
In DiffusionPipeline.from_pretrained() (pipeline_utils.py), the use_safetensors option is only passed to the downloader, not to load_sub_model(). As a result, if a safetensors file is already present in the cached folder, it is loaded even when use_safetensors=False.
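A minimal sketch of the propagation issue, using simplified stand-in functions (both load_sub_model and from_pretrained below are hypothetical stubs for illustration, not the actual diffusers implementations):

```python
# Hypothetical sketch: the flag received by the pipeline loader must be
# forwarded to the per-component loader, not used only for downloading.

def load_sub_model(name, cached_files, use_safetensors=None):
    """Stub per-component loader: picks which weights file to load."""
    safetensors = [f for f in cached_files if f.endswith(".safetensors")]
    bins = [f for f in cached_files if f.endswith(".bin")]
    if use_safetensors is False:  # an explicit opt-out must win
        return bins[0] if bins else None
    # Default behavior prefers safetensors when available
    return (safetensors or bins or [None])[0]

def from_pretrained(cached_files, use_safetensors=None):
    """Stub pipeline loader: passes the flag through to the sub-loader.
    The reported bug corresponds to dropping use_safetensors here, so the
    sub-loader falls back to its default and picks the safetensors file."""
    return load_sub_model("text_encoder", cached_files,
                          use_safetensors=use_safetensors)

files = ["model.safetensors", "pytorch_model.bin"]
print(from_pretrained(files, use_safetensors=False))  # pytorch_model.bin
print(from_pretrained(files))                         # model.safetensors
```

With the flag dropped between the two stubs, the first call would also print model.safetensors, which mirrors the behavior described below.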
Reproduction
from diffusers import StableDiffusionPipeline

model_id = "stabilityai/stable-diffusion-2"

# First call downloads the safetensors weights into the cache
StableDiffusionPipeline.from_pretrained(model_id)

# Second call: the parameter is ignored -> model.safetensors is still loaded
StableDiffusionPipeline.from_pretrained(
    model_id,
    use_safetensors=False,
)
TRANSFORMERS_VERBOSITY=debug python3 repro.py
loading weights file /home/anthonyb/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2/snapshots/1e128c8891e52218b74cde8f26dbfc701cb99d79/text_encoder/model.safetensors
Changing the default value of use_safetensors from None to False in PreTrainedModel.from_pretrained() (transformers/modeling_utils.py), to emulate what would happen if the value were propagated to the loader, shows that the safetensors file is then skipped and the model is loaded from the .bin file instead:
loading weights file /home/anthonyb/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2/snapshots/1e128c8891e52218b74cde8f26dbfc701cb99d79/text_encoder/pytorch_model.bin
Logs
System Info
diffusers version: 0.30.3