QuantizedFluxTransformer2DModel save bug #10379

@huangjun12

Description

Describe the bug

Calling `save_pretrained` on a `QuantizedFluxTransformer2DModel` (FLUX.1-Fill-dev quantized to `qfloat8` with optimum-quanto) raises a `ValueError` complaining about a non-contiguous tensor. Making the parameters contiguous with `param.data.contiguous()` before saving does not help, because the offending tensor is the inner `_data` buffer of the quantized weight, not `param.data` itself.

Reproduction

    import torch
    from diffusers import FluxTransformer2DModel
    from optimum.quanto import QuantizedDiffusersModel, qfloat8

    class QuantizedFluxTransformer2DModel(QuantizedDiffusersModel):
        base_class = FluxTransformer2DModel

    transformer = FluxTransformer2DModel.from_pretrained(
        "black-forest-labs/FLUX.1-Fill-dev", subfolder="transformer", torch_dtype=torch.bfloat16,
    ).to("cuda")

    qtransformer = QuantizedFluxTransformer2DModel.quantize(transformer, weights=qfloat8)

    # Making the outer parameter tensors contiguous does not fix the error:
    # for param in qtransformer.parameters():
    #     param.data = param.data.contiguous()

    qtransformer.save_pretrained("fluxfill_transformer_fp8")  # raises ValueError
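The traceback names `…weight._data` as the non-contiguous tensor, which suggests the packed buffer lives one level below `param.data`, so a workaround would need to repack that inner attribute rather than the parameter itself. Below is a hypothetical sketch of that idea (the helper `repack_inner_buffers` and the stub classes are my own illustration, not an optimum-quanto API); stub tensors stand in for `torch.Tensor` so the snippet runs without torch installed, but the same loop structure could be tried against the real quantized model.

    ```python
    class StubTensor:
        """Mimics the tiny slice of torch.Tensor this sketch needs."""
        def __init__(self, contiguous=True):
            self._contiguous = contiguous

        def is_contiguous(self):
            return self._contiguous

        def contiguous(self):
            # Returns a packed copy, like torch.Tensor.contiguous()
            return StubTensor(contiguous=True)

    class StubQuantizedParam:
        """Mimics a quanto weight that wraps a packed `_data` buffer."""
        def __init__(self):
            self._data = StubTensor(contiguous=False)

    def repack_inner_buffers(params):
        """Make each inner `_data` buffer contiguous before saving.

        Hypothetical helper: targets the `_data` attribute named in the
        traceback instead of the outer `param.data`.
        """
        for p in params:
            inner = getattr(p, "_data", None)
            if inner is not None and not inner.is_contiguous():
                p._data = inner.contiguous()

    params = [StubQuantizedParam() for _ in range(3)]
    repack_inner_buffers(params)
    print(all(p._data.is_contiguous() for p in params))  # True
    ```

Whether this actually unblocks `save_pretrained` depends on how quanto's serialization walks the state dict, so it is offered as a debugging direction, not a confirmed fix.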

Logs

ValueError: You are trying to save a non contiguous tensor: `time_text_embed.timestep_embedder.linear_1.weight._data` which is not allowed. It either means you are trying to save tensors which are reference of each other in which case it's recommended to save only the full tensors, and reslice at load time, or simply call `.contiguous()` on your tensor to pack it before saving.

System Info

python==3.12
torch==2.4.0 + cu121
transformers==4.47.0
optimum-quanto==0.2.6
diffusers (main branch, checked out 12/23)

Who can help?

No response

Labels

    bug (Something isn't working), stale (Issues that haven't received updates)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests

    Issue actions