Status: Open
Labels
bug (Something isn't working), stale (Issues that haven't received updates)
Description
Describe the bug
Calling save_pretrained on a QuantizedFluxTransformer2DModel produced with optimum-quanto fails: safetensors refuses to serialize the quantized weights because their inner `_data` tensors are non-contiguous.
Reproduction
import torch
from diffusers import FluxTransformer2DModel
from optimum.quanto import QuantizedDiffusersModel, qfloat8

class QuantizedFluxTransformer2DModel(QuantizedDiffusersModel):
    base_class = FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", subfolder="transformer", torch_dtype=torch.bfloat16,
).to("cuda")
qtransformer = QuantizedFluxTransformer2DModel.quantize(transformer, weights=qfloat8)
# Making the top-level parameters contiguous does not help:
# for param in qtransformer.parameters():
#     param.data = param.data.contiguous()
qtransformer.save_pretrained("fluxfill_transformer_fp8")
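The commented-out loop above does not help because the non-contiguous tensor is not `param.data` itself but an inner tensor wrapped inside the quantized weight (`_data` in the traceback). A hedged workaround sketch in plain torch — note that the `_data` attribute name is taken from the error message, not from a documented quanto API:

```python
import torch

def make_inner_data_contiguous(module: torch.nn.Module) -> None:
    """Repack any parameter whose inner `_data` tensor is non-contiguous.

    Assumes the quantized parameter exposes its packed bytes as a plain
    `_data` tensor attribute, as the save-time traceback suggests.
    """
    for param in module.parameters():
        inner = getattr(param, "_data", None)
        if isinstance(inner, torch.Tensor) and not inner.is_contiguous():
            # .contiguous() copies the data into fresh row-major storage,
            # which is what safetensors requires before serialization.
            param._data = inner.contiguous()
```

Calling `make_inner_data_contiguous(qtransformer)` before `save_pretrained` should get past safetensors' contiguity check, though the proper fix presumably belongs in quanto's or diffusers' serialization path.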
Logs
ValueError: You are trying to save a non contiguous tensor: `time_text_embed.timestep_embedder.linear_1.weight._data` which is not allowed. It either means you are trying to save tensors which are reference of each other in which case it's recommended to save only the full tensors, and reslice at load time, or simply call `.contiguous()` on your tensor to pack it before saving.

System Info
python==3.12
torch==2.4.0 + cu121
transformers==4.47.0
optimum-quanto==0.2.6
diffusers: main branch (checkout from 12.23)
Who can help?
No response