
[Bug] Full Finetune: Tensors of floating point dtype can require gradients #2613

@charliedream1

Description

  1. Did you update? pip install --upgrade unsloth unsloth_zoo
  2. Colab or Kaggle or local / cloud?
  3. Number of GPUs used (check with nvidia-smi)
  4. Which notebook?
  5. Paste the Unsloth printout with the 🦥 sloth emoji
  6. Which trainer? SFTTrainer, GRPOTrainer, etc.
  7. Minimal code to reproduce the error (remove your Hugging Face token!)

For quick replies, go to https://discord.com/invite/unsloth.
Have you tried https://docs.unsloth.ai/basics/errors-troubleshooting?

For a full finetune, it fails with: `RuntimeError: only Tensors of floating point dtype can require gradients`
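
For context, this RuntimeError comes from PyTorch itself: it is raised whenever `requires_grad` is enabled on a tensor whose dtype is not floating point. A minimal standalone reproduction, independent of Unsloth:

```python
import torch

# Floating-point tensors may track gradients.
w = torch.zeros(4, dtype=torch.float32)
w.requires_grad_(True)  # fine

# Non-floating-point tensors may not; this line raises the
# RuntimeError quoted above.
i = torch.zeros(4, dtype=torch.int64)
i.requires_grad_(True)
```

So the full-finetuning path is presumably calling `requires_grad_(True)` on at least one parameter that was loaded with a non-float dtype.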

Model used: unsloth/Qwen3-0.6B-Base (downloaded manually from the website)

The model is loaded as follows:

```python
from unsloth import FastLanguageModel

# mdl_path is the local path to the manually downloaded checkpoint;
# max_seq_length is defined earlier in the script.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = mdl_path,
    max_seq_length = max_seq_length,
    dtype = None,
    load_in_4bit = False,
    full_finetuning = True, # We have full finetuning now!
)
```
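
If the model loads and the error only appears once training starts, one quick diagnostic (a sketch, not part of the original report) is to list any parameters that end up with a non-floating dtype after loading, since those are the ones the full-finetuning path cannot mark as trainable:

```python
# Print every parameter that cannot carry gradients due to its dtype.
for name, param in model.named_parameters():
    if not param.is_floating_point():
        print(name, param.dtype)
```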
