
Is BFloat16 model training supported on ARC? #545

@XinyuYe-Intel

Description


Describe the issue

Hi, I tried to finetune Llama2 on ARC with the BFloat16 data type and the AdamW optimizer using transformers.Trainer, but I hit "RuntimeError: parameter in optimizer(Adamw) is not FP32, need check", raised from intel_extension_for_pytorch/optim/_functional.py, line 1256, in adamw_step. So I want to confirm: is BFloat16 model training supported on ARC now? If not, could you please add this feature?
PS: I'm using the dev/QLLM branch, commit id d600370d44882ae8e15949452fe1d9f324cf6900.
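For context, here is a minimal sketch (not my actual training script) of the kind of setup that triggers the error: loading the model in BF16 and fine-tuning with transformers.Trainer and AdamW, with the optimizer step routed through intel_extension_for_pytorch. The model id, dummy dataset, and the use_ipex flag are illustrative assumptions only.

```python
# Minimal repro sketch (assumed setup, not the original script):
# BF16 fine-tuning with transformers.Trainer + AdamW, letting the
# Trainer hand the model/optimizer to intel_extension_for_pytorch.
# Device placement on the ARC GPU (XPU) is assumed to be handled by
# the environment / installed IPEX build.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizer has no pad token

# Load the weights directly in BFloat16, as described in the issue.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Tiny dummy dataset just so trainer.train() runs an optimizer step.
enc = tokenizer(["hello world"] * 8, return_tensors="pt", padding=True)
train_dataset = [
    {"input_ids": ids, "attention_mask": mask, "labels": ids}
    for ids, mask in zip(enc["input_ids"], enc["attention_mask"])
]

args = TrainingArguments(
    output_dir="llama2-arc-bf16",
    bf16=True,                     # BFloat16 mixed precision
    optim="adamw_torch",           # AdamW optimizer
    use_ipex=True,                 # route through intel_extension_for_pytorch (assumed)
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
# Because the parameters handed to the optimizer are BF16 rather than FP32,
# IPEX's fused adamw_step raises:
#   RuntimeError: parameter in optimizer(Adamw) is not FP32, need check
```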
