[BUG] DeepSpeed does not update the model when using "Qwen/Qwen2.5-3B" but is fine with "Qwen/Qwen2.5-1.5B" #7077

@MiladInk

Description

Describe the bug
I know this sounds very weird. However, when I use DeepSpeed to optimize a "Qwen/Qwen2.5-3B" model, the model does not update at all. The exact same training code works with "Qwen/Qwen2.5-1.5B". I also checked "meta-llama/Llama-3.2-3B", and optimizing it does not work either: the parameters remain exactly the same. However, just setting "torch_adam" to true makes the issue go away.
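To illustrate the reported workaround, here is a minimal sketch of a DeepSpeed-style JSON config with `torch_adam` enabled, which selects the plain `torch.optim.Adam` implementation instead of DeepSpeed's fused Adam. The batch size and learning rate below are illustrative placeholders, not values from this report.

```python
import json

# Sketch of a DeepSpeed config applying the reported workaround:
# "torch_adam": true falls back to torch.optim.Adam instead of the
# fused Adam kernel, which the reporter says makes parameters update
# again. Hyperparameters here are illustrative.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "optimizer": {
        "type": "Adam",
        "params": {
            "lr": 1e-5,
            "torch_adam": True,  # workaround from this issue
        },
    },
}

print(json.dumps(ds_config, indent=2))
```

This config would typically be passed to `deepspeed.initialize(..., config=ds_config)`; with `torch_adam` left at its default (false), the reporter observes no parameter updates for the 3B models.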

Metadata

Labels

bug (Something isn't working), training
