Thanks for the great work on the optimizer quantization!
I'm trying to fine-tune a T5 model using the 8-bit Adam optimizer, but I'm finding the validation loss is significantly worse (roughly 10x) than with BFloat16 or FP32 optimizer states.
Training is stable in the sense that the loss steadily improves, but the starting loss is so far behind that it's not practical.
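For context, the optimizer swap I'm testing looks roughly like this (the checkpoint name and hyperparameters are placeholders, not my exact setup):

```python
import torch
import bitsandbytes as bnb
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-base").cuda()

# 8-bit Adam from bitsandbytes; drop-in replacement for torch.optim.Adam
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)

# baseline that gives the much lower val loss: 32-bit optimizer states
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```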
I'm wondering whether we therefore need to use the stable embedding layer for T5 fine-tuning, but if so, how do we do that without throwing away the already-trained embedding weights?
Or is StableEmbedding designed solely for the train-from-scratch scenario, with this high loss due to other factors (e.g., T5 was pretrained in BFloat16 rather than FP32)?
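If a per-module override is the intended path instead, I'd imagine something like the following, which keeps 32-bit optimizer states for the embedding layers while everything else stays 8-bit. This is just my guess at the pattern based on GlobalOptimManager, not something I've confirmed works for fine-tuning:

```python
import torch.nn as nn
import bitsandbytes as bnb
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Register a 32-bit optimizer-state override for every embedding layer,
# so the pretrained embedding weights are kept as-is (no StableEmbedding swap).
manager = bnb.optim.GlobalOptimManager.get_instance()
for module in model.modules():
    if isinstance(module, nn.Embedding):
        manager.register_module_override(module, "weight", {"optim_bits": 32})

model = model.cuda()
# 8-bit optimizer states everywhere else
optimizer = bnb.optim.Adam(model.parameters(), lr=1e-4, optim_bits=8)
```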
Thanks for any insights!