How to train Llama-2-7B on a single A100 80 GB GPU?

It would be difficult to do that without a parameter-efficient fine-tuning method such as LoRA. With full fine-tuning, the fp32 parameters of the 7B model take about 28 GB and the Adam optimizer states another 56 GB, roughly 84 GB in total, which already exceeds the 80 GB of a single A100 before gradients and activations are even counted.
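For concreteness, here is a minimal LoRA setup sketched with the Hugging Face transformers and peft libraries; the model name, rank, and target modules below are illustrative choices, not the only ones that work:

```python
# Minimal LoRA fine-tuning sketch using Hugging Face transformers + peft.
# Hyperparameters and target modules are illustrative, not prescriptive.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # assumes Hub access has been granted

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # half-precision base weights: ~14 GB instead of ~28 GB
    device_map="auto",
)

# LoRA freezes the base model and trains small low-rank adapters instead,
# so optimizer states exist only for the adapter parameters.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the 7B parameters
```

With the base weights held in bf16 (about 14 GB) and optimizer states kept only for the small adapter matrices, the job fits comfortably within a single 80 GB A100, whereas full fine-tuning would not.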