As of now, the library only supports training on a single GPU, which can be a limiting factor when training models on large datasets. It would be nice to be able to perform distributed training across multiple GPUs.
Envisioned solution 💡:
I am thinking of integrating FSDP (Fully Sharded Data Parallel) or DDP (DistributedDataParallel) into the library.