DistributedDataParallel issue on .train() #23
Hey, I'll add a disclaimer to the README to make this more obvious, but currently ColBERT can only be trained on GPU, and doesn't support
@bclavie Got it, thank you for the details! Apologies if this is a noob question; I'll use a different machine!
No problem! And (not that there's anything wrong with noob questions!) it's really on me -- this should've been documented more clearly.
I understand this may not be a RAGatouille issue, but I can't seem to get a simple training example to work. I keep running into the following error inside `trainer.train(...)`. Any ideas? Apple M3 Max chip; here is the script I'm running, using Python 3.10.
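Given the maintainer's note that ColBERT training is GPU-only, one way to avoid the confusing DistributedDataParallel failure is to fail fast before training starts. A minimal sketch, assuming the GPU-only constraint described above; `check_training_device` is a hypothetical helper, not part of the RAGatouille API:

```python
def check_training_device(cuda_available: bool) -> str:
    """Return the device to train on, or fail with an actionable message.

    Hypothetical guard for the GPU-only training constraint: ColBERT
    training currently requires a CUDA GPU, so we raise early instead of
    letting DistributedDataParallel fail deep inside trainer.train(...).
    """
    if cuda_available:
        return "cuda"
    raise RuntimeError(
        "ColBERT training currently requires a CUDA GPU; "
        "CPU and Apple Silicon (MPS) machines are not supported. "
        "Please run training on a machine with an NVIDIA GPU."
    )


# In a real training script you would pass torch's own check, e.g.:
#   import torch
#   check_training_device(torch.cuda.is_available())
#   trainer.train(...)
```

This keeps the error at the top of the script, where the cause (no CUDA device) is obvious, rather than surfacing as a cryptic distributed-training traceback.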