Multiple GPU training failed #40
Comments
You did not set the launcher. You should add the launcher option to your command.
I also found this solution in the docs, but I think this argument is not explained clearly enough for demonstrating distributed training.
dist_train.sh actually sets this option for you.
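For context, dist_train.sh in OpenMMLab-style repos is a thin wrapper that launches tools/train.py through torch.distributed.launch and appends --launcher pytorch. A hedged sketch of the equivalent invocation (the config path and GPU count are placeholders, and the command is only echoed here rather than executed):

```shell
# Sketch of what dist_train.sh typically runs (assumption: OpenMMLab-style repo).
# tools/train.py and configs/my_config.py are placeholder paths.
GPUS=2
CMD="python -m torch.distributed.launch --nproc_per_node=${GPUS} tools/train.py configs/my_config.py --launcher pytorch"
# Echo instead of executing, since the placeholder paths do not exist here.
echo "${CMD}"
```

If I recall the mim CLI correctly, the equivalent through mim itself would pass the same flag, e.g. `mim train <package> <config> --gpus 2 --launcher pytorch`; check `mim train --help` for the exact options in your version.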
Thanks for your reply. mim is excellent!
I tried to use 2 GPUs for training, but it raised an error:
However, I didn't manually set the distributed method in my own code. It seems that mim uses train.py instead of dist_train.sh. How can I fix this?
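The failure mode described above comes down to which branch train.py takes on its launcher argument. A minimal, hypothetical sketch of that branch (the function name and return shape are my own, not OpenMMLab's API; the env-var names are the ones torch.distributed.launch actually exports):

```python
import os


def resolve_launch_mode(launcher=None):
    """Hypothetical sketch of the launcher branch in an OpenMMLab-style train.py.

    launcher=None      -> plain single-process training (the fallback when
                          --launcher is not passed, which breaks multi-GPU runs)
    launcher='pytorch' -> read the env vars torch.distributed.launch exports
                          for each worker process
    """
    if launcher is None:
        # No launcher: train.py assumes a single, non-distributed process.
        return {"distributed": False, "rank": 0, "world_size": 1}
    if launcher == "pytorch":
        # torch.distributed.launch sets RANK and WORLD_SIZE per worker.
        return {
            "distributed": True,
            "rank": int(os.environ.get("RANK", 0)),
            "world_size": int(os.environ.get("WORLD_SIZE", 1)),
        }
    raise ValueError(f"unsupported launcher: {launcher!r}")
```

Under this sketch, running train.py on 2 GPUs without `--launcher pytorch` lands in the non-distributed branch, which matches the reported error.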