Does the test accuracy need to be synchronized in distributed.py? #1
Comments
If I directly output the test accuracy, will the code automatically synchronize the accuracy across the GPUs?

Nope, but if you really need it, you can use:

Could you give a specific example?

Hm, I'm afraid that's not right. But if you really need to synchronize the accuracy, I suggest this kind of implementation, or something else using:

I'm not sure if you understood; if not, you can directly use this code:

By the way, I don't think we really need to calculate the average accuracy during training; it is a waste of time.

Thanks!

No. That's DistributedDataParallel's job.

Thanks!
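The replies above point at synchronizing the metric with a collective op only when it is really needed. A minimal sketch of that approach, assuming the project uses `torch.distributed` (which the mention of DistributedDataParallel suggests); the function name `average_accuracy` and its arguments are illustrative, not taken from distributed.py:

```python
import torch
import torch.distributed as dist


def average_accuracy(correct: int, total: int, device: torch.device) -> float:
    """Return the global test accuracy across all ranks.

    Each rank passes its local counts; an all_reduce sums them so every
    rank ends up with the same, exact global accuracy.
    """
    # Pack both counts into one tensor so a single collective call suffices.
    stats = torch.tensor([correct, total], dtype=torch.float64, device=device)
    dist.all_reduce(stats, op=dist.ReduceOp.SUM)  # sum counts over all ranks
    return (stats[0] / stats[1]).item()
```

Reducing the raw counts rather than averaging per-rank accuracies keeps the result exact even when ranks evaluate different numbers of samples. Gradient averaging, by contrast, needs no such code at all, since DistributedDataParallel performs it during `backward()`.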