At the end of the README under "Not yet implemented", it says "Multi-GPU". However, it seems multi-GPU support was added: https://github.com/OpenNMT/OpenNMT-py/blob/master/train.py#L368
Is this correct?

It was implemented, but the PyTorch nn.DataParallel module it uses is not very efficient for complicated RNN-heavy networks, so it didn't scale well. nn.DistributedDataParallel should be available soon (I think it's in master already) and could be a better choice, even on a single machine.
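For reference, here is a minimal sketch of the nn.DataParallel wrapping being discussed (the toy model and batch shapes are made up for illustration; this is not OpenNMT-py's actual training code):

```python
import torch
import torch.nn as nn

# A toy model standing in for a real seq2seq network.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# nn.DataParallel replicates the module on each forward pass and scatters
# the batch across visible GPUs; with no GPUs it simply runs on CPU.
# The per-step replication/scatter/gather overhead is part of why it
# scales poorly for RNN-heavy models.
dp_model = nn.DataParallel(model)

out = dp_model(torch.randn(32, 8))
print(out.shape)  # torch.Size([32, 4])

# The alternative mentioned above, nn.DistributedDataParallel, is used
# with one process per GPU after torch.distributed.init_process_group(),
# e.g.:
#   torch.distributed.init_process_group(backend="nccl")
#   ddp_model = nn.parallel.DistributedDataParallel(model)
# It keeps a persistent replica per process and only synchronizes
# gradients, which is why it tends to scale better even on one machine.
```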