🚀 Feature
Simplify distributed training so that users do not have to manually set up the graph store, sampler, and kvstore, which is inefficient for development and error-prone.
Motivation
The only difference between distributed and non-distributed training in PyTorch-BigGraph is adding a command-line argument "--rank rank" and a few more configs. The same training script automatically handles both situations, as in the sketch below.
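The snippet below is only an illustration of that pattern, not PyTorch-BigGraph's actual code: an optional --rank flag decides whether the script joins a process group, and everything else is shared between the two modes. The flag names, backend, and init-method default are assumptions for the sketch.

```python
import argparse

import torch.distributed as dist


def main():
    parser = argparse.ArgumentParser()
    # The only distributed-specific flag; omit it for single-machine training.
    parser.add_argument("--rank", type=int, default=-1,
                        help="worker rank; -1 means non-distributed")
    parser.add_argument("--world-size", type=int, default=1)
    parser.add_argument("--init-method", default="tcp://127.0.0.1:23456")
    args = parser.parse_args()

    distributed = args.rank >= 0
    if distributed:
        # Every worker runs this same script with a different --rank value.
        dist.init_process_group(backend="gloo",
                                init_method=args.init_method,
                                rank=args.rank,
                                world_size=args.world_size)

    # ... build graph, model, and optimizer exactly as in single-machine mode ...
    # ... run the training loop; gradients are synchronized only when distributed ...

    if distributed:
        dist.destroy_process_group()


if __name__ == "__main__":
    main()
```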
Euler requires knowing the hosts in the cluster, but its training script for launching distributed training is still very concise.
Pitch
The framework should transparently start distributed training so that users can run it with minimal effort, without worrying about the underlying details.
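A minimal sketch of what "transparent" could mean in practice, assuming a launcher such as torchrun exports RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT; the graph-store, sampler, and kvstore setup the framework would also have to hide is left out, and the wrapper name transparent_distributed is made up for illustration.

```python
import os

import torch.distributed as dist


def transparent_distributed(train_fn):
    """Run train_fn as distributed training when a launcher (e.g. torchrun)
    has exported RANK/WORLD_SIZE/MASTER_ADDR/MASTER_PORT, otherwise run it
    as ordinary single-process training. The user code never changes."""
    def wrapper(*args, **kwargs):
        if "RANK" in os.environ and "WORLD_SIZE" in os.environ:
            dist.init_process_group(backend="gloo")  # reads env:// settings
            try:
                return train_fn(*args, **kwargs)
            finally:
                dist.destroy_process_group()
        return train_fn(*args, **kwargs)
    return wrapper


@transparent_distributed
def train():
    # Identical user-level training code in both modes; the framework would
    # also start the graph store, samplers, and kvstore behind the scenes.
    ...
```

With a wrapper like this, the command a user types on a laptop and the command a launcher runs on each worker are the same; only the environment differs.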