Is there any experience or magic solution for doing distributed training with MXNet?
Say, is there a way to force each worker to communicate only with the parameter server on the same node, to save bandwidth? Or something like ring-allreduce (Horovod)?
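For reference, Horovod does ship an MXNet binding that replaces the parameter-server kvstore with ring-allreduce entirely. Below is a minimal sketch of what that looks like; the model, data, and hyperparameters are placeholders, not from this issue, and it assumes Horovod was built with MXNet support and launched with something like `horovodrun -np 4 python train.py`:

```python
# Sketch: ring-allreduce training via Horovod's MXNet binding.
# Model and data below are dummies for illustration only.
import mxnet as mx
import horovod.mxnet as hvd
from mxnet import autograd, gluon

hvd.init()  # one process per GPU; ranks form the allreduce ring

# Pin each process to its local GPU.
ctx = mx.gpu(hvd.local_rank())

# Placeholder model: a single dense layer.
net = gluon.nn.Dense(10)
net.initialize(ctx=ctx)

params = net.collect_params()
# Ensure every rank starts from identical weights.
hvd.broadcast_parameters(params, root_rank=0)

# DistributedTrainer averages gradients across ranks with allreduce,
# instead of pushing/pulling them through parameter servers.
trainer = hvd.DistributedTrainer(params, 'sgd', {'learning_rate': 0.01})

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
data = mx.nd.random.uniform(shape=(32, 100), ctx=ctx)  # dummy batch
label = mx.nd.random.randint(0, 10, shape=(32,), ctx=ctx).astype('float32')

with autograd.record():
    loss = loss_fn(net(data), label)
loss.backward()
trainer.step(32)  # batch size per worker
```

This sidesteps the parameter-server bandwidth question rather than answering it: with ring-allreduce each worker only exchanges gradient chunks with its ring neighbors, so there is no server process to co-locate at all.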