
[Experimental] Add experimental distributed SGD API #2858

Merged (20 commits) on Sep 20, 2018
627 changes: 627 additions & 0 deletions python/ray/experimental/sgd/modified_allreduce.py

Large diffs are not rendered by default.
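The `modified_allreduce.py` diff is not rendered here. As background, the module's name refers to the allreduce pattern used to sum gradients across workers. Below is a minimal NumPy simulation of classic ring allreduce (a sketch of the general technique only, not Ray's actual implementation; the function name `ring_allreduce` is hypothetical): each worker's tensor is split into one chunk per worker, a reduce-scatter pass circulates and accumulates chunks around the ring, and an allgather pass circulates the fully reduced chunks so every worker ends with the complete sum.

```python
import numpy as np

def ring_allreduce(tensors):
    """Simulate ring allreduce over equal-length 1-D tensors.

    Returns one array per worker; each is the elementwise sum of all inputs.
    """
    n = len(tensors)
    # Each worker splits its (copied) tensor into n chunks.
    chunks = [np.array_split(t.astype(float).copy(), n) for t in tensors]

    # Reduce-scatter: at each step, worker i sends chunk (i - step) % n to
    # worker (i + 1) % n, which accumulates it. After n - 1 steps, worker i
    # holds the complete sum for chunk (i + 1) % n.
    for step in range(n - 1):
        for i in range(n):
            send = (i - step) % n
            chunks[(i + 1) % n][send] += chunks[i][send]

    # Allgather: circulate the fully reduced chunks around the ring so that
    # every worker ends up with all n summed chunks.
    for step in range(n - 1):
        for i in range(n):
            send = (i + 1 - step) % n
            chunks[(i + 1) % n][send] = chunks[i][send].copy()

    return [np.concatenate(c) for c in chunks]
```

Each worker sends and receives only `2 * (n - 1) / n` of the tensor in total, which is why ring allreduce is bandwidth-optimal compared to funneling all gradients through a single parameter server.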

496 changes: 496 additions & 0 deletions python/ray/experimental/sgd/sgd.py

Large diffs are not rendered by default.

29 changes: 29 additions & 0 deletions python/ray/experimental/sgd/test_sgd.py
@@ -0,0 +1,29 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import ray
import tensorflow as tf

from ray.experimental.sgd.tfbench.test_model import TFBenchModel
from ray.experimental.sgd.sgd import DistributedSGD

if __name__ == "__main__":
    ray.init()

    # Build one model replica per (worker, device) pair.
    model_creator = (
        lambda worker_idx, device_idx: TFBenchModel(batch=1, use_cpus=True))

    sgd = DistributedSGD(
        model_creator,
        num_workers=2,
        devices_per_worker=2,
        use_cpus=True,
        use_plasma_op=False)

    # Each step computes gradients on all replicas, averages them,
    # and applies the same update everywhere.
    for _ in range(100):
        loss = sgd.step()
        print("Current loss", loss)
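The test script above exercises the API end to end but does not show what a step does. Conceptually, a synchronous data-parallel SGD step has each worker compute a gradient on its own data shard, averages the gradients (e.g. via allreduce), and applies the same update on every replica. A toy NumPy sketch of that loop on a least-squares problem (my own illustration, assuming nothing about Ray's internals; `worker_grad`, `shards`, and `w_true` are made-up names):

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])

# Two workers, as in the test script; each holds its own data shard.
shards = []
for _ in range(2):
    X = rng.normal(size=(16, 3))
    shards.append((X, X @ w_true))

def worker_grad(w, X, y):
    # Gradient of 0.5 * mean((X @ w - y)**2) with respect to w.
    return X.T @ (X @ w - y) / len(y)

def total_loss(w):
    return sum(0.5 * np.mean((X @ w - y) ** 2) for X, y in shards)

w = np.zeros(3)
initial = total_loss(w)
for _ in range(500):
    # One synchronous "step": compute per-worker gradients, average, apply.
    grads = [worker_grad(w, X, y) for X, y in shards]
    w -= 0.1 * np.mean(grads, axis=0)
```

The averaging is the only cross-worker communication per step, which is exactly the part the allreduce module optimizes; everything else runs independently on each replica.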