```
[127.0.0.1.10000::stderr] Traceback (most recent call last):
[127.0.0.1.10000::stderr]   File "/home/wrk/KungFu/examples/torch_elastic/torch_mnist_example.py", line 157, in <module>
[127.0.0.1.10000::stderr]     main()
[127.0.0.1.10000::stderr]   File "/home/wrk/KungFu/examples/torch_elastic/torch_mnist_example.py", line 153, in main
[127.0.0.1.10000::stderr]     train(args, model, device, optimizer, step_based_schedule)
[127.0.0.1.10000::stderr]   File "/home/wrk/KungFu/examples/torch_elastic/torch_mnist_example.py", line 71, in train
[127.0.0.1.10000::stderr]     sync_model(model)
[127.0.0.1.10000::stderr]   File "/home/wrk/KungFu/examples/torch_elastic/torch_mnist_example.py", line 48, in sync_model
[127.0.0.1.10000::stderr]     kf.broadcast_parameters(model.state_dict())
[127.0.0.1.10000::stderr]   File "/home/wrk/anaconda3/envs/py36tf13/lib/python3.6/site-packages/kungfu/torch/ops/collective.py", line 43, in broadcast_parameters
[127.0.0.1.10000::stderr]     h = inplace_broadcast_async_op(value, name)
[127.0.0.1.10000::stderr]   File "/home/wrk/anaconda3/envs/py36tf13/lib/python3.6/site-packages/kungfu/torch/ops/collective.py", line 29, in inplace_broadcast_async_op
[127.0.0.1.10000::stderr]     return broadcast_async_op_map[x.type()](x, x, x.type(), name)
[127.0.0.1.10000::stderr] KeyError: 'torch.FloatTensor'
```
@lgarithm
PyTorch on CPU is not supported.
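The `KeyError` is consistent with that reply: `collective.py` dispatches on the tensor's type string (`broadcast_async_op_map[x.type()]`), and if only CUDA tensor types are registered, a CPU tensor's type `'torch.FloatTensor'` has no entry. A minimal sketch of that failure mode, using a plain dictionary as an illustrative stand-in for KungFu's op map (the table contents here are an assumption, not KungFu's actual code):

```python
# Hypothetical stand-in for KungFu's broadcast_async_op_map:
# dispatch keyed by tensor type string, with only a CUDA entry registered.
broadcast_async_op_map = {
    'torch.cuda.FloatTensor': lambda name: 'broadcast ' + name,
}

def inplace_broadcast(tensor_type):
    # Mirrors collective.py line 29: broadcast_async_op_map[x.type()](...)
    # A CPU tensor reports type 'torch.FloatTensor', which is absent
    # from the map, so the lookup raises KeyError -- as in the traceback.
    return broadcast_async_op_map[tensor_type](tensor_type)

print(inplace_broadcast('torch.cuda.FloatTensor'))  # has an entry, succeeds
# inplace_broadcast('torch.FloatTensor')  # raises KeyError: 'torch.FloatTensor'
```

If that reading is right, the workaround is to run the example on a GPU machine and move the model to the GPU (e.g. with `model.to('cuda')`) before `sync_model` is called, so the `state_dict()` values are `torch.cuda.FloatTensor`.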