I'm sorry to post the question directly here; I also asked it on Stack Overflow: https://stackoverflow.com/questions/46901992/how-to-use-mix-link-multi-cpu-of-parallel-computing-in-chainer-v2-1-0
My paper submission deadline is November 8, so the situation is urgent and I'm in a hurry. Please forgive me for asking here as well.
In my research, I wrote a two-layer neural network. The bottom layer is an RNN that runs on the GPU; the top layer runs on the CPU (the nature of the algorithm makes it better suited to the CPU), and I implemented it as a self-defined Chainer Link.
However, the CPU layer is slow, and I can't afford the wait before my paper deadline, so I want to apply parallel computing to this layer.
What is the best practice and the fastest way to parallelize this Link?
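For context, one simple way to get CPU-level parallelism for a per-sample computation is to split the batch across worker processes with Python's `multiprocessing`. This is only a minimal sketch, not Chainer-specific: `cpu_layer` below is a hypothetical stand-in for the custom Link's per-sample forward computation, and real use would need to feed the results back into the computational graph.

```python
import numpy as np
from multiprocessing import Pool

def cpu_layer(sample):
    # Hypothetical placeholder for the self-defined Link's per-sample
    # computation; replace with the actual CPU-side logic.
    return np.tanh(sample).sum()

def parallel_forward(batch, n_workers=4):
    # Process each sample in a separate worker process. This helps when
    # the per-sample work is heavy enough to outweigh the cost of
    # pickling inputs and outputs between processes.
    with Pool(n_workers) as pool:
        return pool.map(cpu_layer, batch)

if __name__ == "__main__":
    batch = [np.random.randn(1000).astype(np.float32) for _ in range(8)]
    outputs = parallel_forward(batch)
    print(len(outputs))  # one output per sample in the batch
```

Whether this beats a vectorized single-process NumPy implementation depends on the workload; batch-level data parallelism only pays off when each sample's computation is expensive and independent.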
Our naive communicator can be used for models that reside on CPUs. It has not been tested with models that use both CPUs and GPUs, but I think it will work. Please consult our tutorial and the MNIST example. Thanks.