[ distribution ] How to use multiple GPU on each replica ? #54
Comments
/cc @windreamer
I'm now wondering if my idea above is wrong; that is, whether it is better to run multiple workers, each controlling one GPU on the machine, than to have a single worker control all the GPUs on that machine.
Your idea is right. I modified the code so that each worker (replica) corresponds to one GPU, and it works for this issue.
@Stewarttzy So how much did the training speed improve? And could you show us how you implemented it, please?
I tried running 16 workers on 2 machines with K80 GPUs, and 2 ps jobs, one on each machine.
@heibaidaolx123 No, I ran into the same problem as you did: the more workers, the slower the training. @sguada I saw you say in another issue that there will be a performance update for TensorFlow; when will it be released?
Did you find a solution to your question? |
Running 2 PS is a bad idea: since the variables are assigned in round-robin fashion, all the weights go to one PS while all the biases go to the other. When using PS, make sure the load is balanced; you should be able to use either 1 PS or 3 PS for better balance. The TF 0.9 release should increase the speed, and we are working on the multi-GPU multi-replica case.
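To see why 2 PS tasks pair badly with round-robin placement, here is a small plain-Python simulation (not TensorFlow; the layer sizes and helper names are made up for illustration). Models typically create a large weight tensor followed by a tiny bias vector, so with 2 PS tasks the alternation sends every weight to one task and every bias to the other:

```python
# Illustrative sketch: round-robin variable placement across PS tasks.
# Variables are listed in creation order, alternating big weights and
# tiny biases, which is the pattern that unbalances 2 PS tasks.
def round_robin_placement(variables, num_ps):
    """Assign each variable to a PS task in creation order."""
    placement = {ps: [] for ps in range(num_ps)}
    for i, (name, size) in enumerate(variables):
        placement[i % num_ps].append((name, size))
    return placement

def ps_load(placement):
    """Total bytes parked on each PS task."""
    return {ps: sum(size for _, size in vs) for ps, vs in placement.items()}

# Hypothetical model: weight, bias, weight, bias, ... (sizes in bytes)
variables = [("w1", 1_000_000), ("b1", 1_000),
             ("w2", 4_000_000), ("b2", 2_000),
             ("w3", 9_000_000), ("b3", 3_000)]

# With 2 PS tasks, every weight lands on task 0, every bias on task 1.
print(ps_load(round_robin_placement(variables, 2)))
# With 3 PS tasks the alternation breaks and the load spreads out.
print(ps_load(round_robin_placement(variables, 3)))
```

The odd PS count breaks the weight/bias alternation, which is one reading of why 1 or 3 PS tasks balance better than 2.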
@sguada I've just tried TF 0.9, and the training remained as slow as using TF 0.8.
Is there any answer to this question?
How do you make this work? Suppose there are two machines, each with 4 GPUs, so 8 GPUs overall. Do you mean you start 8 workers to employ all the GPUs? Do you still need to explicitly assign different workers to different devices in your code, e.g. using tf.device?
@AIROBOTAI Yes, I start 8 workers and assign each worker to its device explicitly using tf.device.
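A minimal sketch of the mapping this implies, assuming 2 machines with 4 GPUs each and task indices laid out machine by machine (the helper name and layout are hypothetical, not from the thread): each worker task computes its own device string and pins its graph there with something like `with tf.device(worker_device(task_index))`:

```python
# Sketch: one worker task per GPU across 2 machines with 4 GPUs each.
# Tasks 0..3 live on machine 0, tasks 4..7 on machine 1, so the local
# GPU id is the task index modulo the GPUs per machine.
GPUS_PER_MACHINE = 4

def worker_device(task_index, gpus_per_machine=GPUS_PER_MACHINE):
    """TensorFlow-style device string for a one-worker-per-GPU layout."""
    gpu_id = task_index % gpus_per_machine
    return "/job:worker/task:%d/gpu:%d" % (task_index, gpu_id)

for task in range(8):
    print(task, worker_device(task))
```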
@sguada What's the status of this issue?
Here is my understanding of https://github.com/tensorflow/models/blob/master/slim/deployment/model_deploy.py Let's say you have 2 workers and 1 parameter server, and each worker has 4 clones (GPUs). The worker aggregates the clone gradients then sends them to the parameter server. Then the PS updates the weights. This is all good. The problem is the GPUs share the weights at the PS, so each GPU fetches weights independently at the next forward pass. This generates a lot of traffic due to communications between every GPU and the PS. It would probably be faster to limit the connections to the level of worker and PS, such that an individual GPU does not talk to the PS directly. Once the weights reach a worker, they are distributed among the clones internally. Distributed Caffe does exactly this kind of hierarchical broadcast. It saves quite a bit of network bandwidth if you have multiple GPUs in a worker.
@junshi15 I think you are right about internal weight distributing. Btw, it seems that Caffe does not have distributed version yet. |
@AIROBOTAI You are correct, the official BVLC Caffe does not extend beyond a single node. At the risk of self-promotion, I was referring to Yahoo version of it (https://github.com/yahoo/caffe/tree/master), being part of CaffeOnSpark (https://github.com/yahoo/CaffeOnSpark). Both ethernet and infiniband connections are supported. |
@junshi15 Thanks for your clarification!
Hi @ZhuFengdaaa, I found your modified distributed_train.py (if this is how you use multi-GPU on each replica) and wrote a comment there. Since distributed TF only needs one
@ZhuFengdaaa sorry, just realized that
Hi @heibaidaolx123, I'd like to know if you have tested the speed benchmark using TF
Hello, is there an example?
Hi, does anyone know how to start 2 workers where each worker controls 8 GPUs? Is there any example code to follow?
@ppwwyyxx The benchmarks only perform "in-graph replication" across the GPUs in a single worker and asynchronous training across workers.
@ZhuFengdaaa can you share your example code for "I start 8 workers, and assign each worker to each device explicitly using tf.device"? It is exactly the same problem I have met. Thanks a lot.
Closing this issue. It's straightforward to use multiple GPUs in each replica; just build a graph which assigns work to all the GPUs. There are utilities to do this (used by the inception_train file pointed to at the top). Higher-level APIs to make this easier are being worked on. Please reopen if you think it's too soon to close this.
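The idea behind "build a graph which assigns work to all the GPUs" is the multi-tower pattern: split each batch across the GPUs, compute a gradient per tower, average, and apply one shared update. Here is a plain-Python stand-in for that arithmetic (not actual TensorFlow; in real TF each tower body would sit under `with tf.device('/gpu:i')` and the utilities mentioned above handle the plumbing):

```python
# Multi-tower gradient averaging, simulated on a 1-parameter model
# y = w * x trained with mean squared error.
def tower_gradient(w, batch):
    """MSE gradient for y = w*x on one tower's shard of the batch."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def multi_tower_step(w, batch, num_towers, lr=0.01):
    """Split the batch across towers, average their gradients, update."""
    shard = len(batch) // num_towers
    grads = [tower_gradient(w, batch[i * shard:(i + 1) * shard])
             for i in range(num_towers)]
    avg_grad = sum(grads) / num_towers       # aggregate across towers
    return w - lr * avg_grad                 # single shared update

# Fit y = 3x; with equal shard sizes the averaged multi-tower gradient
# equals the full-batch gradient, so training behaves as on one device.
data = [(x, 3.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(50):
    w = multi_tower_step(w, data, num_towers=4)
print(round(w, 3))   # prints 3.0
```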
Hello, could you share with me your code for inception_distribute_train.py? I also have met this problem. Thanks very much @ZhuFengdaaa
The Code Here shows how to set up each replica with a single tower that uses one GPU. I'm wondering if there is a way to change this code a little bit to make use of multiple GPUs on one machine, like in that example.
The way I currently use all the GPUs on a worker machine is to start as many workers as there are GPUs; the workers then communicate with each other as if they were not on the same machine. That is slower than it would be if I could start one worker that controls more than one GPU.
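The workaround described above can be sketched as a cluster layout with one worker address per GPU; the dict below has the shape `tf.train.ClusterSpec` accepts, though the host names, ports, and helper name here are hypothetical:

```python
# Sketch: one worker task per GPU, so two 4-GPU machines yield eight
# worker tasks. Each GPU's worker gets its own port on its host.
def one_worker_per_gpu(hosts, gpus_per_host, base_port=2222):
    """Build a cluster dict with one worker address per GPU."""
    workers = ["%s:%d" % (host, base_port + gpu)
               for host in hosts
               for gpu in range(gpus_per_host)]
    return {"ps": ["%s:%d" % (hosts[0], base_port - 1)],
            "worker": workers}

cluster = one_worker_per_gpu(["machine-a", "machine-b"], gpus_per_host=4)
print(len(cluster["worker"]))   # 8 worker tasks for 8 GPUs
```

Each worker process would then be launched with its own task index, which is exactly why this setup pays inter-process communication costs even between GPUs on the same machine.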