[distribution] How to use multiple GPUs on each replica? #2106

Closed
ZhuFengdaaa opened this issue Apr 26, 2016 · 1 comment

@ZhuFengdaaa

The code here shows how to set up each replica with a single tower that uses one GPU. I'm wondering if there is a way to change this code a little to make use of multiple GPUs on one machine, as in that example.

The way I currently use all the GPUs on a worker machine is to start as many workers as there are GPUs; the workers then communicate with each other as if they were not on the same machine. That is slower than it would be if I could start one worker that controls more than one GPU.
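Roughly, what I'd like is one worker process that runs a tower per local GPU while the variables stay on the parameter servers. Below is an untested sketch of that pattern, combining tf.train.replica_device_setter with a CIFAR-10-style tower loop; the cluster spec, build_tower, num_gpus, and task_index are placeholders for illustration, not code taken from either example:

```python
import tensorflow as tf

# Placeholder cluster definition (assumption, not from the linked examples).
cluster = tf.train.ClusterSpec({
    "ps": ["ps0:2222"],
    "worker": ["worker0:2222", "worker1:2222"],
})

def build_tower(batch):
    # Stand-in model: a single dense layer built with tf.get_variable so that
    # the towers share weights through variable-scope reuse.
    w = tf.get_variable("w", [10, 1])
    return tf.reduce_mean(tf.square(tf.matmul(batch, w)))

num_gpus = 2      # GPUs available on this worker (placeholder)
task_index = 0    # this worker's task index (placeholder)
opt = tf.train.GradientDescentOptimizer(0.1)

tower_grads = []
for gpu_id in range(num_gpus):
    # Variables are placed on the ps tasks; the tower's ops run on one GPU
    # of this worker.
    device_fn = tf.train.replica_device_setter(
        worker_device="/job:worker/task:%d/gpu:%d" % (task_index, gpu_id),
        cluster=cluster)
    with tf.device(device_fn), tf.variable_scope("model", reuse=(gpu_id > 0)):
        batch = tf.random_normal([32, 10])   # stand-in for an input pipeline
        loss = build_tower(batch)
        tower_grads.append(opt.compute_gradients(loss))

# Average the gradients across this worker's towers, then apply them once,
# so the worker behaves as a single replica driving several GPUs.
averaged = []
for grads_and_vars in zip(*tower_grads):
    grads = [g for g, _ in grads_and_vars if g is not None]
    var = grads_and_vars[0][1]
    averaged.append((tf.reduce_mean(tf.stack(grads), axis=0), var))
train_op = opt.apply_gradients(averaged)
```

Each tower contributes its own gradients, and the worker averages them locally before a single apply_gradients, so only one set of updates per step is sent to the parameter servers.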

@mrry
Contributor

mrry commented Apr 26, 2016

Closing as a duplicate of tensorflow/models#54.

@mrry mrry closed this as completed Apr 26, 2016
fsx950223 pushed a commit to fsx950223/tensorflow that referenced this issue Dec 22, 2023: Develop upstream sync 230515