
multi gpus question #573

Closed
JamesChenChina opened this issue Mar 12, 2019 · 4 comments

Comments

@JamesChenChina

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.130                Driver Version: 384.130                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  Off  | 00000000:2D:00.0 Off |                    0 |
| N/A   65C    P0   234W / 250W |  11290MiB / 16152MiB |     95%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-PCIE...  Off  | 00000000:31:00.0 Off |                    0 |
| N/A   30C    P0    24W / 250W |     10MiB / 16152MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-PCIE...  Off  | 00000000:35:00.0 Off |                    0 |
| N/A   31C    P0    24W / 250W |     10MiB / 16152MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-PCIE...  Off  | 00000000:39:00.0 Off |                    0 |
| N/A   30C    P0    23W / 250W |     10MiB / 16152MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   4  Tesla V100-PCIE...  Off  | 00000000:A9:00.0 Off |                    0 |
| N/A   31C    P0    23W / 250W |     10MiB / 16152MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   5  Tesla V100-PCIE...  Off  | 00000000:AD:00.0 Off |                    0 |
| N/A   30C    P0    24W / 250W |     10MiB / 16152MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   6  Tesla V100-PCIE...  Off  | 00000000:B1:00.0 Off |                    0 |
| N/A   31C    P0    23W / 250W |     10MiB / 16152MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   7  Tesla V100-PCIE...  Off  | 00000000:B5:00.0 Off |                    0 |
| N/A   30C    P0    23W / 250W |     10MiB / 16152MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      8694      C   python3                                    11280MiB |
+-----------------------------------------------------------------------------+

I use python3 to run train.py with the options "--gpu_ids 0,1,2,3,4,5,6,7 --load_size=512 --crop_size=480".
From the nvidia-smi output above, it seems that only GPU 0 is working (GPU-Util 95%); the other seven GPUs stay at 0%.

How can I get all of the GPUs to do the computation?

Many thanks

@ghost

ghost commented Mar 16, 2019

same question

@ctmackay

ctmackay commented Mar 23, 2019

I had the same problem. I did not include any batch_size parameter, but after I added --batch_size 4, both GPUs started working at full speed.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 2070    Off  | 00000000:01:00.0  On |                  N/A |
| 66%   71C    P2   141W / 175W |   6723MiB /  7949MiB |     94%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce RTX 2070    Off  | 00000000:02:00.0 Off |                  N/A |
| 46%   59C    P2   140W / 175W |   5888MiB /  7952MiB |     96%      Default |
+-------------------------------+----------------------+----------------------+
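
For context, a minimal sketch (assumed PyTorch code, not this repository's exact implementation) of why the batch size matters: torch.nn.DataParallel splits the input along the batch dimension, so with a batch of 1 only the first listed GPU ever receives any work. The gpu_ids list and the layer below are hypothetical stand-ins for the nets selected by --gpu_ids.

import torch
import torch.nn as nn

# Hypothetical two-GPU setup; the --gpu_ids option plays the same role here.
gpu_ids = [0, 1]
net = nn.Conv2d(3, 64, kernel_size=3, padding=1).to(f"cuda:{gpu_ids[0]}")
net = nn.DataParallel(net, device_ids=gpu_ids)   # replicate the module on both GPUs

# DataParallel scatters along dim 0: a batch of 4 becomes two chunks of 2,
# one per GPU; with batch_size=1 the second GPU would get nothing to do.
x = torch.randn(4, 3, 480, 480, device=f"cuda:{gpu_ids[0]}")
y = net(x)                                       # forward pass runs on GPU 0 and GPU 1
print(y.shape)                                   # torch.Size([4, 64, 480, 480])

The same reasoning extends to eight GPUs: the batch size must be at least the number of GPUs listed in --gpu_ids, or the extra devices sit idle.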
                                                                               

@JamesChenChina
Author

https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/qa.md
Multi-GPU Training (#327, #292, #137, #35)
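
Per that FAQ entry, the batch has to be scaled with the number of GPUs so DataParallel has something to split. A hedged example based on the original command above (one sample per GPU; adjust to whatever fits in memory, and add the usual dataset/experiment options):

python3 train.py --gpu_ids 0,1,2,3,4,5,6,7 --batch_size 8 --load_size=512 --crop_size=480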

@cena001plus

cena001plus commented Aug 25, 2021

I have the same problem during model inference (test mode). I have tried these methods and the problem is still not solved. What could be the cause?

@JamesChenChina
@ghost
@ctmackay
