Question about the training speed #17

Closed
zhiweige opened this issue Apr 24, 2018 · 6 comments

@zhiweige

Thanks for your great work on inpainting! I am trying to train the inpainting network following the guide in the README. Training runs fine, but it is very slow. The training log is below:

|####################| 100.00%, 109326/0 sec. train epoch 1, iter 10000/10000, loss 0.089837, 0.09 batches/sec.
[2018-04-20 02:47:04 @logger.py:43] Trigger callback: Trigger ModelSaver: Save model to model_logs/20180418202435371480_node1_imagenet_NORMAL_wgan_gp_full_model_image_256/snap-10000.
|####################| 100.00%, 108851/0 sec. train epoch 2, iter 10000/10000, loss 0.030459, 0.09 batches/sec.
[2018-04-21 09:01:15 @logger.py:43] Trigger callback: Trigger ModelSaver: Save model to model_logs/20180418202435371480_node1_imagenet_NORMAL_wgan_gp_full_model_image_256/snap-20000.
|####################| 100.00%, 108510/0 sec. train epoch 3, iter 10000/10000, loss 0.030714, 0.09 batches/sec.
[2018-04-22 15:09:45 @logger.py:43] Trigger callback: Trigger ModelSaver: Save model to model_logs/20180418202435371480_node1_imagenet_NORMAL_wgan_gp_full_model_image_256/snap-30000.
|####################| 100.00%, 108276/0 sec. train epoch 4, iter 10000/10000, loss 0.038624, 0.09 batches/sec.
[2018-04-23 21:14:21 @logger.py:43] Trigger callback: Trigger ModelSaver: Save model to model_logs/20180418202435371480_node1_imagenet_NORMAL_wgan_gp_full_model_image_256/snap-40000.
|############--------| 61.50%, 66691/41605 sec. train epoch 5, iter 6150/10000, loss 0.038147, 0.09 batches/sec.

I am using a single K80 to train the network. How can I speed up training? Thanks a lot!
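For context, the logged throughput is consistent with the per-epoch wall time in the log; a quick check of the arithmetic:

```python
# Sanity check of the log above: at 0.09 batches/sec, one 10000-iteration
# epoch should take roughly 10000 / 0.09 seconds.
iters_per_epoch = 10000          # from the log: "iter 10000/10000"
batches_per_sec = 0.09           # from the log
epoch_seconds = iters_per_epoch / batches_per_sec
print(f"{epoch_seconds:.0f} s per epoch (~{epoch_seconds / 3600:.1f} h)")
# -> 111111 s per epoch (~30.9 h), close to the ~109000 s the log reports
```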
@JiahuiYu
Owner

I have trained with a 1080 Ti and a P100; the speed is around 0.4 batches/sec. For your case, several options are available (a sketch of both follows below):

  1. Use more GPUs for parallel training; a simple modification is required. Please refer to Questions about multi-gpu training #14.
  2. Use lower-resolution images, which significantly speeds up training. E.g., change IMG_SHAPES: [256, 256, 3] to [64, 64, 3] in inpaint.yml.
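A minimal sketch of both changes, assuming train.py builds a neuralgym trainer (the class name MultiGPUTrainer appears later in this thread; the keyword arguments below are assumptions, not the exact API; see #14 for the real modification):

```python
import neuralgym as ng

# Option 2 is a one-line config edit: in inpaint.yml set
#     IMG_SHAPES: [64, 64, 3]    # was [256, 256, 3]
config = ng.Config('inpaint.yml')

# Option 1: replace the single-GPU Trainer with the multi-GPU one.
# The arguments below are placeholders for whatever the original
# Trainer call passes (optimizer, graph definition, ...).
trainer = ng.train.MultiGPUTrainer(
    num_gpus=4,                    # assumed parameter name
    max_iters=config.MAX_ITERS,    # assumed config key
)
trainer.train()
```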

@zhiweige
Author

Okay, thanks for your suggestions. I will try them.

@tabsun

tabsun commented Jul 23, 2019

I have changed Trainer to MultiGPUTrainer and used 4 × 1080 Ti for training, but training seems slower than on a single GPU :(
|----------------------------| 0.10%, 4953/4947597 sec. train epoch 1, iter 10/10000, loss 1.013579, 0.00 batches/sec.
|----------------------------| 0.20%, 10009/5046506 sec. train epoch 1, iter 20/10000, loss 0.890207, 0.00 batches/sec.
|----------------------------| 0.30%, 15481/5455784 sec. train epoch 1, iter 30/10000, loss 0.796239, 0.00 batches/sec.
|----------------------------| 0.40%, 20848/5345027 sec. train epoch 1, iter 40/10000, loss 0.757341, 0.00 batches/sec.
|----------------------------| 0.50%, 26577/5700523 sec. train epoch 1, iter 50/10000, loss 0.715168, 0.00 batches/sec.
|----------------------------| 0.60%, 32215/5603806 sec. train epoch 1, iter 60/10000, loss 0.686893, 0.00 batches/sec.
|----------------------------| 0.70%, 37648/5395326 sec. train epoch 1, iter 70/10000, loss 0.657164, 0.00 batches/sec.
|----------------------------| 0.80%, 43417/5722711 sec. train epoch 1, iter 80/10000, loss 0.638888, 0.00 batches/sec.

Is there anything I have missed? When using a single GPU, it runs a little faster, at 0.01 batches/sec.
Thank you!

@JiahuiYu
Owner

@tabsun Hi, thanks for your interest in our work.

There must be a problem somewhere; here are two likely causes:

  1. Your disk is very slow, which blocks the data-loading part of training.
  2. Your processes may not all be running on different GPUs. Check the GPU utilization to make sure every GPU is actually busy.

@tabsun

tabsun commented Jul 25, 2019

@JiahuiYu Thanks for your advice. I have tried moving my training data to the same disk as my code, but the speed did not change. What do you mean by GPU utilization? I have checked that memory is allocated on all four GPUs; is that enough? Maybe I should dig into the training code to find what is blocking the process.

@JiahuiYu
Owner

JiahuiYu commented Aug 7, 2019

@tabsun You can view your GPU utilization with nvidia-smi. In most cases, GPU utilization should be above 90%.
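For example, running nvidia-smi's query mode in a second terminal (or from Python, as below) while training is underway shows whether the GPUs are actually busy; utilization stuck near 0% while memory is allocated usually points to a stalled input pipeline rather than the GPUs themselves:

```python
import subprocess

# Poll per-GPU utilization and memory with standard nvidia-smi query flags.
out = subprocess.check_output([
    "nvidia-smi",
    "--query-gpu=index,utilization.gpu,memory.used,memory.total",
    "--format=csv",
])
print(out.decode())
```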
