GPU and training time required by this repo #21

Closed
chenwydj opened this issue Jun 16, 2020 · 2 comments

@chenwydj

Dear authors,

Thank you very much for this dedicated repo! It is extremely helpful to the WSOL community!

Some questions:

  1. Do all jobs covered by this repo require only one GPU for training and evaluation? Is it helpful or necessary to use multiple GPUs?
  2. What kind of GPU did you use for these jobs, and how much memory did it consume?
  3. How long does training take on each dataset?

Thank you again!

@junsukchoe
Collaborator

Thanks for your interest in our work!

  1. Yes, we use only one GPU per session. With multiple GPUs you can train the models more quickly, but you will probably need to tune the hyperparameters accordingly (see the multi-GPU sketch after this list).

  2. We used an NVIDIA P40 GPU (24 GB VRAM). The memory requirement depends on the dataset, batch size, and method. For example, on the OpenImages dataset with a batch size of 32, CAM requires about 4.5 GB of memory, while ACoL and SPG need about 6.5 GB (see the memory-measurement sketch after this list).

  3. Training time depends heavily on the hardware environment. In our environment, CUB experiments take 2-3 hours, OpenImages experiments take 6-8 hours, ImageNet (proxy) experiments take 6-8 hours, and ImageNet (full) experiments take 36-48 hours.
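For point 1, here is a minimal sketch of how multi-GPU training could look in PyTorch (the repo is PyTorch-based). Wrapping the model in `torch.nn.DataParallel` and scaling the learning rate linearly with the batch size are my assumptions, a common heuristic rather than the authors' recipe; the model and learning rate are placeholders.

```python
# Hedged sketch, not the repo's code: one way to use multiple GPUs.
# Assumes linear learning-rate scaling with batch size (a common heuristic,
# not something the authors prescribe in this thread).
import torch
import torch.nn as nn
import torchvision.models as models

base_batch_size = 32   # single-GPU batch size mentioned in the thread
base_lr = 0.001        # hypothetical single-GPU learning rate

num_gpus = torch.cuda.device_count()
model = models.resnet50(num_classes=200)  # e.g., CUB has 200 classes
if num_gpus > 1:
    # DataParallel splits each batch across GPUs, so you can afford a
    # larger batch; hyperparameters then usually need retuning.
    model = nn.DataParallel(model)
model = model.cuda()

batch_size = base_batch_size * max(num_gpus, 1)
lr = base_lr * (batch_size / base_batch_size)  # linear scaling heuristic
optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
```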
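For point 2, this is one way to estimate a method's peak GPU memory; it is not the repo's own tooling. A plain ResNet-50 stands in for a CAM-style model (CAM uses a standard classification backbone), and the class count and input size here are illustrative assumptions.

```python
# Hedged sketch: measure peak allocator memory for one forward/backward pass.
import torch
import torch.nn.functional as F
import torchvision.models as models

torch.cuda.reset_peak_memory_stats()

model = models.resnet50(num_classes=100).cuda()  # illustrative class count
images = torch.randn(32, 3, 224, 224, device="cuda")  # batch size 32, as in the thread
labels = torch.randint(0, 100, (32,), device="cuda")

loss = F.cross_entropy(model(images), labels)
loss.backward()

# Peak allocator usage; the CUDA context adds a few hundred MB on top,
# so treat this as a lower bound on what nvidia-smi reports.
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak GPU memory: {peak_gb:.2f} GB")
```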

I hope this helps you.

@chenwydj
Author

Thank you!
