This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

Single GPU Training #30

Closed
nixczhou opened this issue Jan 9, 2022 · 3 comments



nixczhou commented Jan 9, 2022

The README states:

Only multi-gpu, DistributedDataParallel training is supported; single-gpu or DataParallel training is not supported.

But looking at the code, I think we can choose not to use DistributedDataParallel. Will training on a single GPU affect performance in any way?

@endernewton
Contributor

Single GPU training is possible and doable for debugging. You can just comment out that line (https://github.com/facebookresearch/simsiam/blob/main/main_simsiam.py#L183) so it does not raise an error.

Single GPU affects performance mainly through batch size -- you cannot fit many images onto one single GPU. Otherwise, if you manage to train on a single GPU, let me know!
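For context on why batch size is the main concern: SimSiam's objective is a per-pair negative cosine similarity with no negative examples, so the loss itself does not depend on batch size (unlike contrastive losses). Below is a plain-Python illustration of that loss; it is a sketch only, not the repo's actual implementation (which uses PyTorch's nn.CosineSimilarity with a stop-gradient on the target branch).

```python
import math

def neg_cosine(p, z):
    """Mean negative cosine similarity between a batch of predictions p
    and targets z (each a list of vectors). In SimSiam the target branch
    is wrapped in a stop-gradient; that is implicit here since this is
    plain Python with no autograd."""
    def unit(v):
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]
    sims = [sum(a * b for a, b in zip(unit(pi), unit(zi)))
            for pi, zi in zip(p, z)]
    return -sum(sims) / len(sims)

# Identical views give the minimum loss of -1, whatever the batch size:
batch = [[1.0, 2.0, 3.0], [0.5, -1.0, 2.0]]
print(neg_cosine(batch, batch))  # about -1.0 (up to floating-point rounding)
```

Because each pair contributes independently, shrinking the batch changes optimization dynamics (BatchNorm statistics, learning-rate scaling) rather than the loss definition itself.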

@nixczhou
Author

Just curious: a small batch size like 64 ~ 256 doesn't matter for the SimSiam model, does it?
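(For reference: the SimSiam paper reports that the method works across a range of batch sizes, but the learning rate still has to be rescaled, since the repo's README applies a linear scaling rule, init_lr = base_lr * batch_size / 256. A minimal sketch of that rule, using 0.05 as the base learning rate from the pretraining command:)

```python
def scaled_lr(base_lr, batch_size, base_batch=256):
    # Linear learning-rate scaling rule from the SimSiam README:
    # init_lr = base_lr * batch_size / 256
    return base_lr * batch_size / base_batch

# Smaller batches get a proportionally smaller initial learning rate:
for bs in (64, 128, 256, 512):
    print(bs, scaled_lr(0.05, bs))
```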

@dikapiliao1

Can you train with a single GPU?
