
Why should we set different seed per gpu with DDP? #150

Closed · developer0hye opened this issue Mar 3, 2022 · 2 comments

developer0hye (Contributor) commented Mar 3, 2022

Referenced code: deit/main.py, line 182 (commit 35cd455):

```python
seed = args.seed + utils.get_rank()
```

Hi!

I recently referenced this project while implementing DDP training code, and thanks to it I was able to implement our model and reproduce its performance.

I have one question related to DDP.

Why should we set a different seed per GPU with DDP?

developer0hye changed the title from "Why should we set different seed per gpus with DDP?" to "Why should we set different seed per gpu with DDP?" on Mar 3, 2022
TouvronHugo (Contributor) commented:

Hi @developer0hye,
Thanks for your question.
It ensures that operations that draw random values, such as data augmentations, do not produce exactly the same results across the different GPUs.
Best,
Hugo
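To illustrate the pattern: a minimal sketch of per-rank seeding, built around the `seed = args.seed + utils.get_rank()` line referenced above. The helper name `seed_everything_per_rank` and the fallback to rank 0 outside a DDP run are assumptions for illustration, not code from this repo.

```python
import random

import numpy as np
import torch
import torch.distributed as dist


def seed_everything_per_rank(base_seed: int) -> None:
    """Seed Python, NumPy, and PyTorch RNGs with a per-rank offset.

    Mirrors the `seed = args.seed + utils.get_rank()` pattern in
    deit/main.py: each DDP process gets a distinct seed, so random
    operations such as data augmentation differ across GPUs.
    """
    # Assumption: treat a non-distributed run as rank 0.
    rank = dist.get_rank() if dist.is_initialized() else 0
    seed = base_seed + rank
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
```

The intuition, per the answer above: without the rank offset, every GPU would sample identical augmentation parameters for its shard of the data, which reduces the effective diversity of the augmentations seen during training.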

developer0hye (Contributor, Author) commented:

Hi @TouvronHugo,
I got it! Thanks!
