Option to manually set random seed globally #76
Hi!
Thanks for this awesome package! I'm wondering if there is any option available to fix the random seed so I can reproduce the same results across training runs. Currently I manually set the random seeds for PyTorch and NumPy in train_pytorch.py and dataloader/sampler.py, but the final output embeddings still differ between training attempts. Is there any workaround for this?
Thanks for any help in advance.

Comments
The randomness probably comes from the DGL sampler. You can try this: https://docs.dgl.ai/en/0.4.x/generated/dgl.random.seed.html
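For context, dgl.random.seed only seeds DGL's own RNG; to make a full run repeatable you would also seed Python, NumPy, and PyTorch. A minimal sketch (the seed value and the helper name are illustrative, not part of dgl-ke):

```python
import random

import numpy as np
import torch
import dgl


def set_global_seed(seed: int = 42) -> None:
    """Seed every RNG the training pipeline touches, in one place."""
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy (used by data loading code)
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # PyTorch GPU RNGs (no-op without CUDA)
    dgl.random.seed(seed)             # DGL's RNG, used by its samplers


# Call once at process start, before samplers or models are built.
set_global_seed(42)
```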
Thanks for the reply!
If multithreading or multiprocessing is involved, I think it's impossible to make it fully reproducible. I'm also not sure whether GPU parallel computation introduces non-determinism as well.
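On the GPU side, PyTorch does expose cuDNN switches that trade speed for determinism; they mitigate, but do not eliminate, kernel-level non-determinism (ops built on atomic adds can still vary from run to run). A sketch:

```python
import torch

# Force cuDNN to choose deterministic algorithms, and disable the
# autotuner, which may otherwise pick different kernels on each run.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```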
Thanks for your help!
Good to know. Thanks for showing us how to make the training deterministic. It'll be useful for future users.
Does the second operation matter? And how do you set num_thread and num_proc to 1? Thank you.
@Megavoxel01, how do you set num_threads and num_proc to 1, and set OMP_NUM_THREADS=1?
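For anyone hitting the same question: OMP_NUM_THREADS is an environment variable, so it must take effect before any OpenMP-using library is loaded, e.g. by exporting it in the shell before launching training. A Python-side sketch, assuming you control the entry point (torch.set_num_threads is standard PyTorch):

```python
import os

# Must happen before torch (an OpenMP user) is imported; exporting the
# variable in the shell before launching is the more reliable option.
os.environ["OMP_NUM_THREADS"] = "1"

import torch

# Also pin PyTorch's own intra-op parallelism to a single thread.
torch.set_num_threads(1)
```

On the dgl-ke command line, the equivalent would presumably be passing --num_proc 1 and --num_thread 1 to dglke_train; check which flags your installed version actually accepts.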
@Megavoxel01, can you please give more detailed information (the actual files to modify and the code) on how to get a deterministic model? Or has this been integrated into dgl-ke since then? Thanks in advance.