Some problem about testing #4
Hi, this is because the path is not set up correctly. After extracting the benchmark zip file, you should get a folder named "benchmark". Please set --dir_data in demo.sh to a path pointing to the parent folder of "benchmark".
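The layout check described above can be sketched as follows; the path and helper name here are hypothetical, used only to illustrate that --dir_data must point at the parent of the extracted "benchmark" folder:

```python
# Expected layout, assuming the benchmark zip was extracted
# under some data root (hypothetical example path):
#
#   /path/to/data/
#   └── benchmark/
#       ├── Set5/
#       ├── Set14/
#       └── ...
#
# demo.sh would then pass: --dir_data /path/to/data
import os

def check_dir_data(dir_data):
    """Return True if dir_data points at the parent folder of 'benchmark'."""
    return os.path.isdir(os.path.join(dir_data, "benchmark"))
```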
Thank you. In demo.sh, I deleted '--save_result'; if I keep '--save_result', there are still other bugs. In addition, on line 23 of attention.py you use torch.randn, which generates random numbers every time. Don't the random numbers influence the final result? How do you avoid this problem?
Hi, it should be bug-free even with the --save_results flag. What is your error message? For the second question, the random seed is fixed in main.py to control the randomness, so results should be identical across multiple runs. In terms of LSH, the multi-round hashing makes results highly robust. However, the results can still differ very slightly (±0.01) according to the PyTorch docs: "Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds."
Hi, I just verified that controlling the random seed with 'seed=1' does limit the randomness, and the output of torch.randn is the same.
At each iteration, the random number is different. You can verify this by running two consecutive torch.randn calls: the results differ. But if you restart the whole program with the same seed, the sequence of random numbers is fixed. That's how it works.
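The behavior described above can be sketched with Python's stdlib random module; the same principle applies to torch.manual_seed / torch.randn (not used here only so the snippet stays dependency-free):

```python
import random

random.seed(1)
a = random.random()
b = random.random()           # a consecutive call draws a new value
assert a != b

random.seed(1)                # restart with the same seed
assert random.random() == a   # the sequence replays from the start
assert random.random() == b
```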
Thank you, I will try that.
Hi,
I changed all parameters back to the defaults. When running main.py, I get the following problem:
Making model...
Loading model from ../experiment/test/model/model_x2.pt
Total params: 44.16M
Evaluation:
0it [00:00, ?it/s] [Set5 x4] PSNR: nan (Best: nan @epoch 1)
0it [00:00, ?it/s] [Set14 x4] PSNR: nan (Best: nan @epoch 1)
0it [00:00, ?it/s] [B100 x4] PSNR: nan (Best: nan @epoch 1)
0it [00:00, ?it/s] [Urban100 x4] PSNR: nan (Best: nan @epoch 1)
How to solve this problem?
Thank you.