Some problem about testing #4

Closed
C-water opened this issue Aug 5, 2021 · 6 comments

C-water commented Aug 5, 2021

Hi
I set all parameters to their default values. When running main.py, I get the following problem:

Making model...
Loading model from ../experiment/test/model/model_x2.pt
Total params: 44.16M
Evaluation:
0it [00:00, ?it/s] [Set5 x4] PSNR: nan (Best: nan @epoch 1)
0it [00:00, ?it/s] [Set14 x4] PSNR: nan (Best: nan @epoch 1)
0it [00:00, ?it/s] [B100 x4] PSNR: nan (Best: nan @epoch 1)
0it [00:00, ?it/s] [Urban100 x4] PSNR: nan (Best: nan @epoch 1)

How can I solve this problem?
Thank you.

HarukiYqM (Owner) commented

Hi, this is because the path is not set up correctly. After extracting the benchmark zip file, you should get a folder named "benchmark". Please set --dir_data in demo.sh to the path of the parent folder of "benchmark".
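If it helps, here is a minimal sanity check you can run (a sketch only; the example path and the "Set5" subfolder below are assumptions based on a typical benchmark layout, so adjust them to your setup):

    import os

    # Hypothetical example path; use whatever you pass as --dir_data in demo.sh
    dir_data = '/home/user/datasets'

    # After unzipping, the "benchmark" folder must sit directly under dir_data
    benchmark_root = os.path.join(dir_data, 'benchmark')
    print(os.path.isdir(benchmark_root))                        # should print True
    print(os.path.isdir(os.path.join(benchmark_root, 'Set5')))  # should print True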

C-water (Author) commented Aug 5, 2021

Thank you. In demo.sh, I deleted '--save_result'. If I keep '--save_result', there are still other bugs.

In addition, on line 23 of attention.py you use the torch.randn function, which generates new random numbers every time. Don't these random numbers influence the final result? How do you avoid this problem?

HarukiYqM (Owner) commented Aug 5, 2021

Hi, it should be bug-free even with the --save_results flag. What is your error message?

For the second question, the random seed is fixed in main.py to control the randomness, so results should be identical across multiple runs.

As for LSH, the multi-round hashing makes the results highly robust.

However, the results can still differ very slightly (±0.01), as noted in the PyTorch documentation: "Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds."
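For reference, a minimal sketch of what fixing the seed typically looks like (this is not a copy of main.py; the exact calls there may differ):

    import random

    import numpy as np
    import torch

    def set_seed(seed=1):
        # Fix the relevant RNGs so that a fresh run of the program draws
        # the same sequence of random numbers as the previous run.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)

    set_seed(1)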

C-water (Author) commented Aug 5, 2021

Hi, I just verified that fixing the random seed ('seed=1') does control the randomness, and the output of torch.randn is the same across runs.
However, will 'seed=1' also be fixed during training? If the output of torch.randn(shape) were the same throughout training, wouldn't the rotation angles in LSH be fixed?

HarukiYqM (Owner) commented Aug 5, 2021

At each iteration the random numbers are different. You can verify this by running two consecutive torch.randn calls: the results differ.

If you start the whole program again with the same seed, the sequence of random numbers is the same. That is how it works.
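A quick way to see both points at once (a standalone sketch, not code from this repo):

    import torch

    torch.manual_seed(1)
    a = torch.randn(2)
    b = torch.randn(2)
    print(torch.equal(a, b))        # False: consecutive draws within one run differ

    torch.manual_seed(1)            # simulate restarting the program with the same seed
    a2 = torch.randn(2)
    b2 = torch.randn(2)
    print(torch.equal(a, a2), torch.equal(b, b2))   # True True: the sequence repeats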

C-water (Author) commented Aug 5, 2021

Thank you, I will try that.
