
The running time of model #4

Closed
minghongli233 opened this issue Oct 12, 2019 · 5 comments

@minghongli233

In your paper, Figure 9 shows that IDN is slower than CARN and IMDN on Set5 for x4 SR.
However, when I use the official code for IDN, CARN, and IMDN to evaluate the average inference time on the Set5 x4 dataset, the average running times of these methods are 0.007s, 0.028s, and 0.029s, respectively.
I find that IDN is faster than CARN and IMDN, and that the running time of IMDN is close to CARN's. I am confused by this result. Could you tell me the reason?
My operating environment is as follows:

GPU: GTX 1080Ti
OS: Ubuntu 18.04 LTS
CUDA: 10.0
CUDNN: 7.4
Python version: 3.6
pytorch version: 1.0

Thank you!

@Zheng222
Owner

@minghongli233 Hello, in Figure 9 we evaluated the testing time using the official code of each method, as stated in the caption of Figure 9. Note that IDN uses the Caffe package. Did you use a warm-up when measuring inference time? If you use time.time() in Python to get the testing time, this step is very important.

@Zheng222
Owner

Zheng222 commented Oct 13, 2019

@minghongli233 The inference times of EDSR-baseline, CARN, and IMDN were measured with time.time() after a warm-up. I recommend testing them with test_IMDN.py, which uses a more accurate timing method. Please check the pressure test in README.md.

@minghongli233
Author

@Zheng222 Thank you for your reply. I evaluated the running time of IDN using the Caffe package. If the warm-up means that the first test run takes extra time, then I have already done what you suggested; the results are close to the previous ones.
So I suspect that the operating environment may affect the speed. Could you tell me the environment you used for your tests?

@Zheng222
Owner

@minghongli233

OS: Ubuntu 16.04
Pytorch: 1.0
GPU: GTX 1080Ti
CUDA: 9.0
CUDNN: 7.4

You can print the testing time of each image, and you will find that the first image takes much more time than the others. The warm-up simply adds one extra image to the test dataset and calculates the mean running time excluding that first value.
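The warm-up described above can be sketched in plain Python (a minimal sketch: `time_with_warmup` and the lambda workload below are placeholders I introduced for illustration, not code from the repository; the real tests run an SR network on Set5 images instead of the dummy function):

```python
import time

def time_with_warmup(run_inference, images):
    """Time inference per image, then drop the first (warm-up) measurement.

    run_inference: callable taking one image (stands in for the model).
    images: the test set; the first image's timing is discarded, so the
    mean is taken over the remaining len(images) - 1 measurements.
    """
    durations = []
    for img in images:
        start = time.perf_counter()
        run_inference(img)
        durations.append(time.perf_counter() - start)
    # Discard the first value: it includes one-time setup cost
    # (e.g. CUDA context creation and cuDNN autotuning on a real GPU).
    warm = durations[1:]
    return sum(warm) / len(warm)

# Dummy CPU workload standing in for the SR network:
avg = time_with_warmup(lambda img: sum(i * i for i in range(10000)),
                       list(range(6)))
```

Note that on a GPU one would also call torch.cuda.synchronize() before each clock read, since CUDA kernels launch asynchronously and time.time()/time.perf_counter() would otherwise only measure the kernel launch, not its execution.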

@minghongli233
Author

@Zheng222
I tried another operating environment as follows:
OS: Ubuntu 18.04
Pytorch: 0.4
GPU: GTX 1080Ti
CUDA: 8.0
CUDNN: 5.1
The average running time of CARN is 0.006s. Now the average inference time is close to the one reported in your paper! The operating environment really does affect the measured speed.
Thank you for your answer!
