The running time of model #4
Comments
@minghongli233 Hello. In Figure 9, we evaluated the testing time using the official code, as stated in the caption of Figure 9. Note that IDN uses the Caffe package. Did you use a warm-up when measuring the inference time? If you use time.time() in Python to get the testing time, this step is very important.
@minghongli233 The inference times of EDSR-baseline, CARN, and IMDN were measured with time.time() with warm-up. I recommend that you test them using the
@Zheng222 Thank you for your reply. I evaluated the running time of IDN using the Caffe package. If the warm-up means that the first test spends extra time, then I have already done what you said: the results are close to the previous ones.
You can print the testing time of each image, and you will find that the first image takes much more time than the others. The warm-up simply adds an extra image to the test dataset and computes the mean running time excluding that first value.
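The warm-up procedure described above can be sketched as follows. This is a minimal illustration, not the repository's actual benchmark script: `timed_average` and the dummy workload are hypothetical names, and with a real PyTorch model on GPU you would additionally call `torch.cuda.synchronize()` before reading each timestamp, since CUDA kernel launches are asynchronous.

```python
import time

def timed_average(run, inputs, warmup=1):
    """Time run(x) for each input and return the mean, discarding the
    first `warmup` measurements. The first run typically pays one-off
    costs (CUDA context creation, cuDNN autotuning, lazy allocations),
    so excluding it gives a fairer per-image estimate.
    """
    times = []
    for x in inputs:
        start = time.time()
        run(x)
        times.append(time.time() - start)
    timed = times[warmup:]
    return sum(timed) / len(timed)
```

For example, a workload whose first call is artificially slow will still report a small average, because the first measurement is dropped.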
@Zheng222
In your paper, Figure 9 shows that IDN is slower than CARN and IMDN on Set5 for x4 SR.
However, when I use the official code of IDN, CARN, and IMDN to evaluate the average inference time on the Set5 x4 dataset, the average running times of these methods are 0.007s, 0.028s, and 0.029s, respectively.
I find that IDN is faster than CARN and IMDN, and that the running time of IMDN is close to CARN's. I am confused about this result. Could you tell me the reasons?
My operating environment is as follows:
GPU: GTX 1080Ti
OS: Ubuntu 18.04 LTS
CUDA: 10.0
cuDNN: 7.4
Python version: 3.6
PyTorch version: 1.0
Thank you!