
Problem about the running time. #20

Closed
tjulyz opened this issue Jan 3, 2018 · 2 comments

Comments

@tjulyz

tjulyz commented Jan 3, 2018

Hi!
Thanks for your kind sharing! There is a problem when I run your code for CIFAR-10 classification: when I change the kernel size of the convolutional layers in each block from 3x3 to 1x1, the running time per epoch increases from about 3.05s to about 4.11s on a Titan X. However, a 3x3 convolution should always consume much more computational resources than a 1x1 convolution, so I am confused. Can you help analyze whether the problem is in your code or in the TensorFlow optimization?
Thanks again!

@ikhlestov
Owner

Hi!
It's really strange behaviour. I've examined the code and I haven't found any mistakes. Of course, it can depend heavily on the CUDA convolution and parallelization implementation itself. You could print all the shapes that exist in the network with 3x3 kernels and with 1x1 kernels, then create dummy variables with TensorFlow and, with the help of Python's timeit module, measure the execution time of each component. Maybe this will point you in the right direction.
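
For reference, a minimal sketch of that kind of measurement (not code from this repository), assuming TensorFlow 1.x; the batch size, spatial size, and channel counts below are hypothetical placeholders, so substitute the shapes printed from the actual network:

```python
# Minimal sketch: time a single 3x3 vs 1x1 convolution on dummy data with
# Python's timeit, assuming TensorFlow 1.x. The shapes (batch 64, 32x32
# feature maps, 48 in / 12 out channels) are hypothetical placeholders.
import timeit
import tensorflow as tf

def time_conv(kernel_size, in_channels=48, out_channels=12,
              batch=64, height=32, width=32, runs=100):
    graph = tf.Graph()
    with graph.as_default():
        inputs = tf.random_normal([batch, height, width, in_channels])
        kernel = tf.get_variable(
            'kernel_%dx%d' % (kernel_size, kernel_size),
            [kernel_size, kernel_size, in_channels, out_channels])
        conv = tf.nn.conv2d(inputs, kernel, strides=[1, 1, 1, 1],
                            padding='SAME')
        with tf.Session(graph=graph) as sess:
            sess.run(tf.global_variables_initializer())
            sess.run(conv)  # warm-up run, excludes one-time CUDA setup
            return timeit.timeit(lambda: sess.run(conv), number=runs)

for k in (1, 3):
    elapsed = time_conv(k)
    print('%dx%d conv: %.4f s for 100 runs' % (k, k, elapsed))
```

Comparing the totals per layer shape should show whether the slowdown comes from the convolution ops themselves (e.g. the CUDA/cuDNN implementation picking different algorithms) or from somewhere else in the pipeline.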

@tjulyz tjulyz changed the title from "Problem about the runting time." to "Problem about the running time." Jan 8, 2018
@tjulyz
Author

tjulyz commented Jan 8, 2018

Thanks for your advice. I will try it again.

@tjulyz tjulyz closed this as completed Jan 8, 2018