
Slow sequence generation compared to TF #340

Open
wichtounet opened this issue Sep 14, 2016 · 5 comments
Comments

@wichtounet

Hi,

I tried porting an RNN network (2 layers of LSTM) from TensorFlow to TFLearn to reduce the complexity of the code. Everything works fine, but I have noticed that sequence generation (via generate()) is much slower than it was with plain TensorFlow (m.sample()).

This is on a CPU-only machine. For instance, generating a sequence of length 100 (with a 128 LSTM -> 128 LSTM network):

TensorFlow: 29.690ms
TFLearn: 99.668ms

(3.35 times slower)

and a sequence of length 1000:

TensorFlow: 106.929ms
TFLearn: 1005.741ms

(9.40 times slower)

It's almost one order of magnitude slower for a sequence of length 1000.
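The slowdown factors can be reproduced directly from the raw timings (a quick sanity check using only the numbers reported in this comment):

```python
# Sanity-check the slowdown factors from the timings quoted above.
pairs = [(29.690, 99.668), (106.929, 1005.741)]  # (TensorFlow ms, TFLearn ms)
for tf_ms, tflearn_ms in pairs:
    print(f"{tflearn_ms / tf_ms:.2f}x slower")  # prints 3.36x and 9.41x
```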

Maybe the two functions don't do the same thing?
Is there something I can configure to speed up generation?
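For reference, here is roughly how I time both sides; a minimal harness like this (time_call is a hypothetical helper, not part of either library) keeps the comparison fair by doing a warm-up pass and taking the best of several runs:

```python
import time

def time_call(fn, *args, warmup=1, repeats=5, **kwargs):
    """Time fn(*args, **kwargs): warm up first, then return the best run in ms."""
    for _ in range(warmup):            # warm-up: exclude one-time setup cost
        fn(*args, **kwargs)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args, **kwargs)
        best = min(best, time.perf_counter() - start)
    return best * 1000.0               # milliseconds

# Hypothetical usage (m_tf / m_tfl being the TensorFlow and TFLearn models):
# print(time_call(lambda: m_tf.sample(1000)))
# print(time_call(lambda: m_tfl.generate(1000)))
```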

Thanks

@aymericdamien
Member

How is your generation done in TensorFlow? You can find the TFLearn generation code here:
https://github.com/tflearn/tflearn/blob/master/tflearn/models/generator.py#L182
It may have an issue that slows down the process.

@wichtounet
Author

@aymericdamien I'll try to compare the Python part of the code in both cases and see if I can find the difference.

@aymericdamien
Member

Cool! Let me know.

@wichtounet
Author

I've looked deeper into the issue, and the problem is not related to sequence generation itself: all the time is spent in Evaluator.predict. Is it possible that prediction is slower in TFLearn than in plain TF?

I've tried comparing the code in both cases, but I'm really not familiar enough with Python, NumPy and TF to spot the differences. The prediction code was quite short in my TF project, but there is a lot of code in Evaluator.

I guess that in the end it all comes down to sess.run? Maybe there is a difference in the graph that is built.
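One way to confirm where the time actually goes, without reading all of Evaluator, would be to profile a single call with Python's built-in cProfile; a sketch (the m.generate call in the usage comment is hypothetical):

```python
import cProfile
import io
import pstats

def profile_call(fn, top=10):
    """Profile fn() and return a report of the `top` functions by cumulative time."""
    pr = cProfile.Profile()
    pr.enable()
    fn()
    pr.disable()
    buf = io.StringIO()
    pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(top)
    return buf.getvalue()

# Hypothetical usage: profile one generation step of the TFLearn model `m`
# print(profile_call(lambda: m.generate(100)))
```

If most of the cumulative time shows up under session run calls, the overhead is in the graph itself; if it shows up in Python-side code in Evaluator, it is per-call setup overhead.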

@aymericdamien
Member

I see. For now, maybe you can use a custom inference function if you want a speed-up. I will try to investigate the TFLearn predict case.
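As a sketch of what such a custom inference function could look like (CachedPredictor and build_fn are hypothetical names, not TFLearn APIs), the idea is to pay any graph/session setup cost once and reuse the same callable for every prediction:

```python
class CachedPredictor:
    """Build the (potentially expensive) inference callable once, then reuse it."""

    def __init__(self, build_fn):
        # build_fn stands in for whatever constructs the graph/session and
        # returns a function mapping inputs to predictions.
        self._build_fn = build_fn
        self._run = None

    def predict(self, x):
        if self._run is None:          # only the first call pays the setup cost
            self._run = self._build_fn()
        return self._run(x)
```

With TFLearn, build_fn would capture the model's session and input/output tensors so that each predict() amounts to a single sess.run with no per-call setup.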
