Word2vec on GPU slower than CPU #13048
Comments
A few thoughts: I don't see anything in the word2vec code that suggests that it's been optimized to work on a GPU. This question is better asked on StackOverflow since it is not a bug or feature request. There is also a larger community that reads questions there. Thanks!
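One plausible reason the GPU loses here (not stated in the thread, so treat this as an illustrative sketch): each skip-gram negative-sampling step touches only a handful of embedding rows, so the per-step arithmetic is tiny and is dominated by kernel-launch and transfer overhead on a GPU, while a CPU with fast gather/scatter handles it cheaply. A minimal NumPy sketch of one such update, with hypothetical vocabulary and dimension sizes, shows how little work a single step does:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, k = 10_000, 128, 5  # hypothetical sizes for illustration
W_in = rng.normal(scale=0.1, size=(vocab, dim))   # input (center) embeddings
W_out = rng.normal(scale=0.1, size=(vocab, dim))  # output (context) embeddings

def sgns_step(center, context, negatives, lr=0.025):
    """One skip-gram negative-sampling SGD step.

    Only k + 2 rows of the two embedding matrices are read or written,
    so a GPU kernel launched per step has almost nothing to compute.
    """
    v = W_in[center]
    idx = np.concatenate(([context], negatives))
    u = W_out[idx]                          # gather: (k+1, dim)
    labels = np.zeros(len(idx))
    labels[0] = 1.0                         # positive pair first
    scores = 1.0 / (1.0 + np.exp(-u @ v))   # sigmoid(u . v)
    g = scores - labels                     # per-row gradient scale, (k+1,)
    W_in[center] -= lr * (g @ u)            # scatter into a single row
    W_out[idx] -= lr * np.outer(g, v)       # scatter into k+1 rows

sgns_step(center=3, context=7, negatives=rng.integers(0, vocab, size=k))
```

Millions of such sparse, sequential updates are exactly the access pattern that batched dense GPU kernels are not built for, which is consistent with the slowdowns reported in this thread.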
@cy89 do you know of a more optimized implementation of word2vec in TensorFlow (less tutorial-ish)?
Same problem: word2vec on CPU is 10 times faster than word2vec on GPU. Yes, it is very surprising, but that is what I got. Both CPU and GPU are slow; I wasted a lot of time studying and modifying the code.
Hello, I have the same problem on TensorFlow 1.8 running word2vec_optimized.py on a system with Volta GPUs. Regards,
System information
Describe the problem
I have been working on benchmarking commonly used frameworks/libraries for unsupervised learning of word embeddings (word2vec). I am currently comparing TensorFlow (CPU/GPU), gensim, deeplearning4j, and the original C code on standard metrics such as training time, peak memory usage, and quality of the learned vectors. Link to my GitHub repo (still a work in progress). I ran the benchmark on the text8 corpus (I plan to run it on a much larger corpus later for the full picture), which gave me strange results.
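The issue does not show how training time and peak memory were measured; a minimal sketch of one way to do it from a Python entry point, using only the standard library, is below. The `train_fn` callable is a hypothetical stand-in for whichever framework's training call is being benchmarked, and note that `tracemalloc` only sees Python-heap allocations, not native buffers allocated by TensorFlow or the C code:

```python
import time
import tracemalloc

def benchmark(train_fn, *args, **kwargs):
    """Run a training callable once, returning its result,
    wall-clock seconds, and peak Python-heap bytes."""
    tracemalloc.start()
    start = time.perf_counter()
    result = train_fn(*args, **kwargs)   # e.g. a word2vec training call
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"result": result, "seconds": elapsed, "peak_bytes": peak}

# Stand-in workload in place of a real word2vec training run:
stats = benchmark(lambda: sum(range(1_000_000)))
print(stats["seconds"], stats["peak_bytes"])
```

For native peak memory (the number that matters for TensorFlow and the C implementation), an external measurement such as `/usr/bin/time -v` or reading the process RSS would be needed instead.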
Is this behavior expected? Would appreciate any inputs.
Source code / logs
Link to tensorflow code
Link to results of sample benchmark on text8 corpus