usemat acceleration in current version? #70
In SConstruct, comment out this line:
In earlier versions of Eigen, the matrix code was more optimized than the tensor code, but that isn't true anymore, so the matrix version is no longer needed.
Tom,
The speed of matrix vs tensor depends crucially on the version of Eigen; the version that you have installed may be too old. Can you install Eigen from github source and then do the benchmarks again? (Most of the work of optimizing Eigen these days goes into the tensor code, since that's what other deep learning frameworks use.)
I forgot to mention that I tested this with Eigen 3.3 rc1.
That sounds odd. I benchmarked both versions against each other before switching. Recent versions of Eigen really shouldn't have big differences in linear algebra performance between tensor and matrix; after all, the tensor code drives much of Google's TensorFlow. Is it possible your matrix code is running multicore? Checking with htop while it's running should give you some idea.
I checked with htop. Only one CPU runs at 100%; the others are idle (<2%) most of the time.
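To make the single-core check above reproducible, one way to rule out hidden multithreading is to pin the run to one core and cap OpenMP threads explicitly, then compare timings. This is only a sketch; the `clstmocrtrain` binary and `uw3-500.h5` dataset name are taken from this thread, so substitute whatever training command you actually benchmark:

```shell
# Force Eigen/OpenMP to a single thread and pin the process to CPU 0,
# so any speed difference can't come from accidental multicore use.
OMP_NUM_THREADS=1 taskset -c 0 ./clstmocrtrain uw3-500.h5

# In another terminal, watch per-core load while it runs:
htop
```

If the pinned single-thread run is no slower than the unpinned one, multicore use can be excluded as the explanation.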
Hmmm... I'm not sure. At this point, all I can say is that when I made the switch, the two performed pretty much identically to each other, and basically gave the same performance as a good BLAS implementation. Whatever the cause is, it ought to be fixable without switching back to the Eigen matrix backend. I'll leave the bug open and see whether I can reproduce it and find a quick fix.
Make sure you are testing CPU-only, without GPU involvement.
Weird: with the latest code from master I get step times of ~0.3 s with the uw3-500 dataset, while with the a588c8 version they are between 6.5 s and 10.5 s, i.e. the matrix code is significantly slower. Both were compiled with Eigen 3.3 beta2 on Debian unstable.
Edit: with the latest Eigen checkout, the performance difference still remains.
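For anyone trying to reproduce this comparison, the rough procedure would look like the sketch below. The commit id a588c8 is the one mentioned in this thread; the training command is a placeholder for whatever benchmark you run:

```shell
# Benchmark the tensor backend (current master)...
git checkout master && scons -c && scons
time ./clstmocrtrain uw3-500.h5   # note the reported per-step times

# ...then the old matrix backend at the commit mentioned above.
git checkout a588c8 && scons -c && scons
time ./clstmocrtrain uw3-500.h5
```

Doing a clean build (`scons -c`) between checkouts matters here, since stale object files from the other backend would invalidate the comparison.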
I get ~0.8s with the uw3-500 dataset.
I get ~0.3s...
The slowness of the matrix code was due to me using
So it seems that to get comparable performance, the compiler flags have to be tuned for the tensor version.
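The flag that actually mattered is cut off in the comment above, so the values below are purely illustrative. A hedged sketch of rebuilding with explicit optimization flags; whether scons picks up `CXXFLAGS` from the environment depends on the project's SConstruct, so check there for the real variable name:

```shell
# Clean, then rebuild with explicit optimization flags.
# -O3 and -march=native are illustrative only -- the flag that was
# actually responsible is not preserved in this thread.
scons -c
CXXFLAGS="-O3 -march=native" scons
```

Eigen's tensor code leans heavily on inlining and vectorization, so it is plausible that it degrades more than the matrix code when built without optimization.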
What's the CPU in each case?
Here's an excerpt of my
Note that there's a turbo mode where the CPU clock goes up to 3.1 GHz, which is what was active when I benchmarked clstm. Compiler version:
I can confirm that installing the latest code with
My CPU:
gcc version 5.4.0
This issue has not been fixed yet. Every user of the master branch will suffer from the slowness unless they build with
A fix in SConstruct is needed.
In the previous version, one could accelerate computation by passing "usemat=1" during installation, which uses matrices instead of tensors. But in the current version, the matrix alternative is no longer supported. Does this mean the tensor version is as fast now? Why was the matrix code removed?
Best