usemat acceleration in current version? #70

Open
crazylyf opened this issue Mar 18, 2016 · 17 comments

@crazylyf commented Mar 18, 2016

In the previous version, one could accelerate computation by building with "usemat=1", which uses matrices instead of tensors. But in the current version, the matrix alternative is not supported. Does this mean the tensor version is just as fast now? Why was the matrix code removed?

Best

@amitdo (Contributor) commented Oct 11, 2016

In SConstruct, comment out this line:
env.Append(CPPDEFINES={'CLSTM_ALL_TENSOR': '1'})
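
For context, this is roughly the kind of thing such a define gates; an illustrative sketch, not clstm's actual source:

```cpp
// Illustrative sketch only, not clstm's actual code: a define like
// CLSTM_ALL_TENSOR typically selects which Eigen backend the math types use.
#ifdef CLSTM_ALL_TENSOR
#include <unsupported/Eigen/CXX11/Tensor>
using Mat = Eigen::Tensor<float, 2>;  // tensor backend
#else
#include <Eigen/Dense>
using Mat = Eigen::MatrixXf;          // classic matrix backend
#endif
```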

@tmbdev (Owner) commented Oct 11, 2016

In earlier versions of Eigen, the matrix code was better optimized than the tensor code, but that is no longer true, so the matrix version isn't needed anymore.
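
For anyone who wants to compare the two directly, the same product can be written through either backend; a minimal sketch (assuming Eigen >= 3.3 with the unsupported tensor module):

```cpp
#include <Eigen/Dense>
#include <unsupported/Eigen/CXX11/Tensor>

int main() {
  const int n = 512;

  // Matrix backend: a plain Eigen::MatrixXf product.
  Eigen::MatrixXf a = Eigen::MatrixXf::Random(n, n);
  Eigen::MatrixXf b = Eigen::MatrixXf::Random(n, n);
  Eigen::MatrixXf c = a * b;

  // Tensor backend: the same product written as a contraction of
  // a's dimension 1 with b's dimension 0.
  Eigen::Tensor<float, 2> ta(n, n), tb(n, n);
  ta.setRandom();
  tb.setRandom();
  Eigen::array<Eigen::IndexPair<int>, 1> dims = {Eigen::IndexPair<int>(1, 0)};
  Eigen::Tensor<float, 2> tc = ta.contract(tb, dims);

  return (c.rows() == tc.dimension(0)) ? 0 : 1;  // keep both results live
}
```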

@amitdo (Contributor) commented Oct 12, 2016

Tom,
So why is a588c8 (matrix only, no tensor) more than 2.5× faster in training than the tip of the repo?
Tested on my four-year-old PC: Intel i5, 4 cores, 8 GB RAM, no dedicated GPU.

@tmbdev (Owner) commented Oct 12, 2016

The speed of matrix vs tensor depends crucially on the version of Eigen; the version that you have installed may be too old. Can you install Eigen from github source and then do the benchmarks again?

(Most of the work of optimizing Eigen these days goes into the tensor code, since that's what other deep learning frameworks use.)
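
If it helps, a standalone micro-benchmark along these lines (a sketch, not a clstm tool) takes clstm out of the picture entirely:

```cpp
#include <chrono>
#include <cstdio>
#include <unsupported/Eigen/CXX11/Tensor>

int main() {
  const int n = 1024;
  Eigen::Tensor<float, 2> a(n, n), b(n, n);
  a.setRandom();
  b.setRandom();
  Eigen::array<Eigen::IndexPair<int>, 1> dims = {Eigen::IndexPair<int>(1, 0)};

  auto t0 = std::chrono::steady_clock::now();
  Eigen::Tensor<float, 2> c = a.contract(b, dims);
  auto t1 = std::chrono::steady_clock::now();

  double s = std::chrono::duration<double>(t1 - t0).count();
  // An n x n matrix product is ~2*n^3 flops.
  std::printf("%.3f s, %.2f GFLOP/s (checksum %f)\n",
              s, 2.0 * n * n * n / s / 1e9, c(0, 0));
  return 0;
}
```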

@amitdo (Contributor) commented Oct 12, 2016

I forgot to mention that I tested this with Eigen 3.3 rc1,
on Ubuntu 16.04 (gcc 5.4).

@tmbdev (Owner) commented Oct 12, 2016

That sounds odd. I benchmarked both versions against each other before switching. Recent versions of Eigen really shouldn't have big differences in linear algebra performance between tensor and matrix; after all, the tensor code drives much of Google's TensorFlow.

Is it possible your matrix code is running multicore? Checking with htop while it's running should give you some idea.
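
One asymmetry worth knowing about: Eigen's matrix products can go multicore via OpenMP, while tensor contractions stay on one core unless you hand them a thread-pool device. A sketch (assuming Eigen >= 3.3):

```cpp
#define EIGEN_USE_THREADS
#include <unsupported/Eigen/CXX11/Tensor>

int main() {
  const int n = 1024;
  Eigen::Tensor<float, 2> a(n, n), b(n, n), c(n, n);
  a.setRandom();
  b.setRandom();
  Eigen::array<Eigen::IndexPair<int>, 1> dims = {Eigen::IndexPair<int>(1, 0)};

  // Without a device, contract() runs on one core. With a
  // ThreadPoolDevice, Eigen splits the contraction across threads.
  Eigen::ThreadPool pool(4);              // 4 worker threads
  Eigen::ThreadPoolDevice dev(&pool, 4);
  c.device(dev) = a.contract(b, dims);
  return 0;
}
```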

@amitdo (Contributor) commented Oct 12, 2016

I checked with htop. Only one CPU runs at 100%; the others are idle (<2%) most of the time.

@tmbdev (Owner) commented Oct 12, 2016

Hmmm... I'm not sure. At this point, all I can say is that when I made the switch, the two performed pretty much identically, and both basically matched a good BLAS implementation. Whatever the cause is, it ought to be fixable without switching back to the Eigen matrix backend. I'll leave the bug open and see whether I can reproduce it and find a quick fix.

@amitdo (Contributor) commented Oct 12, 2016

Make sure you are testing CPU-only, without GPU involvement.

@jbaiter commented Oct 12, 2016

Weird, with the latest code from master I get step times of ~0.3s with the uw3-500 dataset, while with the a588c8 version they are between 6.5s and 10.5s, i.e. the matrix code is significantly slower. Both were compiled with Eigen 3.3beta2 on Debian unstable.

Edit: with the latest Eigen checkout, the performance difference remains.

@amitdo (Contributor) commented Oct 12, 2016

> with the latest code from master I get step times of ~0.3s with the uw3-500 dataset

I get ~0.8s with the uw3-500 dataset.

> while with the a588c8 version they are between 6.5s and 10.5s

I get ~0.3s...

@jbaiter commented Oct 12, 2016

The slowness of the matrix code was due to my using debug=-1 (which, among other things, enables -Ofast). With the default flags, the matrix version runs at ~0.3s and the tensor version at between 0.5s and 0.9s, i.e. the same as on your machine (I'm running this on an i5 clocked at 3.1 GHz).

So it seems that to get comparable performance, the compiler flags have to be tuned for the tensor version.

@tmbdev (Owner) commented Oct 12, 2016

What's the CPU in each case?


@jbaiter commented Oct 12, 2016

Here's an excerpt of my /proc/cpuinfo:

vendor_id       : GenuineIntel
cpu family      : 6
model           : 58
model name      : Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
stepping        : 9
microcode       : 0x1c
cpu MHz         : 2899.902
cache size      : 3072 KB
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2
apicid          : 3
initial apicid  : 3
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts
bugs            :
bogomips        : 4989.61
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual

Note that there's a turbo mode where the CPU clock goes up to 3.1 GHz, which is what was active when I benchmarked clstm.

Compiler version:

g++ (Debian 6.2.0-5) 6.2.0 20160927
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

@amitdo (Contributor) commented Oct 12, 2016

I can confirm that after building the latest code with scons debug=-1, I get the same performance as the matrix-based code.

@amitdo (Contributor) commented Oct 12, 2016

My CPU:

model name  : Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
stepping    : 9
cpu MHz     : 2462.683
cache size  : 6144 KB
cpu cores   : 4

gcc version 5.4.0

@amitdo (Contributor) commented Jun 21, 2017

This issue has not been fixed yet. Every user of the master branch will suffer from the slowness unless they build with debug=-1.

A fix in SConstruct is needed.
