torch-distro luajit using system wide files #1

Closed
bmartini opened this issue Dec 9, 2014 · 4 comments

Comments

@bmartini

bmartini commented Dec 9, 2014

I tried to follow the instructions in https://github.com/soumith/torch-ship-binaries with one difference: I used the torch-distro/install/bin/luajit executable instead of the system-wide luajit. However, I found that the torch-distro luajit is using the system 'bcsave.lua' file instead of the local one.
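
A minimal diagnostic sketch (not part of the original report) is to ask the bundled binary which bcsave.lua it would actually load. It relies on package.searchpath, a Lua 5.2 extension that LuaJIT provides; the file name check_bcsave.lua below is just an example:

    -- run with the bundled binary, e.g.: torch-distro/install/bin/luajit check_bcsave.lua
    local path, err = package.searchpath("jit.bcsave", package.path)
    print(path or err)    -- expect a path under torch-distro/install, not the system Lua tree
    print(package.path)   -- the full search path this binary is actually using

If the first line points into a system location such as /usr/share or /usr/local/share, the bundled luajit is resolving the system copy, which matches what is described above.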

@soumith
Member

soumith commented Dec 9, 2014

Oh, good to know. I actually don't have any system luajit, so it definitely uses the self-contained bcsave.lua.
Did you find a fix for your case?

@bmartini
Author

bmartini commented Dec 9, 2014

I just uninstalled/removed the system luajit/lua/bcsave.lua and got it to work.
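
A less drastic check, before removing anything (an assumption on my side, not something tried in this thread), is whether a system-wide LUA_PATH is steering the bundled luajit toward the system module tree; in Lua 5.1/LuaJIT, a LUA_PATH environment variable is used in place of the compiled-in default search path unless it contains ";;":

    -- quick check for an environment override (sketch, not from the thread)
    print(os.getenv("LUA_PATH"))   -- non-nil means LUA_PATH is shaping module lookup
    print(package.path)            -- entries should point into torch-distro/install

If LUA_PATH turns out to be the cause, unsetting it before running the bundled luajit might be an alternative to uninstalling the system luajit/lua packages.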


@soumith
Member

soumith commented Dec 9, 2014

OK, I don't think I'll look into this further; it's hard to tell what luajit does.

soumith closed this as completed Dec 9, 2014
@bmartini
Author

bmartini commented Dec 9, 2014

Agreed, I got it to work so I'm happy.


soumith pushed a commit that referenced this issue Nov 13, 2016
Remove tail space in *.cmd file to avoid error
borisfom added a commit to NVIDIA/torch-distro that referenced this issue Dec 12, 2016
097a4be Merge pull request torch#280 from soumith/functional-findex
4e95c7d use FindEx in functional
a5fb4c0 Merge pull request torch#273 from NVIDIA/fp16
26e91e9 debug diagnostic fixes. true fp16 disabled for now
0f78bac Increasing buffer for getConvolutionDescriptor
089da19 Fixing refactored methods
616d867 Merge remote-tracking branch 'upstream/master' into fp16
127fabc Added new refactoring for convolution and filter descriptors
e08c0e4 Code review changes
29b0082 Merge remote-tracking branch 'upstream/master' into fp16
1f358d7 clone output tensor in FindEx
068a0d2 make VolumetricFullConvolution use find
94eb9ba functional tests pass
942d796 reset algo family on size change
ba9513c merging master
cb02776 Merge pull request torch#1 from NVIDIA/syncRNN
5c5ddcb Restored old behaviour of cudnn.benchmark
ceabc2c fixes for RNN with the new workspace mechanisms
1447ce0 add syncs to RNN
763a348 added cudnn.useFloatMathForHalf
156b7ed Fixed sticky algo modes
9465aae Revamped workspace handling in find.lua. Retired functional.lua: impossible to maintain consistently with Find. Simplified FindEx state machine: replaced with warmup iterations concept, controllable by user. FindEx still needs some work. Improved cache handling and debug print
a17af4f Restoring test
9e330de Merge branch 'master' of https://github.com/NVIDIA/cudnn.torch
b7b02c5 Adjusting half precision
2f394a4 Adjusting half precision
7cd1652 Merge remote-tracking branch 'bf/find_ex'
89abf80 Merge remote-tracking branch 'upstream/master' into find_ex
b146c80 debug print fixed
827c7f1 Merge remote-tracking branch 'bf/find_ex' into nv
d09d4e2 Refactoring for clarity and less allocations
20d3751 Debugging fallback, cleanup
92ab959 Addressed code review comments
71586fb Fixed fallback - fp16 is fully working now
5afec3b FP16 to 32 fallback implemented
243a8fb Stream awareness restored. Better WS encapsulation
1977b5f Stream awareness restored. Better WS encapsulation
f8ef44d Stream-aware version
db6e634 FindEx implementation + refactoring, take 3
REVERT: 4fc4e90 Merge branch 'fast_fp16'
REVERT: 29ea6a7 Exported cudnn.configureMath method. Overridden FP16 math for test
REVERT: 696f304 Merge remote-tracking branch 'upstream/master'

git-subtree-dir: extra/cudnn
git-subtree-split: 097a4bee165b12ee9e5060549a361a043f4db541