This repository has been archived by the owner on Aug 5, 2022. It is now read-only.
Hi, thank you for the great work. It runs faster than the original Torch on Xeon.
However, I'm having trouble loading trained .t7 files. These .t7 files were trained with GPGPU Torch and then converted for CPU. The original CPU Torch (without GPGPU) loads these .t7 files without problems, but Intel Torch shows the following error message:
$ th ...
| loading model file...
/home/.../torch/inteltorch/install/bin/lua: .../torch/inteltorch/install/share/lua/5.2/torch/
File.lua:301: Failed to load function from bytecode: binary string: not a precompiled chunk
stack traceback:
[C]: in function 'error'
.../torch/inteltorch/install/share/lua/5.2/torch/File.lua:301: in function 'readObject'
.../torch/inteltorch/install/share/lua/5.2/torch/File.lua:369: in function 'readObject'
Any idea? Thanks.
Hi kmarukawa,
Thanks for your feedback. When running ImageNet classification, we can load the snapshot files (.t7 files) without any error.
From your log, the Lua version you installed is plain Lua 5.2. Could you please install LuaJIT 2.1 (the default version) and try again?
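The "not a precompiled chunk" error at File.lua:301 is consistent with a bytecode-format mismatch: the .t7 file contains a function serialized as LuaJIT bytecode, which plain Lua 5.2 cannot load. The two formats are easy to tell apart by their signatures (plain Lua precompiled chunks begin with ESC "Lua", i.e. bytes 1B 4C 75 61; LuaJIT bytecode begins with ESC "LJ", i.e. 1B 4C 4A). A minimal sketch of a hypothetical helper (not part of Torch) that classifies a serialized blob by its header:

```python
# Hypothetical diagnostic helper, not part of Torch: classify a serialized
# Lua function by the signature at the start of its bytecode blob.
# Plain Lua precompiled chunks start with ESC "Lua" (0x1B 0x4C 0x75 0x61);
# LuaJIT bytecode dumps start with ESC "LJ" (0x1B 0x4C 0x4A).

def bytecode_flavor(blob: bytes) -> str:
    """Return 'luajit', 'lua', or 'unknown' for a bytecode blob."""
    if blob.startswith(b"\x1bLJ"):
        return "luajit"   # LuaJIT dump: loadable only by LuaJIT
    if blob.startswith(b"\x1bLua"):
        return "lua"      # plain Lua chunk: version byte follows the signature
    return "unknown"      # not precompiled bytecode at all
```

Running such a check on the function payload inside the .t7 file would show LuaJIT bytecode, which explains why a Torch built against Lua 5.2 rejects it while a LuaJIT-based build loads it.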
Hi xhzhao,
Thank you for the suggestion. You are right: I had unfortunately compiled it with Lua 5.2 while porting a GPGPU program to Intel Torch. I recompiled Intel Torch from scratch, and now it works fine. Thanks.