
'Failed to load function from bytecode:' Error while loading model file #938

Closed
ShreyasSkandan opened this issue Feb 15, 2017 · 1 comment


@ShreyasSkandan

Hi,

I'm currently trying to load a network model via Torch on an NVIDIA TX1. When I try to load the model

net = torch.load('modelfile.t7','ascii')

I get the following error:

[screenshot: bytecode-error]

The model loads fine on my Ubuntu 14.04 desktop, so I loaded the same model there, converted it to binary (the conversion step is sketched below), and then tried to load the converted file on the TX1:

net = torch.load('modelfile.bin')

But I still get a similar error:

[screenshot: binary_error_model]
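
For reference, the conversion on the desktop was just a load/save round trip, roughly like the sketch below ('modelfile.t7' and 'modelfile.bin' are placeholder names for my own files, and depending on the model you may also need cunn/cudnn loaded before deserializing):

require 'torch'
require 'nn'
-- load the ASCII-serialized model (this works fine on the desktop)
local net = torch.load('modelfile.t7', 'ascii')
-- re-save it using Torch's default binary serialization
torch.save('modelfile.bin', net)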

I've noticed that a few people have run into the same errors in the past, but most of them seem to have gotten past this by using an 'ascii' version of the model, since it's supposed to be platform-independent. I've had no luck with that. The other group who faced this problem were on 32-bit systems, but my NVIDIA TX1 is running Ubuntu 16.04 (64-bit).
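
For anyone unfamiliar with that workaround: as far as I understand it, it just means re-saving the model in torch.save's 'ascii' format on a machine where it loads, roughly like this (placeholder filenames again):

require 'torch'
require 'nn'
-- load the model in whichever format works on the desktop
local net = torch.load('modelfile.t7', 'ascii')
-- torch.save's third argument selects the serialization format ('binary' or 'ascii')
torch.save('modelfile_ascii.t7', net, 'ascii')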

For anyone willing to recreate these results:

I installed JetPack (JetPack-L4T-2.3.1-linux-x64.run) and verified that my installations of CUDA 8.0 and OpenCV are functional.

For Torch, I used dusty-nv's installation script from https://github.com/dusty-nv/jetson-reinforcement
(specifically https://github.com/dusty-nv/jetson-reinforcement/blob/master/CMakePreBuild.sh).
It all looks pretty straightforward.

And this is the code I'm trying to run on the TX1: https://github.com/jzbontar/mc-cnn
The specific model file is https://s3.amazonaws.com/mc-cnn/net_kitti_fast_-a_train_all.t7

Any tips on how to fix this problem are gladly appreciated. If anyone has ideas on how I can tweak the model on my desktop machine to make it work here, I'd love to hear them.

Thanks in advance,

Shreyas

@ShreyasSkandan (Author)

I figured out what was causing this issue and documented a possible fix in THIS POST.
