
Error When Loading the Model #14

Open
creatornadiran opened this issue Sep 3, 2021 · 2 comments

Comments

@creatornadiran

OS: Windows 11
Package: Anaconda
pytorch_mobile: ^0.2.1
PyTorch : 1.8.1

I'm getting this error when trying to load the model:

E/PyTorchMobile( 7949): assets/MobileNetV3_2.pt is not a proper model
E/PyTorchMobile( 7949): com.facebook.jni.CppException: [enforce fail at inline_container.cc:222] . file not found: archive/constants.pkl

All the ways I tried saving:

  from torch.utils.mobile_optimizer import optimize_for_mobile

  quantized_model = torch.quantization.convert(mobilenet)
  scripted_model = torch.jit.script(quantized_model)
  # saves only the state_dict, not a full TorchScript archive
  torch.save(scripted_model.state_dict(), "model_output/MobileNetV3_1.pt")
  # these two save complete TorchScript archives
  torch.jit.save(scripted_model, "model_output/MobileNetV3_2.pt")
  scripted_model.save("model_output/MobileNetV3_3.pt")
  # lite-interpreter format for mobile
  opt_model = optimize_for_mobile(scripted_model)
  opt_model._save_for_lite_interpreter("model_output/MobileNetV3_4.ptl")

I tried all the files and got the same error. Is there a version issue? How can I save my model properly?
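The error message is a hint about file layout: a TorchScript archive produced by torch.jit.save is a zip file that contains a constants.pkl entry (plus a code/ directory), while torch.save(state_dict) produces a zip without that layout, which is exactly what "file not found: archive/constants.pkl" complains about. A minimal sketch for inspecting a .pt before shipping it, using only the standard library (the function name and the fake in-memory archive are illustrative, not part of any API):

```python
import io
import zipfile

def looks_like_torchscript(path_or_file) -> bool:
    """Heuristic: True if the zip has the entries a TorchScript archive needs."""
    with zipfile.ZipFile(path_or_file) as zf:
        names = zf.namelist()
    has_constants = any(n.endswith("/constants.pkl") for n in names)
    has_code = any("/code/" in n for n in names)
    return has_constants and has_code

# Demo with an in-memory fake mimicking torch.jit.save's layout:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("archive/constants.pkl", b"")
    zf.writestr("archive/code/__torch__.py", b"")
print(looks_like_torchscript(buf))  # True
```

Running this against MobileNetV3_1.pt (the state_dict save) should return False, while the torch.jit.save outputs should return True.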

@creatornadiran creatornadiran changed the title Error When Loading Error When Loading the Model Sep 3, 2021

palakons commented Nov 9, 2022

Did you get the issue resolved?

You could try with the official .pt.

If that works, the .pt file you exported earlier was probably produced with a different (newer) Torch version than the one loading it.
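One way to check the version-mismatch theory without loading the model: a TorchScript .pt archive records its serialization format in a small version entry inside the zip, which can be read with the standard library alone. A sketch under that assumption (the "archive/" prefix matches the error above but can vary with how the file was saved; the function name is illustrative):

```python
import io
import zipfile
from typing import Optional

def archive_format_version(path_or_file) -> Optional[str]:
    """Read the serialization-format version entry from a TorchScript zip, if present."""
    with zipfile.ZipFile(path_or_file) as zf:
        for name in zf.namelist():
            if name.endswith("/version"):
                return zf.read(name).decode().strip()
    return None

# Demo with a minimal fake archive:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("archive/version", b"6\n")
print(archive_format_version(buf))  # 6
```

Comparing this number between a model that loads and one that fails would show whether the exporting PyTorch wrote a newer format than the runtime understands.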


JenkinsGage commented Nov 27, 2022

I solved it by:

    import torch
    from torch.utils.mobile_optimizer import optimize_for_mobile

    # model is loaded on the CPU
    model.eval()
    quantized_model = torch.quantization.convert(model)
    scripted_model = torch.jit.script(quantized_model)
    opt_model = optimize_for_mobile(scripted_model)
    opt_model.save('model.pt')

Both loading the model and running prediction work well for me. Here is my platform info:
OS: Windows 10
Device: Android 10
PyTorch: 1.13.0
pytorch_mobile: ^0.2.2
