
conversion to non-scalar type torch::jit::load("model.pt") #22382

Closed
caxelos opened this issue Jun 30, 2019 · 4 comments
Labels
oncall: jit Add this issue/PR to JIT oncall triage queue

Comments


caxelos commented Jun 30, 2019

🐛 Build error: conversion from ‘torch::jit::script::Module’ to non-scalar type ‘std::shared_ptr<torch::jit::script::Module>’ requested.

I am trying to use the torch::jit::load() function in order to embed my pretrained model in a C++ project. I am doing exactly what is shown here: https://pytorch.org/tutorials/advanced/cpp_export.html#a-minimal-c-application. I have searched, but I couldn't find a solution.

#include <iostream>
#include <memory>
#include "libtorch/include/torch/script.h" // One-stop header.

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }

  // Deserialize the ScriptModule from a file using torch::jit::load().
  // Here is the error!!
  std::shared_ptr<torch::jit::script::Module> module = torch::jit::load("model.pt");

  assert(module != nullptr);
  std::cout << "ok\n";
}

I compile with the command:

sudo cmake -DCMAKE_PREFIX_PATH=../libtorch .. && make

Environment

  • PyTorch version: 1.1.0

  • Is debug build: No

  • CUDA used to build PyTorch: 9.0.176

  • OS: Ubuntu 16.04.5 LTS

  • GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609

  • CMake version: version 3.5.1

  • Python version: 3.6

  • Is CUDA available: No

  • CUDA runtime version: 7.5.17

  • GPU models and configuration: Could not collect

  • Nvidia driver version: Could not collect

  • cuDNN version: Could not collect

Versions of relevant libraries:

  • [pip3] numpy==1.15.4
  • [pip3] numpydoc==0.8.0
  • [pip3] torch==1.1.0
  • [pip3] torchsummary==1.5.1
  • [pip3] torchvision==0.3.0
  • [conda] blas 1.0 mkl
  • [conda] mkl 2019.1 144
  • [conda] mkl-service 1.1.2 py36he904b0f_5
  • [conda] mkl_fft 1.0.10 py36ha843d7b_0
  • [conda] mkl_random 1.0.2 py36hd81dba3_0
  • [conda] torch 1.1.0
  • [conda] torchsummary 1.5.1
  • [conda] torchvision 0.3.0
@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Jun 30, 2019
Contributor

zdevito commented Jul 2, 2019

It looks like you are using a nightly build. We recently changed the return type of load. This should work:

 torch::jit::script::Module module = torch::jit::load("model.pt");

The tutorials and documentation still target the 1.1 release; they will be updated before we release 1.2.
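
For reference, a minimal sketch of the updated loading code against a post-1.1 nightly, with the same structure as the tutorial's example-app and only the declaration changed (illustrative, not the final 1.2 tutorial text):

#include <torch/script.h> // One-stop header.

#include <iostream>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }

  // load() now returns a torch::jit::script::Module by value
  // instead of a std::shared_ptr.
  torch::jit::script::Module module = torch::jit::load(argv[1]);

  std::cout << "ok\n";
  return 0;
}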

Author

caxelos commented Jul 2, 2019

@zdevito Thanks, it worked.

  • But now there is a problem with the assert() call. It produces the following error: error: no match for ‘operator!=’ (operand types are ‘torch::jit::script::Module’ and ‘std::nullptr_t’)
  • How can I fix this comparison?

Contributor

zdevito commented Jul 2, 2019

That assert can just be removed. Module isn't a pointer anymore; it behaves like torch::Tensor, and load will never return an invalid module.
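
If you still want to guard against a bad model file, note that load() signals failure by throwing rather than by returning null, so a try/catch replaces the old assert. A minimal sketch, assuming the c10::Error exception type used by current libtorch:

#include <torch/script.h>

#include <iostream>

int main() {
  torch::jit::script::Module module;
  try {
    module = torch::jit::load("model.pt");
  } catch (const c10::Error& e) {
    // load() throws on a missing or corrupt file instead of
    // returning an invalid module.
    std::cerr << "error loading the model\n";
    return -1;
  }
  std::cout << "ok\n";
  return 0;
}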


hamidkhb commented Aug 1, 2019

@zdevito I did what you suggested but now I'm stuck here:

at::Tensor output = model[0].forward(inputs).toTensor();

I get an Illegal instruction crash.

#include <torch/script.h>

#include <iostream>
#include <vector>

using namespace std;
// module_type is not shown in the original comment; presumably an
// alias for the type returned by torch::jit::load().
using module_type = torch::jit::script::Module;

void test(vector<module_type>& model) {
    // Pseudo input matching the model's expected shape.
    vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 224, 224}));
    at::Tensor output = model[0].forward(inputs).toTensor();
    cout << output << endl;
}

int main(int argc, const char* argv[]) {
    if (argc == 2) {
        cout << argv[1] << endl;
    } else {
        cerr << "no path of model is given" << endl;
        return -1;
    }
    // test
    module_type module = torch::jit::load(argv[1]);
    vector<module_type> modul;
    modul.push_back(module);
    test(modul);
    return 0;
}
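
For comparison, here is the forward-pass pattern from the same tutorial as a self-contained sketch. It assumes a 1.2-style libtorch where load() returns a Module by value; it shows the documented usage only and is not a diagnosis of the Illegal instruction crash:

#include <torch/script.h>

#include <iostream>
#include <vector>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }

  torch::jit::script::Module module = torch::jit::load(argv[1]);

  // Build a dummy input of the shape the model expects.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));

  // Run the model and unwrap the result tensor.
  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output << "\n";
  return 0;
}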
