[torchscript] RuntimeError: 'module' object is not subscriptable #101
Comments
@yarkable Could you please help me solve this problem? Thank you very much! |
It seems that you have some problems while importing modules; maybe you can check if you have an __init__.py. |
This is the output of the 'tree' command in the project root directory:
I cloned the whole project and downloaded the models to the pretrained folder.
So I think the __init__.py is OK. |
By the way, I tried to use modnet.pt directly to build an executable, but failed to load the model. CMakeLists.txt:
example-app.cpp:
It builds successfully but fails at runtime. Command: ./example-app ../modnet.pt. Could you please tell us the required environment and how to use modnet.pt in C++? |
Hi, I just cloned the project and downloaded the official pretrained model, then typed the command.
It works successfully; maybe you should clone it once again and see whether the error still occurs? |
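For reference, the export-and-reload round trip described above can be sketched as follows. The `nn.Sequential` here is a stand-in for the real MODNet, and the file name is illustrative, not the repo's actual script:

```python
import torch
import torch.nn as nn

# Stand-in for MODNet: any nn.Module traces the same way.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
model.eval()

# Trace with a dummy input of the size the C++ side will feed in.
example = torch.ones(1, 3, 640, 480)
traced = torch.jit.trace(model, example)
traced.save("modnet_sketch.pt")  # illustrative file name

# Reload and sanity-check before handing the file to libtorch.
reloaded = torch.jit.load("modnet_sketch.pt")
with torch.no_grad():
    out = reloaded(example)
print(tuple(out.shape))
```

Verifying the reloaded model in Python first isolates export problems from libtorch build problems.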
What are your torch and libtorch versions, please? |
Btw, here is my code for classification using C++:

#include <torch/script.h>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

using namespace cv;
using namespace std;

int main(int argc, char* argv[]) {
    if (argc != 2) {
        std::cerr << "usage: example-app <path-to-exported-script-module>\n";
        return -1;
    }
    torch::jit::script::Module module;
    try {
        module = torch::jit::load(argv[1]);
    }
    catch (const c10::Error& e) {
        std::cerr << "error loading the module\n";
        return -1;
    }
    std::cout << "ok!\n";

    // Preprocess the image: invert, keep one channel, scale to [0, 1].
    string path = "/pytorch-deployment/assets/3.jpg";
    Mat img = imread(path), img_float;
    cvtColor(img, img, COLOR_BGR2RGB);
    bitwise_not(img, img);
    vector<Mat> mv;
    split(img, mv);
    img = mv[1];
    img.convertTo(img_float, CV_32F, 1.0 / 255);
    resize(img_float, img_float, Size(28, 28));

    // Wrap the buffer as an NHWC tensor, then permute to NCHW.
    auto img_tensor = torch::from_blob(img_float.data, {1, 28, 28, 1}, at::kFloat).permute({0, 3, 1, 2});

    // Create a vector of inputs and run the model.
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(img_tensor);
    auto output = module.forward(inputs).toTensor();
    cout << output << endl;
    auto index = output.argmax(1);
    cout << "The predicted class is: " << index << endl;
    return 0;
}

You can re-clone the project and generate a TorchScript model to see if your code is correct. |
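For cross-checking the OpenCV preprocessing above, the same steps can be mirrored in Python with plain torch ops. This is only a sketch for verifying tensor layout: random data stands in for the JPEG, and it makes no claim about the repo's own pipeline:

```python
import torch
import torch.nn.functional as F

# Fake 480x640 3-channel image in place of imread(path).
img = torch.randint(0, 256, (480, 640, 3), dtype=torch.uint8)

img = 255 - img                       # bitwise_not on 8-bit data
channel = img[:, :, 1].float() / 255  # keep one channel, scale like convertTo
x = channel[None, None]               # add batch and channel dims: 1x1xHxW
x = F.interpolate(x, size=(28, 28), mode="bilinear", align_corners=False)

# Same NCHW layout that from_blob(...).permute({0, 3, 1, 2}) produces.
print(tuple(x.shape))  # (1, 1, 28, 28)
```

Comparing shapes (and value ranges) between the two sides is a quick way to catch NHWC/NCHW mix-ups before blaming the model.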
I'm on torch 1.6.0, and I don't have libtorch. |
I tried again with torch version 1.3.1 (GPU), and the same errors occurred. Maybe I should install torch 1.6.0 and try again. |
@czHappy Lol, I used to use libtorch, but in this project I just export it to a TorchScript version and give it to the other engineer. 🤣 |
@yarkable I used torch 1.6.0 (CPU) and modified your export script to produce a CPU TorchScript model successfully. This suggests the stated requirement (torch >= 1.2.0) is not accurate: I tried torch 1.2.0, 1.3.1, and 1.4.0, and none of them worked, but torch 1.6.0 is OK. Then I used C++ to load the .pt file and test the forward function, and it really works! Anyway, thanks for your excellent work and patient replies! The details are as follows:

CMakeLists.txt:
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example-app)
find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
# The following code block is suggested to be used on Windows.
# According to https://github.com/pytorch/pytorch/issues/25457,
# the DLLs need to be copied to avoid memory errors.
if (MSVC)
file(GLOB TORCH_DLLS "${TORCH_INSTALL_PREFIX}/lib/*.dll")
add_custom_command(TARGET example-app
POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different
${TORCH_DLLS}
$<TARGET_FILE_DIR:example-app>)
endif (MSVC)

example-app.cpp:
#include <torch/script.h> // One-stop header.
#include <vector>
#include <iostream>
#include <memory>
int main(int argc, const char* argv[]) {
if (argc != 2) {
std::cerr << "usage: example-app <path-to-exported-script-module>\n";
return -1;
}
torch::jit::script::Module module;
try {
// Deserialize the ScriptModule from a file using torch::jit::load().
module = torch::jit::load(argv[1]);
}
catch (const c10::Error& e) {
std::cerr << "error loading the model\n";
return -1;
}
std::cout << "ok\n";
// Create a vector of inputs.
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 640, 480}));
// Execute the model and turn its output into a tensor.
at::Tensor output = module.forward(inputs).toTensor();
//std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
std::cout << output[0][0][0][0] << std::endl;
}
This is just like the minimal example in the official documentation: https://pytorch.org/cppdocs/installing.html |
Good job |
When I export the TorchScript version of MODNet, an error occurs:
Environment: CentOS 7, torch 1.3.1 (GPU)
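For what it's worth, one common trigger for `RuntimeError: 'module' object is not subscriptable` during scripting is indexing or iterating submodules that TorchScript does not treat as a container; `nn.ModuleList` (or `nn.Sequential`) is required for that. A minimal sketch of the working pattern, with hypothetical layer names rather than MODNet's actual code:

```python
import torch
import torch.nn as nn

class BlockStack(nn.Module):
    """Hypothetical stack of layers, not MODNet's real architecture."""
    def __init__(self):
        super().__init__()
        # nn.ModuleList can be iterated inside TorchScript; keeping the
        # layers in a plain Python list or generic submodule cannot.
        self.blocks = nn.ModuleList(nn.Linear(4, 4) for _ in range(2))

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x

scripted = torch.jit.script(BlockStack())
out = scripted(torch.ones(1, 4))
print(tuple(out.shape))  # (1, 4)
```

Whether this is the cause in MODNet's case would depend on the exact traceback, which is not shown above.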