Running inference with Pytorch backend on Jetson nano #4298

Closed
MhdKAT opened this issue Apr 27, 2022 · 0 comments

MhdKAT commented Apr 27, 2022

Description
I am trying to run a simple inference using the PyTorch backend on a Jetson Nano. My Triton installation works for all the other backends, except for PyTorch, which keeps throwing the same error.

Triton Information
What version of Triton are you using?
2.19.0 for Jetson

Are you using the Triton container or did you build it yourself?
I followed your installation steps in the docs.
To Reproduce
I first converted a simple model to TorchScript as follows:
import torch

# load a pretrained VGG11 on CPU and trace it to TorchScript
model = torch.hub.load('pytorch/vision:v0.10.0', 'vgg11', pretrained=True).cpu()
model.eval()
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
traced_script_module(example)
traced_script_module.save("vgg.pt")
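
The traced model then goes into the Triton model repository. For reference, the layout and config.pbtxt I would expect look roughly like the sketch below (the INPUT__0 / OUTPUT__0 names, the dims, and the model.pt filename follow the PyTorch backend conventions; they are illustrative, not copied verbatim from my setup):

models/
└── vgg/
    ├── config.pbtxt
    └── 1/
        └── model.pt            # the vgg.pt traced above, renamed to the backend's default filename

with config.pbtxt along the lines of:

name: "vgg"
platform: "pytorch_libtorch"
max_batch_size: 0
input [
  {
    name: "INPUT__0"            # TorchScript models carry no input names, so positional INPUT__<n> names are used
    data_type: TYPE_FP32
    dims: [ 1, 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 1, 1000 ]
  }
]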

Then I followed the examples to send inference requests with gRPC:
input0.set_data_from_numpy(test_img)
output = tritongrpcclient.InferRequestedOutput(output_name)
response = triton_client.infer(model_name=model_name,
                               inputs=[input0], outputs=[output])
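
For completeness, a self-contained version of that client call looks roughly like the sketch below (the server URL, the INPUT__0 / OUTPUT__0 tensor names, and the random test image are placeholders rather than my exact script):

import numpy as np
import tritonclient.grpc as tritongrpcclient

# connect to the local Triton gRPC endpoint (default port 8001)
triton_client = tritongrpcclient.InferenceServerClient(url="localhost:8001")

# dummy input matching the traced shape 1x3x224x224, FP32
test_img = np.random.rand(1, 3, 224, 224).astype(np.float32)

input0 = tritongrpcclient.InferInput("INPUT__0", [1, 3, 224, 224], "FP32")
input0.set_data_from_numpy(test_img)
output = tritongrpcclient.InferRequestedOutput("OUTPUT__0")

response = triton_client.infer(model_name="vgg",
                               inputs=[input0], outputs=[output])
print(response.as_numpy("OUTPUT__0").shape)   # expect (1, 1000)
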
But I kept getting this error:

InferenceServerException: [StatusCode.INTERNAL] PyTorch execute failure: forward() Expected a value of type 'Tensor' for argument 'x' but instead found type 'NoneType'.
Position: 1
Declaration: forward(__torch__.torchvision.models.vgg.VGG self, Tensor x) -> (Tensor)
Exception raised from checkArg at /opt/package_build/pytorch/aten/src/ATen/core/function_schema_inl.h:186 (most recent call first):
frame #0: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xd8 (0x7f6f903888 in /opt/tritonserver/backends/pytorch/libc10.so)
frame #1: + 0x17b57d0 (0x7f711767d0 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #2: + 0x17b5170 (0x7f71176170 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #3: torch::jit::Method::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&) const + 0x378 (0x7f73500708 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #4: + 0x1efd8 (0x7f874aefd8 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #5: + 0x158fc (0x7f874a58fc in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #6: + 0x17e78 (0x7f874a7e78 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #7: TRITONBACKEND_ModelInstanceExecute + 0x324 (0x7f874a8e64 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #8: + 0x215c1c (0x7f87e8ec1c in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #9: + 0x21628c (0x7f87e8f28c in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #10: + 0xd8ae8 (0x7f87d51ae8 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #11: + 0x211cf0 (0x7f87e8acf0 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #12: + 0xbbe94 (0x7f8787de94 in /usr/lib/aarch64-linux-gnu/libstdc++.so.6)
frame #13: + 0x7088 (0x7f87c28088 in /lib/aarch64-linux-gnu/libpthread.so.0)

MhdKAT closed this as completed Apr 28, 2022