PyTorch execute failure: forward() is missing value for argument 'input' #3633
@lincong8722 It looks like the PyTorch model expects 2 inputs but only 1 is given. Can you share the model config for the PyTorch TorchScript model?
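One quick way to check how many inputs a TorchScript model actually declares is to inspect `model.code` or the `forward` schema after loading. A minimal sketch with a hypothetical toy module standing in for the user's model (the real one would be loaded with `torch.jit.load("model.pt")`):

```python
import torch

# Toy stand-in for the user's model: like the Multimodal module in the
# traceback, forward() declares two tensor arguments.
class Toy(torch.nn.Module):
    def forward(self, image: torch.Tensor, input: torch.Tensor):
        return image, input

scripted = torch.jit.script(Toy())

# model.code shows the compiled forward() source, including its signature
print(scripted.code)

# The schema lists every declared argument (the first is always 'self')
arg_names = [a.name for a in scripted.forward.schema.arguments]
print(arg_names)  # ['self', 'image', 'input']
```

If the schema lists more arguments than the Triton model config provides inputs for, every extra argument must either be supplied by the client or given a default in the model.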
@CoderHam Hello, my .pt model prints like this:
@CoderHam I printed model.code.
There are two inputs when the model is deployed.
This model does have two inputs, but I only pass one input to the model at inference time; I do not use text_features, like this:
What should I do?
@lincong8722 You should reach out to the people who created the model, as they will be best able to explain this. This is not a Triton issue but rather a PyTorch framework issue. Closing this ticket as such.
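If the text branch is genuinely unused at inference time, one common workaround is to export a wrapper module whose `forward()` takes only the image and feeds a placeholder to the second argument. A hedged sketch, where `Multimodal` is a hypothetical stand-in for the real model and the `(batch, 512)` text-feature shape is an assumption:

```python
import torch

# Hypothetical stand-in for the two-input model from the traceback,
# whose declaration is forward(self, Tensor image, Tensor input).
class Multimodal(torch.nn.Module):
    def forward(self, image: torch.Tensor, input: torch.Tensor):
        return image.mean(dim=[1, 2, 3]), input.mean(dim=1)

# Wrapper exposing a single-input forward(); the unused text branch
# receives a zero placeholder. The 512 feature length is assumed.
class ImageOnly(torch.nn.Module):
    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.model = model

    def forward(self, image: torch.Tensor):
        dummy_text = torch.zeros([image.shape[0], 512],
                                 dtype=image.dtype, device=image.device)
        return self.model(image, dummy_text)

# Script the wrapper and save it; Triton then sees a one-input model.
wrapped = torch.jit.script(ImageOnly(Multimodal()))
img_out, txt_out = wrapped(torch.randn(2, 3, 224, 224))
```

Whether zeros are a safe placeholder depends on the model's internals, which is why the maintainer's advice to ask the model's authors still applies.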
After the model starts successfully, `PyTorch execute failure: forward() is missing value for argument 'input'` appears during the test phase on the client. The following is my ensemble model. How can I solve it?
Traceback (most recent call last):
File "test.py", line 151, in
responses.append(async_request.get_result())
File "/usr/local/lib/python3.8/dist-packages/tritonclient/http/__init__.py", line 1474, in get_result
_raise_if_error(response)
File "/usr/local/lib/python3.8/dist-packages/tritonclient/http/__init__.py", line 64, in _raise_if_error
raise error
tritonclient.utils.InferenceServerException: in ensemble 'pl_video_tag', PyTorch execute failure: forward() is missing value for argument 'input'. Declaration: forward(torch.multimodal.model.multimodal_transformer.___torch_mangle_9591.Multimodal self, Tensor image, Tensor input) -> ((Tensor, Tensor))
Exception raised from checkAndNormalizeInputs at /opt/pytorch/pytorch/aten/src/ATen/core/function_schema_inl.h:234 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6c (0x7fdd8470c63c in /opt/tritonserver/backends/pytorch/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xfa (0x7fdd846d7a28 in /opt/tritonserver/backends/pytorch/libc10.so)
frame #2: + 0x12401ed (0x7fdd474a01ed in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #3: torch::jit::GraphFunction::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&) + 0x31 (0x7fdd4a04d791 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #4: torch::jit::Method::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10::IValue, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, c10::IValue> > > const&) + 0x168 (0x7fdd4a060538 in /opt/tritonserver/backends/pytorch/libtorch_cpu.so)
frame #5: + 0x18ad8 (0x7fdd84ffcad8 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #6: + 0x1efd2 (0x7fdd85002fd2 in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #7: TRITONBACKEND_ModelInstanceExecute + 0x38a (0x7fdd8500466a in /opt/tritonserver/backends/pytorch/libtriton_pytorch.so)
frame #8: + 0x30c859 (0x7fddc48c7859 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #9: + 0x109ec0 (0x7fddc46c4ec0 in /opt/tritonserver/bin/../lib/libtritonserver.so)
frame #10: + 0xd6de4 (0x7fddc4110de4 in /lib/x86_64-linux-gnu/libstdc++.so.6)
frame #11: + 0x9609 (0x7fddc458e609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #12: clone + 0x43 (0x7fddc3dfe293 in /lib/x86_64-linux-gnu/libc.so.6)
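Since the traceback's declaration is `forward(self, Tensor image, Tensor input)`, the Triton PyTorch (libtorch) backend needs the model config to declare both tensors; with that backend, inputs named `INPUT__<index>` are bound to `forward()` arguments by position. A hedged `config.pbtxt` sketch — the model name, shapes, and data types below are assumptions, not taken from the ensemble in question:

```
name: "multimodal_model"        # hypothetical model name
platform: "pytorch_libtorch"
max_batch_size: 8
input [
  {
    name: "INPUT__0"            # bound to the first forward() argument: image
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]       # assumed image shape
  },
  {
    name: "INPUT__1"            # bound to the second forward() argument: input
    data_type: TYPE_FP32
    dims: [ 512 ]               # assumed text-feature length
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 512 ]
  },
  {
    name: "OUTPUT__1"
    data_type: TYPE_FP32
    dims: [ 512 ]
  }
]
```

The ensemble's scheduling step that feeds this model must likewise map a tensor to both `INPUT__0` and `INPUT__1`; omitting the second mapping reproduces exactly the "missing value for argument 'input'" failure above.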