Open Inference Protocol with nightly build not working #2951
Comments
Hi @harshita-meena Do you mind trying this script? We are running this nightly.
Currently I am trying to set up the tests, but they will probably not fail because the images used in the OIP kserve YAMLs are custom ones (HTTP and gRPC); neither refers to the nightly images, though both are part of test_mnist.sh at lines 189 and 215.
Still struggling to get the tests running; could you approve the workflow for this PR? The primary reason I am trying to get this working is that I wanted to use the Open Inference Protocol for a non-kserve deployment. Everything works until the worker dies after the post-processing step. I was relying heavily on this because OIP provides a great generic metadata/inference API. If this doesn't work, I will use inference.proto instead.
Hi @harshita-meena Thanks for the details. Checking with kserve regarding this. Will update.
You can reproduce the worker-died issue if you build the Dockerfile that is part of this issue with config.properties and create a new mnist.mar with a slightly modified handler for OIP-specific requests (attached as a zip).
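For context, a minimal sketch of what an OIP-aware preprocess step in a custom TorchServe handler might look like. The function name, the `BYTES` base64 handling, and the batching are illustrative assumptions; the field names (`inputs`, `name`, `datatype`, `shape`, `data`) come from the Open Inference Protocol request schema:

```python
import base64

def preprocess_oip(requests):
    """Extract tensor data from Open Inference Protocol style requests.

    Assumes each request is a dict shaped like an OIP inference request:
    {"inputs": [{"name": ..., "datatype": ..., "shape": [...], "data": [...]}]}
    (a sketch, not the exact handler attached to this issue).
    """
    batch = []
    for req in requests:
        for inp in req.get("inputs", []):
            data = inp["data"]
            # OIP allows BYTES inputs to arrive base64-encoded; decode if so.
            if inp.get("datatype") == "BYTES" and data and isinstance(data[0], str):
                data = [base64.b64decode(d) for d in data]
            batch.append(data)
    return batch
```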
Hi @harshita-meena Thanks! This is a new feature and there may be bugs. Will update when I reproduce it.
Hi @agunapal I was wondering if you identified the reason for the issue or got a chance to discuss it with kserve.
I figured out the solution and will reply back with it soon. Thank you!
Hi @harshita-meena I was able to reproduce the issue with the steps you shared. Please feel free to send a PR if you have identified the problem.
It is about how OIP expects the response: I was sending only a dict or only a list, but if I send a list of dicts containing the parameters specific to the OIP response, the prediction succeeds.
Just saw your message @agunapal; if it helps, I can submit a PR with a handler specific to OIP.
Hi @harshita-meena The error you posted and the one I see are in pre-processing, so how is this related to post-processing? Also, I'm wondering how we address this backward-compatibility-breaking change for post-processing.
Apologies if my stream of errors confused you about the actual issue. My main goal is to get inference working over gRPC using the Open Inference Protocol in a basic deployment that does not use kserve. The pre-processing error occurred because, when I first opened this issue, I was using the old handler, which extracted the request following the old inference.proto rather than the Open Inference Protocol. The second error I posted came after I finally resolved pre-processing but could not figure out the post-processing step. Yesterday I finally worked out what post-processing should look like. Overall, if your question is how we can prevent the worker from crashing, the answer lies in the OIP server logic in GRPCJob.java. Posting my findings from yesterday:
But if I send the response as a list of dictionaries, the parsing logic goes through, so the OIP response can process it:

```json
[{"model_name": ....., "outputs": [{"name": "output-0", "datatype": "INT64", "shape": ["1"], "data": [0]}]}]
```
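The finding above can be sketched as a handler postprocess step. This is a hedged illustration, not the exact handler from the attached zip; the output name (`output-0`), datatype, and shape handling are assumptions, while the field names follow the Open Inference Protocol response schema:

```python
def postprocess_oip(predictions, model_name="mnist"):
    """Wrap raw predictions as a list of OIP-style response dicts.

    Returning a bare dict or a bare list crashed the worker; returning
    a list with one OIP response dict per request parses correctly
    (per the findings in this issue). Output name and datatype below
    are illustrative placeholders.
    """
    return [
        {
            "model_name": model_name,
            "outputs": [
                {
                    "name": "output-0",
                    "datatype": "INT64",
                    "shape": [len(pred)],
                    "data": pred,
                }
            ],
        }
        for pred in predictions
    ]
```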
Thanks for the detailed findings. cc @lxning
🐛 Describe the bug
While trying to run load tests with the latest merged changes for the v2 Open Inference Protocol, I noticed that the mnist example fails in the preprocessing step. https://github.com/pytorch/serve/pull/2609/files
Error logs
The server side showed this error:
Installation instructions
Copied the model from gs://kfserving-examples/models/torchserve/image_classifier/v2/model-store/mnist.mar
Built the Docker image using

```
docker build -f Dockerfile -t metadata .
```

and brought it up locally. Ran the ghz load test tool with:
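ghz drives a gRPC endpoint from a JSON payload matching the request message. A minimal sketch of what an OIP `ModelInferRequest` body for mnist might look like; the input name, shape, and placeholder pixel values are assumptions, while the field names follow the KServe v2 / Open Inference Protocol proto:

```python
import json

def build_infer_request(model_name, pixels):
    """Build an OIP-style ModelInferRequest body, e.g. for ghz's --data flag.

    The input name ("input-0") and the [1, N] shape are illustrative
    assumptions; field names follow the KServe v2 / OIP proto schema.
    """
    return {
        "model_name": model_name,
        "inputs": [
            {
                "name": "input-0",
                "datatype": "FP32",
                "shape": [1, len(pixels)],
                "contents": {"fp32_contents": pixels},
            }
        ],
    }

# Serialize a dummy 28x28 image (all zeros) as the request payload.
payload = json.dumps(build_infer_request("mnist", [0.0] * 784))
```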
Model Packaging
Used an existing packaged model mnist.mar at gs://kfserving-examples/models/torchserve/image_classifier/v2
config.properties
```
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
enable_metrics_api=true
model_metrics_auto_detect=true
metrics_mode=prometheus
number_of_netty_threads=32
job_queue_size=1000
enable_envvars_config=true
model_store=/home/model-server/model-store
load_models=mnist.mar
workflow_store=/home/model-server/wf-store
```
Versions
```
Environment headers
Torchserve branch:
**Warning: torchserve not installed ..
torch-model-archiver==0.9.0
Python version: 3.7 (64-bit runtime)
Python executable: /Users/hmeena/development/ml-platform-control-planes/venv/bin/python
Versions of relevant python libraries:
numpy==1.21.6
requests==2.31.0
requests-oauthlib==1.3.1
torch-model-archiver==0.9.0
wheel==0.41.0
**Warning: torch not present ..
**Warning: torchtext not present ..
**Warning: torchvision not present ..
**Warning: torchaudio not present ..
Java Version:
OS: Mac OSX 11.7.8 (x86_64)
GCC version: N/A
Clang version: 12.0.0 (clang-1200.0.32.29)
CMake version: version 3.23.2
Versions of npm installed packages:
**Warning: newman, newman-reporter-html markdown-link-check not installed...
```
Repro instructions
Same as the installation instructions above.
Possible Solution
I am unsure how well OIP works with TorchServe at the moment. I tried a small ranker example and it fails in the post-processing step, where the worker crashes completely; it is not able to send the response as a ModelInferResponse.