[BUG] `mlflow models serve` fails with HTTP 500 instead of 400 on bad input #4897
Comments
Hello @mmaitre314! I would like to work on this issue. It seems that we are incorrectly passing a value to the argument in `mlflow/mlflow/pyfunc/scoring_server/__init__.py` (lines 80 to 89 at `b945297`).
The function in question is defined at lines 49 to 52 at `b945297`, and the problem lies in lines 3 to 28 at `b945297`.
There are two ways to fix this issue:
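The general shape of the fix can be sketched as follows (a minimal illustration with hypothetical helper and field names, not MLflow's actual code): catch malformed-input exceptions while parsing the request body and translate them into a 400 response, so that only genuinely unexpected failures surface as 500.

```python
import json


def parse_invocation_body(raw_body: str) -> tuple[int, dict]:
    """Parse a scoring request body; return (http_status, payload).

    Hypothetical sketch: malformed client input becomes a 400 with an
    explanatory message instead of propagating as an unhandled exception,
    which a WSGI server would report as a 500.
    """
    try:
        data = json.loads(raw_body)
    except json.JSONDecodeError as e:
        # The client sent invalid JSON -- a client error, so return 400.
        return 400, {"error": f"Failed to parse input as JSON: {e}"}
    if not isinstance(data, dict) or "data" not in data:
        # A structurally wrong payload is also a client error.
        return 400, {"error": "Body must be a JSON object with a 'data' field"}
    return 200, {"predictions": data["data"]}


# The malformed payload from this bug report now yields 400, not 500:
status, body = parse_invocation_body('{"data":0.0199132142]}')
assert status == 400
```

The key design point is that the parse step distinguishes "the client's request is invalid" (4xx) from "the server hit an unexpected condition" (5xx), which is exactly the distinction the bug report asks for.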
Hi @abatomunkuev! Thank you for the detailed root cause analysis and willingness to contribute. We'd be very excited about your contributing a fix; I agree that solution #1 is better. Please feel free to file a pull request, and let me know if you have any questions!
Hello @dbczumar! I am currently having some issues reproducing the error. It seems that to start the server I need an ML model. Could you please guide me through how to properly serve a model, from model building to serving? I am trying to run this script:
I have gone through the contribution guidelines, created a conda environment, and installed the dependencies.
Hi @abatomunkuev, are you sure that MLflow and pandas are installed in your conda environment?
@abatomunkuev if you run “which python” and “which pip”, do the resulting locations reside within the expected conda environment? |
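The same sanity check can be done from within Python (a small sketch; in an activated conda environment, all three paths below should live under the environment's prefix):

```python
import shutil
import sys

# The interpreter actually executing this script:
print(sys.executable)

# The `python` and `pip` a shell command would resolve to; in an activated
# conda environment these should point inside the env, not a system install.
print(shutil.which("python"))
print(shutil.which("pip"))
```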
@dbczumar I got it working. I had to run the Python script from within the conda environment and also install the dependencies into that environment.
@dbczumar I have created a pull request. However, when I was running the tests with the following command, some of them failed because of my code changes. Thank you.
Closing now that #5003 has been merged. Thanks @abatomunkuev ! |
Thank you for submitting an issue. Please refer to our issue policy for additional information about bug reports. For help with debugging your code, please refer to Stack Overflow.
Please fill in this bug report template to ensure a timely and thorough response.
Willingness to contribute
The MLflow Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the MLflow code base?
System information
- MLflow version (run `mlflow --version`): 1.20.2
- Command used to serve the model: `mlflow models serve -m runs:/5f8aee52fcb442388368af4da658b398/model --no-conda`
- Command that triggers the error: `curl -i -X POST -d "{\"data\":0.0199132142]}" -H "Content-Type: application/json" http://localhost:5000/invocations`
Describe the problem
Describe the problem clearly here. Include descriptions of the expected behavior and the actual behavior.
Submitting an inference request with invalid content to the MLflow model server returns HTTP error 500 'Internal Server Error' instead of HTTP error 400 'Bad Request'. This prevents proper error handling on the client side and blocks REST API fuzzing.
Example: the `curl` command above, whose body is not valid JSON, produces a 500 response.
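One concrete way the wrong status code breaks clients (an illustrative sketch, not MLflow code): clients commonly branch retry and error handling on the HTTP status class, so a 500 for bad input makes a client retry a request that can never succeed instead of surfacing a validation error.

```python
def classify_response(status: int) -> str:
    """Illustrative client-side handling keyed on the HTTP status class."""
    if 200 <= status < 300:
        return "success"
    if 400 <= status < 500:
        # Client error: the request itself is wrong; retrying is pointless.
        return "bad_request_do_not_retry"
    if 500 <= status < 600:
        # Server error: the request may be fine; retrying is reasonable.
        return "server_error_retry"
    return "unexpected"


# With the bug, a malformed payload looks retryable to the client:
assert classify_response(500) == "server_error_retry"
# With the fix, it is correctly reported as a non-retryable client error:
assert classify_response(400) == "bad_request_do_not_retry"
```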
Code to reproduce issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
What component(s), interfaces, languages, and integrations does this bug affect?
Components
- `area/artifacts`: Artifact stores and artifact logging
- `area/build`: Build and test infrastructure for MLflow
- `area/docs`: MLflow documentation pages
- `area/examples`: Example code
- `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- `area/models`: MLmodel format, model serialization/deserialization, flavors
- `area/projects`: MLproject format, project running backends
- `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- `area/server-infra`: MLflow Tracking server backend
- `area/tracking`: Tracking Service, tracking client APIs, autologging

Interface
- `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- `area/windows`: Windows support

Language
- `language/r`: R APIs and clients
- `language/java`: Java APIs and clients
- `language/new`: Proposals for new client languages

Integrations
- `integrations/azure`: Azure and Azure ML integrations
- `integrations/sagemaker`: SageMaker integrations
- `integrations/databricks`: Databricks integrations