Add support for MLServer in the SKLearn predictor #1155
Conversation
Hi @adriangonz. Thanks for your PR. I'm waiting for a kubeflow member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
    if extensions.ContainerConcurrency != nil {
        arguments = append(arguments, fmt.Sprintf("%s=%s", constants.ArgumentWorkers, strconv.FormatInt(*extensions.ContainerConcurrency, 10)))
    }
    k.Container.Env = append(
Does MLServer support command-line arguments? If we can use the same set of arguments, that helps maintain backwards compatibility; otherwise we can do an if/else check based on the image name, as in the sketch below.
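A minimal sketch of that image-based branch, assuming a hypothetical helper name (buildWorkerSettings) and environment variable (MLSERVER_PARALLEL_WORKERS) purely for illustration; MLServer's actual settings may differ:

```go
package sklearn

import (
	"fmt"
	"strconv"
	"strings"

	v1 "k8s.io/api/core/v1"

	"github.com/kubeflow/kfserving/pkg/constants"
)

// buildWorkerSettings is a hypothetical helper: the legacy sklearnserver
// image takes CLI arguments, while MLServer would be configured through
// environment variables, so we branch on the image name.
func buildWorkerSettings(image string, concurrency int64) ([]string, []v1.EnvVar) {
	workers := strconv.FormatInt(concurrency, 10)
	if strings.Contains(image, "seldonio/mlserver") {
		// Assumed variable name for illustration; MLServer reads
		// MLSERVER_*-prefixed settings from the environment.
		return nil, []v1.EnvVar{{Name: "MLSERVER_PARALLEL_WORKERS", Value: workers}}
	}
	// Existing behavior: pass --workers=<n> style arguments.
	return []string{fmt.Sprintf("%s=%s", constants.ArgumentWorkers, workers)}, nil
}
```

Branching on the image name would keep the existing sklearnserver argument format untouched, which preserves backwards compatibility for V1 deployments.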
"image": "gcr.io/kfserving/sklearnserver", | ||
"defaultImageVersion": "v0.4.1", | ||
"image": "docker.io/seldonio/mlserver", | ||
"defaultImageVersion": "0.1.2", |
Let's still keep the default on the old v1 version for a few releases, and then we can start defaulting to MLServer once users migrate away from v1. With v1beta1, users can specify the image and version on the inference service spec itself if they want to use MLServer.
Thanks @adriangonz, this is a great start! Does the mlserver model repo work the same way as the current kfserver?
Thanks for your comments @yuzisun! To make it more explicit for the end user, do you think it would make sense to have a protocol flag in PredictorExtensionSpec? We could potentially leverage the same flag in Triton to enable/disable the V2 protocol (although I'm not sure if Triton allows that in recent images).
I think that's a good idea; we make it more explicit on the spec, and the user knows which protocol the model server supports.

* tensorflow defaults to v1, no v2 support
* sklearn/xgboost defaults to v1 for now, but the user can specify protocol: v2 to use mlserver
* triton defaults to v2. @deadeyegoodwin, triton no longer supports v1, correct?
Triton only supports the KFServing V2 protocol. V1 support was dropped a couple of months ago.
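For concreteness, a rough sketch of how such a protocol flag on PredictorExtensionSpec could look; the type name, constants, and JSON tags below are assumptions for the discussion rather than a final API:

```go
package v1beta1

// InferenceServiceProtocol selects the data plane protocol (assumed type
// name for this sketch).
type InferenceServiceProtocol string

const (
	ProtocolV1 InferenceServiceProtocol = "v1"
	ProtocolV2 InferenceServiceProtocol = "v2"
)

// PredictorExtensionSpec sketches the fields discussed above; only the
// ProtocolVersion field is new, the others mirror the existing spec.
type PredictorExtensionSpec struct {
	// StorageURI points to the model artifacts.
	StorageURI *string `json:"storageUri,omitempty"`
	// RuntimeVersion selects the serving image tag, letting users opt in
	// to a specific mlserver or sklearnserver release.
	RuntimeVersion *string `json:"runtimeVersion,omitempty"`
	// ProtocolVersion defaults to v1; set to v2 to serve with MLServer.
	ProtocolVersion *InferenceServiceProtocol `json:"protocolVersion,omitempty"`
}
```

A user who wants the V2 data plane would then set protocolVersion: v2 on the predictor spec, while everyone else keeps the V1 default.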
/ok-to-test
/retest

/retest
Hey @yuzisun, I just amended the overlay to add the new image. I've also added a new integration test for the SKLearn predictor with the V2 protocol; a sketch of that flow follows below.
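For reference, a minimal sketch of what such a V2 integration test could look like; the host, model name, and input values are placeholders, and the request shape follows the KFServing V2 dataplane spec:

```go
package v2test

import (
	"bytes"
	"net/http"
	"testing"
)

// TestSKLearnV2Infer posts a V2-dataplane inference request to a deployed
// SKLearn InferenceService. Host and model name are placeholders.
func TestSKLearnV2Infer(t *testing.T) {
	body := []byte(`{
		"inputs": [
			{"name": "input-0", "shape": [1, 4], "datatype": "FP32",
			 "data": [6.8, 2.8, 4.8, 1.4]}
		]
	}`)
	url := "http://sklearn-irisv2.default.example.com/v2/models/sklearn-irisv2/infer"
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200 OK, got %d", resp.StatusCode)
	}
}
```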
/retest
This is really awesome work, thanks @adriangonz! @cliveseldon, if this looks good to you, can you approve?
Indeed. Great addition. /approve
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: adriangonz, cliveseldon. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
What this PR does / why we need it:

Adds support for MLServer (and the V2 dataplane) in the `v1beta1` version of the SKLearn predictor. Note that this PR will still default all `v1beta1` `SKLearnInferenceServices` to use the V1 dataplane. To enable the V2 protocol, there is a new `protocolVersion` field in the predictor spec which can be set to `v2`.

This PR also adds an example worth checking under `./docs/samples/v1beta1/sklearn` showcasing how the integration works.

Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):

Works towards #1111 (it doesn't fix it yet, as it still needs support for XGBoost).
Special notes for your reviewer:

The aim of this PR is to kickstart the discussion on how we want to shape the integration between KFServing and MLServer. This has already surfaced some questions, e.g.:

* Do we want to maintain backwards compatibility in the data plane with previous versions? Yes, for now.
* Do we want a single `mlserver` predictor which supports multiple frameworks?

As such, it would be great to get people's thoughts on this proposal and which aspects they would change. That's also why it only focuses on SKLearn for now. Once we are happy with how it looks, it should be pretty straightforward to extend to XGBoost.
Release note: