
Prevent experimental features from being enabled by default by setting multiModelServer flags to false #1375

Merged (4 commits) on Feb 12, 2021

Conversation

@abchoo (Contributor) commented Feb 11, 2021

What this PR does / why we need it: By default, the flags in inferenceservice.yaml for SKLearn, XGBoost, LightGBM, and Triton are set to true, which means multi-model serving (MMS) is enabled without users knowing. This PR sets those flags to false so that MMS must be enabled explicitly.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:

  1. Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.

Release note:

Using MMS now requires the corresponding flags to be explicitly enabled in inferenceservice.yaml
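
The change described above amounts to flipping per-framework booleans in the config. A minimal sketch of what such an entry could look like after this PR (the field layout and `multiModelServer` key name here are illustrative assumptions, not the exact schema of inferenceservice.yaml):

```yaml
# Hypothetical fragment of inferenceservice.yaml.
# Each predictor's multi-model-server flag now defaults to false,
# so MMS stays off unless a user opts in explicitly.
predictors:
  sklearn:
    multiModelServer: false
  xgboost:
    multiModelServer: false
  lightgbm:
    multiModelServer: false
  triton:
    multiModelServer: false
```

Defaulting experimental flags to off is the safer choice: users who want MMS opt in deliberately, and everyone else keeps the stable single-model behavior.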

@aws-kf-ci-bot (Contributor) commented:
Hi @abchoo. Thanks for your PR.

I'm waiting for a kubeflow member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@abchoo (Contributor, Author) commented Feb 11, 2021

/assign @yuzisun @yuzliu

@yuzliu (Contributor) commented Feb 11, 2021

/LGTM

@yuzisun (Member) commented Feb 11, 2021

@abchoo What happens when a user submits a trained model while the flag is off?

@yuzisun (Member) commented Feb 11, 2021

@abchoo Can you help regenerate the 0.5 release yaml by running `hack/generate-install.sh v0.5.0`?

@yuzliu (Contributor) commented Feb 11, 2021

> @abchoo Can you help regenerate the 0.5 release yaml by running `hack/generate-install.sh v0.5.0`?

@yuzisun Should we cut release after the validation webhook is in?

@yuzisun (Member) commented Feb 11, 2021

/ok-to-test

@abchoo (Contributor, Author) commented Feb 11, 2021

/retest

@abchoo (Contributor, Author) commented Feb 11, 2021

/retest

@yuzisun (Member) commented Feb 12, 2021

Thanks @abchoo!

/lgtm
/approve

@k8s-ci-robot (Contributor) commented:
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abchoo, yuzisun, yuzliu

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of the affected files.

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

5 participants