[Enhancement] Auto-deploy ML Model when predict #1148

Closed
Zhangxunmt opened this issue Jul 20, 2023 · 4 comments
Labels: enhancement (New feature or request), v2.13.0 (Issues targeting release v2.13.0)

Comments

Zhangxunmt (Collaborator) commented Jul 20, 2023

Currently, ML models are manually "deployed" (loaded into memory), which requires customers to invoke the "deploy" API before using any ML model. Likewise, after usage, ml-common requires end users to manually "undeploy" ("unload") the model. This adds overhead for both the system and end users in any workflow that uses ml-common.

We should build an auto-deploy mechanism that removes these "deploy" and "undeploy" operations from the workflow. Instead, we should auto-deploy a model the first time a customer uses it, and set up a TTL to auto-undeploy it afterwards. That way, the deploy and undeploy APIs can be dropped from the workflow and the user experience becomes much simpler.
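
For context, the manual lifecycle this proposal wants to remove looks roughly like this (a minimal sketch in Python against the ML Commons REST API, assuming a local cluster on localhost:9200, a placeholder model ID, and a text-embedding-style predict body):

```python
import requests

BASE = "http://localhost:9200/_plugins/_ml"  # assumes a local dev cluster
MODEL_ID = "my-model-id"  # placeholder; obtained from the register/upload flow

# Today the model must first be deployed (loaded into memory) explicitly...
requests.post(f"{BASE}/models/{MODEL_ID}/_deploy")

# ...before any prediction can run against it. The body shape depends on
# the model type; text_docs fits a text-embedding model.
requests.post(f"{BASE}/models/{MODEL_ID}/_predict",
              json={"text_docs": ["hello world"]})

# And after use, the caller is responsible for unloading it again.
requests.post(f"{BASE}/models/{MODEL_ID}/_undeploy")
```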

Zhangxunmt added the bug (Something isn't working) and enhancement (New feature or request) labels Jul 20, 2023
Zhangxunmt self-assigned this Jul 20, 2023
hijakk commented Jul 29, 2023

+1, automating the management of model availability would significantly simplify operations.

ylwu-amzn removed the bug (Something isn't working) label Aug 2, 2023
Zhangxunmt changed the title from [Improvement] Auto-deploy ML Model with TTL in the memory to [Enhancement] Auto-deploy ML Model with TTL in the memory Aug 25, 2023
ylwu-amzn added the v2.13.0 (Issues targeting release v2.13.0) label Feb 21, 2024
owaiskazi19 (Member)

@Zhangxunmt thanks for the proposal. This looks like a much-requested feature. A couple of questions:

  1. What will the API experience look like for the auto-deploy model? Would we have a param when registering a model, something like _register?auto_deploy=true? (See the sketch after this list.)

  2. Will we still support the deploy API for users to deploy a model manually, or will we deprecate it? As part of automating the setup of ml-commons, we support DeployModelStep and UndeployModelStep in the flow framework; we might need to deprecate them there as well.
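
To make question 1 concrete, here is a minimal sketch of what that registration call might look like in Python. The auto_deploy query param is only the suggestion from this comment, not a confirmed API surface; the body fields follow the shape of a pretrained-model registration:

```python
import requests

BASE = "http://localhost:9200/_plugins/_ml"  # assumes a local dev cluster

# Hypothetical: auto_deploy as a query param on _register, per question 1.
resp = requests.post(
    f"{BASE}/models/_register",
    params={"auto_deploy": "true"},  # suggested param, not a confirmed API
    json={
        "name": "huggingface/sentence-transformers/all-MiniLM-L6-v2",
        "version": "1.0.1",
        "model_format": "TORCH_SCRIPT",
    },
)
print(resp.json())  # registration is async; returns a task_id to poll
```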

Zhangxunmt (Collaborator, Author)

@owaiskazi19, BWC is preserved; nothing changes on your side. You can still set up deploy and undeploy steps in the flow framework, the API experience remains the same, and model registration with _register?auto_deploy=true is still valid.

This change only covers events such as cluster scale-up/scale-down, restarts, and node replacement: we auto-deploy the model at the "Prediction" stage so customers don't have to keep manually redeploying after each such event.
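
For reference, a minimal sketch of that predict-time experience. The cluster setting name follows the ML Commons docs for the 2.13 feature; treat it as an assumption in the context of this thread:

```python
import requests

BASE = "http://localhost:9200"  # assumes a local dev cluster
MODEL_ID = "my-model-id"  # placeholder

# Enable auto-deploy cluster-wide. Setting name per the ML Commons docs
# for the eventual 2.13 feature; an assumption at the time of this thread.
requests.put(
    f"{BASE}/_cluster/settings",
    json={"persistent": {"plugins.ml_commons.model_auto_deploy.enable": True}},
)

# With auto-deploy on, a predict call against an undeployed model (e.g. after
# a node restart) should trigger deployment transparently instead of failing.
resp = requests.post(
    f"{BASE}/_plugins/_ml/models/{MODEL_ID}/_predict",
    json={"text_docs": ["hello world"]},  # body shape depends on model type
)
print(resp.status_code)
```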

ylwu-amzn changed the title from [Enhancement] Auto-deploy ML Model with TTL in the memory to [Enhancement] Auto-deploy ML Model when predict Mar 19, 2024
ylwu-amzn added this to 2.13.0 (release window opens March 19, 2024, closes April 2, 2024) in OpenSearch Project Roadmap Mar 19, 2024
jngz-es (Collaborator) commented Apr 1, 2024

#2206

Projects
OpenSearch Project Roadmap: 2.13.0 (Launched), Status: Released

Development
No branches or pull requests

5 participants