[ML] Pass inference timeout to start deployment #116725
Merged
Conversation
Pinging @elastic/ml-core (Team:ML)
davidkyle commented Nov 13, 2024
 * @param modelVariant The configuration of the model variant to be downloaded
 * @param listener The listener
 */
default void putModel(Model modelVariant, ActionListener<Boolean> listener) {
This method is only used by the ElasticsearchInternalService and does not need to be part of the InferenceService interface
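To make the suggestion concrete, here is a minimal sketch (simplified names and bodies, not the actual Elasticsearch source): putModel is declared on the concrete ElasticsearchInternalService rather than as a default method on the InferenceService interface, since no other service implementation downloads Elasticsearch-hosted models.

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.inference.Model;

// Sketch only: putModel lives on the concrete service rather than on the
// InferenceService interface, because only this service needs it.
public class ElasticsearchInternalService /* implements InferenceService */ {

    /**
     * Download the model variant so it can be deployed locally.
     *
     * @param modelVariant The configuration of the model variant to be downloaded
     * @param listener     The listener
     */
    public void putModel(Model modelVariant, ActionListener<Boolean> listener) {
        // placeholder for the real download flow; other InferenceService
        // implementations never call this, so the interface need not expose it
        listener.onResponse(true);
    }
}
```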
prwhelan approved these changes Nov 13, 2024
💚 Backport successful
davidkyle added a commit to davidkyle/elasticsearch that referenced this pull request Nov 13, 2024
Default inference endpoints automatically deploy the model on inference. The inference timeout is now passed to start model deployment so users can control that timeout.
davidkyle added a commit to davidkyle/elasticsearch that referenced this pull request Nov 13, 2024
…)" This reverts commit 59602a9.
smalyshev pushed a commit to smalyshev/elasticsearch that referenced this pull request Nov 13, 2024
Default inference endpoints automatically deploy the model on inference. The inference timeout is now passed to start model deployment so users can control that timeout.
afoucret pushed a commit to afoucret/elasticsearch that referenced this pull request Nov 14, 2024
Default inference endpoints automatically deploy the model on inference. The inference timeout is now passed to start model deployment so users can control that timeout.
elasticsearchmachine pushed a commit that referenced this pull request Nov 14, 2024
* [ML] Pass inference timeout to start deployment (#116725) Default inference endpoints automatically deploy the model on inference. The inference timeout is now passed to start model deployment so users can control that timeout. * handle max time --------- Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
alexey-ivanov-es pushed a commit to alexey-ivanov-es/elasticsearch that referenced this pull request Nov 28, 2024
Default inference endpoints automatically deploy the model on inference. The inference timeout is now passed to start model deployment so users can control that timeout.
Labels
auto-backport
Automatically create backport pull requests when merged
:ml
Machine learning
>non-issue
Team:ML
Meta label for the ML team
v8.17.0
v9.0.0
Default inference endpoints automatically deploy the model on inference. If the model download has already been started, deploying the model should wait for the download to complete. This does happen, but with a default timeout of 30 seconds, which in some cases is not long enough.
The timeout parameter from the inference request is now used in the start deployment request. Semantic text sets the timeout to a large value so it will not time out; other clients can control this with the inference timeout parameter.
Non-issue, as the code is behind a feature flag.
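A rough sketch of the behaviour described above (hypothetical helper, not the exact production code): the start-deployment request uses the timeout from the inference request when one is supplied, instead of always waiting with the fixed 30-second default.

```java
import org.elasticsearch.core.TimeValue;

// Hypothetical illustration: forward the caller's inference timeout to the
// start-deployment request so waiting for a model download is bounded by the
// user's timeout rather than a hard-coded 30 seconds.
class StartDeploymentTimeoutSketch {

    static final TimeValue DEFAULT_TIMEOUT = TimeValue.timeValueSeconds(30);

    // Before this change the deployment always waited with DEFAULT_TIMEOUT;
    // now the inference request's timeout wins when it is provided.
    static TimeValue deploymentTimeout(TimeValue inferenceTimeout) {
        return inferenceTimeout != null ? inferenceTimeout : DEFAULT_TIMEOUT;
    }

    public static void main(String[] args) {
        // semantic_text-style caller: a very large timeout so the download does
        // not time out; other clients pass their own inference timeout parameter.
        System.out.println(deploymentTimeout(TimeValue.timeValueHours(2)));   // 2h
        System.out.println(deploymentTimeout(null));                          // 30s
    }
}
```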