[ML] Include model definition install status for Pytorch models #95271
Conversation
Documentation preview:
Pinging @elastic/ml-core (Team:ML)
Hi @davidkyle, I've created a changelog YAML for you.
docs/reference/ml/trained-models/apis/get-trained-models.asciidoc
LGTM if you could just fix the incomplete sentence in the docs before merging.
Co-authored-by: David Roberts <dave.roberts@elastic.co>
I wasn't happy with I've changed the request parameter from
@elasticmachine update branch
…tic#95271) Adds a new include flag definition_status to the GET trained models API. When present, the trained model configuration returned in the response will have the new boolean field fully_defined if the full model definition exists.
Adds a new include flag `definition_status` to the GET trained models API. When the flag is present, the trained model configuration returned in the response will have the new boolean field `fully_defined` if the full model definition is present. The API is not compatible with wildcards or multiple model IDs: `model_id` must map to a single model with `model_type == PYTORCH`.
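Based on the description above, usage might look like the following sketch. The model ID is hypothetical, and the response body is trimmed to the fields relevant here; the exact response shape may differ.

```console
GET _ml/trained_models/my-pytorch-model?include=definition_status

# Example (trimmed) response sketch:
{
  "count": 1,
  "trained_model_configs": [
    {
      "model_id": "my-pytorch-model",
      "model_type": "pytorch",
      "fully_defined": true
    }
  ]
}
```

A `fully_defined` value of `false` would indicate that not all chunks of the model definition have been installed yet.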