
[ML] add new trained_models/{model_id}/_infer endpoint for all supervised models and deprecate deployment infer api #86361

Merged (15 commits, May 5, 2022)

Conversation

benwtrent (Member)

This commit adds a new _ml/trained_models/{model_id}/_infer API. It works for both native NLP models and supervised models trained via data frame analytics.

The format of the API is the same as the old _ml/trained_models/{model_id}/deployment/_infer, taking docs and inference_config parameters.

This PR also deprecates the old, experimental _ml/trained_models/{model_id}/deployment/_infer API.

The biggest difference is that the response now nests all results under an "inference_results" object.
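The response-shape change can be sketched in miniature. This is an illustrative Python snippet, not Elasticsearch source; the per-document field name `predicted_value` is a hypothetical placeholder:

```python
# Illustrative sketch of the response-shape change: the new _infer API
# nests every per-document result under a single "inference_results" key,
# where the old deployment/_infer API returned results at the top level.

def wrap_inference_results(per_doc_results):
    """Wrap raw per-document results in the new response envelope."""
    return {"inference_results": per_doc_results}

old_style = [{"predicted_value": "positive"}]  # hypothetical result fields
new_style = wrap_inference_results(old_style)
print(new_style)  # {'inference_results': [{'predicted_value': 'positive'}]}
```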

closes: #86032

@benwtrent added the >enhancement, :ml (Machine learning), Team:Clients (Meta label for clients team), and v8.3.0 labels on May 2, 2022
@elasticmachine added the Team:ML (Meta label for the ML team) label on May 2, 2022
elasticmachine (Collaborator)

Pinging @elastic/clients-team (Team:Clients)

elasticmachine (Collaborator)

Pinging @elastic/ml-core (Team:ML)

elasticsearchmachine (Collaborator)

Hi @benwtrent, I've created a changelog YAML for you.

benwtrent (Member, Author)

Should wait until all the work in elastic/ml-cpp#2258 is completed and the pytorch tests are unmuted. That way we can fully test the functionality in upgrade tests and integration tests.

Comment on lines 35 to +36
public static final String NAME = "cluster:internal/xpack/ml/inference/infer";
public static final String EXTERNAL_NAME = "cluster:monitor/xpack/ml/inference/infer";
benwtrent (Member, Author)

We needed an external name for interaction at the REST layer, and since inference needs to work during a rolling upgrade, we cannot get rid of the old action name.

So, both names point to effectively the same transport class.
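The pattern described here can be sketched in miniature: the old internal action name and the new external name both resolve to the same handler, so requests arriving under either name during a rolling upgrade behave identically. This is an illustration of the idea only, not how Elasticsearch actually registers transport actions:

```python
# Both action names (taken from the snippet above) map to one handler,
# mimicking two action registrations backed by the same transport class.

def infer_handler(request):
    # Placeholder: a real handler would run inference on request["docs"].
    return {"inference_results": []}

ACTIONS = {
    "cluster:internal/xpack/ml/inference/infer": infer_handler,  # old internal name
    "cluster:monitor/xpack/ml/inference/infer": infer_handler,   # new external name
}

def dispatch(action_name, request):
    # Look up the handler registered for this action name and invoke it.
    return ACTIONS[action_name](request)
```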

@benwtrent force-pushed the feature/ml-single-infer-endpoint branch from 90ae4b7 to 2fd8ac9 on May 2, 2022 at 20:13
sethmlarson (Contributor) left a comment

LGTM from an API perspective

benwtrent (Member, Author)

@elasticmachine update branch

davidkyle (Member) left a comment

LGTM


[source,console]
--------------------------------------------------
POST _ml/trained_models/model2/_infer
Member

Can you add an example using the built-in lang_ident model, please?

@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject();
builder.field("inference_results", inferenceResults.stream().map(InferenceResults::asMap).collect(Collectors.toList()));
Member
Alternatively, inferenceResults, which implement ToXContentFragment, could be mapped to ToXContentObject:

static ToXContentObject toXContentObject(ToXContentFragment fragment) {
    return (builder, params) -> {
        builder.startObject();
        fragment.toXContent(builder, params);
        builder.endObject();
        return builder;
    };
}

and

builder.field("inference_results", inferenceResults.stream().map(Response::toXContentObject).collect(Collectors.toList()));

lcawl (Contributor) commented May 5, 2022

If we're keeping both the old and new doc pages (with the old API marked as deprecated), shouldn't the same be done for rest-api-spec/src/main/resources/rest-api-spec/api/ml.infer_trained_model.json? (i.e., leave the existing .json file and just add the "deprecated" property). That way, when we get to the point of generating docs from those specs, it'll match.

benwtrent (Member, Author)

> That way when we get to the point of generating docs from those specs, it'll match.

I don't know. We don't have a deprecated flag to allow multiple endpoints in the same doc, but we do have that for the spec. Regardless, these are still experimental APIs.

Development

Successfully merging this pull request may close these issues.

[ML] Extend the _infer endpoint to support all model types
6 participants