@@ -90,7 +90,6 @@ When you deploy the model, it is allocated to all available {ml} nodes. The
 model is loaded into memory in a native process that encapsulates `libtorch`,
 which is the underlying machine learning library of PyTorch.
 
-//TBD: Are these threading options available in the script and in Kibana?
 You can optionally specify the number of CPU cores it has access to on each node.
 If you choose to optimize for latency (that is to say, inference should return
 as fast as possible), you can increase `inference_threads` to lower latencies.
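For reference, the threading options mentioned above are set when the deployment is started. A minimal sketch of such a request, assuming `my_ner_model` is a placeholder model ID and that `inference_threads` is accepted as a query parameter (parameter names have changed across stack versions):

[source,console]
----
// `my_ner_model` is a placeholder; substitute a deployed model ID.
POST _ml/trained_models/my_ner_model/deployment/_start?inference_threads=4
----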
@@ -114,7 +113,17 @@ perform inference. _{infer-cap}_ is a {ml} feature that enables you to use your
 trained models to perform NLP tasks (such as text extraction, classification, or
 embeddings) on incoming data.
 
-The simplest method to test your model against new data is to use the
+The simplest method to test your model against new data is to use the
+*Test model* action in {kib}:
+
+[role="screenshot"]
+image::images/ml-nlp-test-lang-ident.png[Testing a French phrase against the language identification trained model in the *{ml}* app]
+
+NOTE: This {kib} functionality is currently available only for the
+`lang_ident_model_1` model and for supported
+<<ml-nlp-model-ref-ner,third party named entity recognition models>>.
+
+Alternatively, you can use the
 {ref}/infer-trained-model-deployment.html[infer trained model deployment API].
 For example, to try a named entity recognition task, provide some sample text:
 
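The sample request referenced at the end of this hunk falls outside the diff and is not shown. A rough sketch of such a named entity recognition call, assuming a placeholder model ID and the `docs`/`text_field` request body shape; consult the linked API reference for the exact syntax of your stack version:

[source,console]
----
// `my_ner_model` is a placeholder; the body shape may differ by version.
POST _ml/trained_models/my_ner_model/deployment/_infer
{
  "docs": [
    {
      "text_field": "Elastic is headquartered in Mountain View, California."
    }
  ]
}
----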