diff --git a/docs/en/stack/ml/nlp/images/ml-nlp-test-lang-ident.png b/docs/en/stack/ml/nlp/images/ml-nlp-test-lang-ident.png
new file mode 100644
index 000000000..20d76a3aa
Binary files /dev/null and b/docs/en/stack/ml/nlp/images/ml-nlp-test-lang-ident.png differ
diff --git a/docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc b/docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc
index 44c9baaad..5a4c6929c 100644
--- a/docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc
+++ b/docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc
@@ -90,7 +90,6 @@
 When you deploy the model, it is allocated to all available {ml} nodes. The
 model is loaded into memory in a native process that encapsulates `libtorch`,
 which is the underlying machine learning library of PyTorch.
-//TBD: Are these threading options available in the script and in Kibana?
 You can optionally specify the number of CPU cores it has access to on each
 node. If you choose to optimize for latency (that is to say, inference should
 return as fast as possible), you can increase `inference_threads` to lower latencies.
@@ -114,7 +113,17 @@
 perform inference.
 _{infer-cap}_ is a {ml} feature that enables you to use your trained models to
 perform NLP tasks (such as text extraction, classification, or embeddings) on
 incoming data.
-The simplest method to test your model against new data is to use the
+The simplest method to test your model against new data is to use the
+*Test model* action in {kib}:
+
+[role="screenshot"]
+image::images/ml-nlp-test-lang-ident.png[Testing a French phrase against the language identification trained model in the *{ml}* app]
+
+NOTE: This {kib} functionality is currently available only for the
+`lang_ident_model_1` model and for supported
+<>.
+
+Alternatively, you can use the
 {ref}/infer-trained-model-deployment.html[infer trained model deployment API].
 For example, to try a named entity recognition task, provide some sample text:
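
For reviewers: the API alternative that the added text links to accepts a request of roughly this shape. This is a sketch based on the linked reference page, not part of the patch; `my_ner_model` is a placeholder deployment ID, and the exact body format may vary by stack version:

[source,console]
----
POST _ml/trained_models/my_ner_model/deployment/_infer
{
  "input": "Sarah lives in Paris and works for Elastic"
}
----

The response contains the recognized entities with their types and offsets, which is what the "provide some sample text" example in the changed section goes on to show.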