docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc (13 changes: 11 additions & 2 deletions)
@@ -90,7 +90,6 @@
When you deploy the model, it is allocated to all available {ml} nodes. The
model is loaded into memory in a native process that encapsulates `libtorch`,
which is the underlying machine learning library of PyTorch.

You can optionally specify the number of CPU cores the model has access to on
each node. If you want to optimize for latency (that is, inference should
return as fast as possible), you can increase `inference_threads` to lower
latencies.
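
As an illustration, a latency-oriented deployment might be started as follows.
The model ID here is only a placeholder, and the exact query parameters
accepted by the start trained model deployment API can differ between
releases, so treat this as a sketch and check the API reference for your
version:

[source,console]
----
// Start the deployment with 4 inference threads per node (example values)
POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/_start?inference_threads=4
----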
@@ -114,7 +113,17 @@
perform inference. _{infer-cap}_ is a {ml} feature that enables you to use your
trained models to perform NLP tasks (such as text extraction, classification, or
embeddings) on incoming data.

The simplest method to test your model against new data is to use the
*Test model* action in {kib}:

[role="screenshot"]
image::images/ml-nlp-test-lang-ident.png[Testing a French phrase against the language identification trained model in the *{ml}* app]

NOTE: This {kib} functionality is currently available only for the
`lang_ident_model_1` model and for supported
<<ml-nlp-model-ref-ner,third party named entity recognition models>>.

Alternatively, you can use the
{ref}/infer-trained-model-deployment.html[infer trained model deployment API].
For example, to try a named entity recognition task, provide some sample text:
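
The sample request itself is collapsed in this diff view. Purely as an
illustration, such a call might look like the following; the model ID is a
placeholder and the request body shape (a `docs` array containing a
`text_field` entry) is an assumption, so confirm the exact format against the
API reference for your version:

[source,console]
----
// Run NER inference on one sample sentence (illustrative values only)
POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/_infer
{
  "docs": [{ "text_field": "Hi my name is Josh and I live in Berlin" }]
}
----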
