
Commit df0519c

[DOCS] Testing trained models in UI (#2119)
1 parent ff07d25 commit df0519c

File tree

2 files changed: 11 additions, 2 deletions

ml-nlp-test-lang-ident.png (binary image, 201 KB)

docs/en/stack/ml/nlp/ml-nlp-deploy-models.asciidoc

Lines changed: 11 additions & 2 deletions
@@ -90,7 +90,6 @@ When you deploy the model, it is allocated to all available {ml} nodes. The
 model is loaded into memory in a native process that encapsulates `libtorch`,
 which is the underlying machine learning library of PyTorch.
 
-//TBD: Are these threading options available in the script and in Kibana?
 You can optionally specify the number of CPU cores it has access to on each node.
 If you choose to optimize for latency (that is to say, inference should return
 as fast as possible), you can increase `inference_threads` to lower latencies.
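
For context, a rough sketch of how the latency option described above might be applied when starting a deployment. The model ID is a placeholder, and passing `inference_threads` as a query parameter of the start trained model deployment API is an assumption based on the surrounding text, not something shown in this commit:

# Sketch only: "my_model" is a placeholder model ID, and the
# inference_threads query parameter is assumed from the docs above.
POST _ml/trained_models/my_model/deployment/_start?inference_threads=4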
@@ -114,7 +113,17 @@ perform inference. _{infer-cap}_ is a {ml} feature that enables you to use your
 trained models to perform NLP tasks (such as text extraction, classification, or
 embeddings) on incoming data.
 
-The simplest method to test your model against new data is to use the
+The simplest method to test your model against new data is to use the
+*Test model* action in {kib}:
+
+[role="screenshot"]
+image::images/ml-nlp-test-lang-ident.png[Testing a French phrase against the language identification trained model in the *{ml}* app]
+
+NOTE: This {kib} functionality is currently available only for the
+`lang_ident_model_1` model and for supported
+<<ml-nlp-model-ref-ner,third party named entity recognition models>>.
+
+Alternatively, you can use the
 {ref}/infer-trained-model-deployment.html[infer trained model deployment API].
 For example, to try a named entity recognition task, provide some sample text:
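
A rough sketch of what such a request might look like, using a placeholder NER model ID and sample sentence; the exact request body shape (a single document versus a `docs` array) depends on the version of the API linked above:

# Sketch only: placeholder model ID and sample text; depending on the
# version, "docs" may need to be an array of documents.
POST _ml/trained_models/my_ner_model/deployment/_infer
{
  "docs": { "text_field": "Elastic was founded in Amsterdam in 2012." }
}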

0 commit comments

Comments
 (0)