
[ML] Hide inference stats for PyTorch models #160599

Merged
merged 5 commits into elastic:main from ml-157385-hide-inference-stats on Jun 28, 2023

Conversation

darnautov (Contributor) commented on Jun 27, 2023

Summary

Resolves #157385

Hides inference stats for the PyTorch models.

  • The salient information (inference_count, timestamp) repeats what is already displayed in the Deployment Stats section.
  • missing_all_fields_count is confusing because PyTorch models take a single input field rather than multiple fields as DFA models do, so it has been omitted.
  • The deployment stats include an error_count field, so it has been added to the Deployment Stats section and failure_count has been removed.
  • The stats tab is now displayed by default for expanded rows if the model has started deployments (see the sketch below).
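A rough TypeScript sketch of the resulting tab behaviour (names and shapes are illustrative, not the actual Kibana code): the inference stats tab is omitted for PyTorch models, and the stats tab is selected by default when the model has a started deployment.

```ts
interface Tab {
  id: string;
  name: string;
}

interface ModelItem {
  model_type: string; // e.g. 'pytorch'
  state?: string;     // e.g. 'started'
}

function getExpandedRowTabs(item: ModelItem): { tabs: Tab[]; initialTab: Tab } {
  const tabs: Tab[] = [
    { id: 'details', name: 'Details' },
    { id: 'stats', name: 'Stats' },
  ];

  // PyTorch models repeat the salient inference stats in Deployment Stats,
  // so the separate Inference Stats tab is only added for other model types.
  if (item.model_type !== 'pytorch') {
    tabs.push({ id: 'inference_stats', name: 'Inference Stats' });
  }

  // Open the Stats tab by default when a deployment is running.
  const initialTab =
    item.state === 'started' ? tabs.find((t) => t.id === 'stats') ?? tabs[0] : tabs[0];

  return { tabs, initialTab };
}
```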

@darnautov darnautov self-assigned this Jun 27, 2023
@darnautov darnautov requested a review from a team as a code owner June 27, 2023 10:04
elasticmachine (Contributor) commented:

Pinging @elastic/ml-ui (:ml)

```ts
]);

const initialSelectedTab =
  item.state === 'started' ? tabs.find((t) => t.id === 'stats') : tabs[0];
```
jgowdyelastic (Member) suggested a change:

```diff
- item.state === 'started' ? tabs.find((t) => t.id === 'stats') : tabs[0];
+ item.state === DEPLOYMENT_STATE.STARTED ? tabs.find((t) => t.id === 'stats') : tabs[0];
```

darnautov (Contributor, author) replied:

@jgowdyelastic I wonder if it is actually worth an extra import? It's type-safe already and in case we want to rename it one day, it's going to be easy because the value is derived from a type definition.
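A minimal sketch of the type-safety point, assuming the state field is typed as a union of string literals (illustrative types, not the actual Kibana definitions): comparing against a plain string literal is already checked by the compiler, so a typo fails to compile even without importing a constant.

```ts
type DeploymentState = 'started' | 'starting' | 'stopping';

interface ModelItem {
  state?: DeploymentState;
}

function pickInitialTab(item: ModelItem, tabs: Array<{ id: string }>) {
  // OK: 'started' is a member of the DeploymentState union.
  const started = item.state === 'started';

  // Would not compile: 'startd' has no overlap with DeploymentState.
  // const typo = item.state === 'startd';

  return started ? tabs.find((t) => t.id === 'stats') ?? tabs[0] : tabs[0];
}
```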

jgowdyelastic (Member) replied:

I think the types aren't quite correct for this; when I opened it in an editor, `item.state` had an `any` type on this line.
But yeah, if the state type can be corrected, not adding the constant to the file makes sense.

darnautov (Contributor, author) replied:
Just checked; the issue is the MODEL_STATE dictionary, which contains i18n strings. I guess I need to separate the string literals from the actual translation.

darnautov (Contributor, author) replied:
@jgowdyelastic fixed it by making it a regular string literal. There is no need for translation because other deployment state values come from the ML backend and are not translated at the moment.

[screenshot attached]
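A minimal sketch of the fix described above, with illustrative names: defining the deployment states as plain string literals (instead of wrapping the values in i18n calls) lets TypeScript derive a literal union type, so comparisons like `item.state === 'started'` stay type-checked rather than degrading to `any`.

```ts
export const DEPLOYMENT_STATE = {
  STARTED: 'started',
  STARTING: 'starting',
  STOPPING: 'stopping',
} as const;

// Derived type: 'started' | 'starting' | 'stopping'
export type DeploymentState = (typeof DEPLOYMENT_STATE)[keyof typeof DEPLOYMENT_STATE];
```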

peteharverson (Contributor) left a review:
Tested and LGTM

jgowdyelastic (Member) left a review:
LGTM

@darnautov darnautov enabled auto-merge (squash) June 28, 2023 11:52
kibana-ci (Collaborator) commented:

💚 Build Succeeded

Metrics [docs]

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

| id | before | after | diff |
| --- | --- | --- | --- |
| enterpriseSearch | 2.5MB | 2.5MB | -212.0B |
| ml | 3.4MB | 3.4MB | +183.0B |
| total | | | -29.0B |

Unknown metric groups

ESLint disabled line counts

| id | before | after | diff |
| --- | --- | --- | --- |
| enterpriseSearch | 14 | 16 | +2 |
| securitySolution | 413 | 417 | +4 |
| total | | | +6 |

Total ESLint disabled count

| id | before | after | diff |
| --- | --- | --- | --- |
| enterpriseSearch | 15 | 17 | +2 |
| securitySolution | 492 | 496 | +4 |
| total | | | +6 |

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

cc @darnautov

@darnautov darnautov merged commit 4064e2b into elastic:main Jun 28, 2023
20 checks passed
kibanamachine (Contributor) commented:

💔 All backports failed

| Status | Branch | Result |
| --- | --- | --- |
| | 8.9 | Backport failed because of merge conflicts |

Manual backport

To create the backport manually run:

node scripts/backport --pr 160599

Questions?

Please refer to the Backport tool documentation

@darnautov darnautov deleted the ml-157385-hide-inference-stats branch June 28, 2023 13:31
rshen91 pushed a commit that referenced this pull request Jun 28, 2023
darnautov (Contributor, author) commented:

💚 All backports created successfully

| Status | Branch | Result |
| --- | --- | --- |
| | 8.9 | |

Note: Successful backport PRs will be merged automatically after passing CI.

Questions?

Please refer to the Backport tool documentation

darnautov added a commit to darnautov/kibana that referenced this pull request Jun 29, 2023

(cherry picked from commit 4064e2b)

Conflicts:
- x-pack/plugins/ml/public/application/model_management/expanded_row.tsx
darnautov added a commit that referenced this pull request Jun 29, 2023
# Backport

This will backport the following commits from `main` to `8.9`:
- [[ML] Hide inference stats for PyTorch models (#160599)](#160599)


### Questions?
Please refer to the [Backport tool
documentation](https://github.com/sqren/backport)


Successfully merging this pull request may close these issues.

[ML] cache_miss_count is highly misleading for PyTorch models
6 participants