
[ML] Explain log rate spikes: Page setup #132121

Merged
merged 22 commits into elastic:main on May 20, 2022

Conversation

walterra
Contributor

@walterra walterra commented May 12, 2022

Summary

Part of #136265.
Follow up to #131317.
Resolves #132126.

This is the second (and should be the last) PR to build out the UI/code boilerplate necessary before we start implementing the feature's own UI on a dedicated page.

  • Updates navigation to bring up data view/saved search selection before moving on to the explain log spike rates page.
  • The bar chart race demo page was moved to the aiops/single_endpoint_streaming_demo url. It is kept in this PR so we have two different pages + API endpoints that use streaming. With this still in place it's easier to update the streaming code to be more generic and reusable.
  • The url/page aiops/explain_log_rate_spikes has been added with a dummy request that slowly streams a data view's fields to the client. This page will host the actual UI to be brought over from the PoC in follow-ups to this PR.
  • The structure to embed aiops plugin pages in the ml plugin has been simplified. Instead of a lot of custom code to load the components at runtime in the aiops plugin itself, this now uses React lazy loading with Suspense, similar to how we load Vega charts in other places. We no longer initialize the aiops client side code during startup of the plugin itself and augment it, instead we statically import components and pass on props/contexts from the ml plugin.
  • The code that handles streaming chunks on the client side in stream_fetch.ts/use_stream_fetch_reducer.ts has been improved to make better use of TS generics: for a given API endpoint it now returns the corresponding data type and only accepts the reducer actions supported by that endpoint. Buffering of client side actions has been tweaked to apply state updates more quickly when updates from the server stall.
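The per-endpoint typing described in the last bullet can be sketched roughly as follows. The endpoint paths, action names, and the `StreamReducer` alias are illustrative placeholders, not the actual Kibana identifiers:

```typescript
// Hypothetical mapping from API endpoint to the reducer actions its
// stream may emit. The paths and action names are illustrative only.
interface EndpointActions {
  '/example/stream_fields': { type: 'add_field'; payload: string };
  '/example/stream_progress': { type: 'update_progress'; payload: number };
}

type Endpoint = keyof EndpointActions;

// A reducer typed for one endpoint only accepts that endpoint's actions,
// so dispatching an unsupported action is a compile-time error.
type StreamReducer<E extends Endpoint, S> = (
  state: S,
  action: EndpointActions[E]
) => S;

interface FieldsState {
  fields: string[];
}

const fieldsReducer: StreamReducer<'/example/stream_fields', FieldsState> = (
  state,
  action
) =>
  action.type === 'add_field'
    ? { fields: [...state.fields, action.payload] }
    : state;

// Replaying a chunk of streamed actions accumulates the state.
const finalState = [
  { type: 'add_field' as const, payload: 'message' },
  { type: 'add_field' as const, payload: '@timestamp' },
].reduce(fieldsReducer, { fields: [] as string[] });
```

Because the action union is derived from the endpoint key, passing a `update_progress` action to `fieldsReducer` would fail type checking rather than silently being ignored at runtime.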
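The buffering tweak can be illustrated with a minimal sketch: streamed actions are collected and applied in batches, with a forced flush available for when server updates stall. The `ActionBuffer` class and its parameters are hypothetical, not the real hook's code, which drives the forced flush from a timer:

```typescript
// Minimal sketch of client-side action buffering. Incoming stream actions
// are collected and handed to `flush` in batches, so state updates happen
// in chunks rather than once per action.
class ActionBuffer<A> {
  private buffer: A[] = [];

  constructor(
    private readonly flush: (actions: A[]) => void,
    private readonly maxBatch = 100
  ) {}

  push(action: A) {
    this.buffer.push(action);
    // A full batch flushes immediately instead of waiting.
    if (this.buffer.length >= this.maxBatch) this.flushNow();
  }

  // Called by the owner (e.g. on a timer) when the stream is stalling.
  flushNow() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.flush(batch);
  }
}

const batches: string[][] = [];
const buffer = new ActionBuffer<string>((batch) => batches.push(batch), 2);
buffer.push('add_field:a');
buffer.push('add_field:b'); // second push fills the batch and flushes it
buffer.push('add_field:c');
buffer.flushNow(); // forced flush drains the remainder
```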


@walterra walterra self-assigned this May 12, 2022
@walterra walterra added release_note:skip Skip the PR/issue when compiling release notes v8.3.0 labels May 12, 2022
@walterra walterra added the :ml label May 12, 2022
@walterra walterra marked this pull request as ready for review May 12, 2022 13:35
@walterra walterra requested a review from a team as a code owner May 12, 2022 13:35
@elasticmachine
Contributor

Pinging @elastic/ml-ui (:ml)

@@ -5,6 +5,16 @@
* 2.0.
*/

import { PluginSetup, PluginStart } from '@kbn/data-plugin/server';

export interface AiopsPluginSetupDeps {

Does this export, and the one below, contribute to the counts of public APIs which don't have comments?

</LazyWrapper>
);

export const SingleEndpointStreamingDemo: FC = () => (

This might need a comment as looks like it's in the public API.


@peteharverson peteharverson left a comment


Tested locally. Overall looks good. Just found a few issues with the breadcrumbs.

@walterra
Contributor Author

@peteharverson Fixed some of the public API issues but wasn't able to resolve everything because it hits uncommented interfaces imported from other plugins.


@peteharverson peteharverson left a comment


Tested latest changes and LGTM


@alvarezmelissa87 alvarezmelissa87 left a comment


LGTM ⚡

@kibana-ci
Collaborator

💚 Build Succeeded

Metrics [docs]

Module Count

Fewer modules lead to a faster build time

id before after diff
aiops 77 78 +1
ml 1610 1612 +2
total +3

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

id before after diff
aiops 218.1KB 219.0KB +933.0B
ml 3.3MB 3.3MB +2.0KB
total +2.9KB

Page load bundle

Size of the bundles that are downloaded on every page load. Target size is below 100kb

id before after diff
aiops 3.8KB 3.9KB +34.0B
ml 40.0KB 40.4KB +397.0B
total +431.0B
Unknown metric groups

API count

id before after diff
aiops 10 12 +2

async chunk count

id before after diff
aiops 1 3 +2

ESLint disabled line counts

id before after diff
aiops 3 6 +3

Total ESLint disabled count

id before after diff
aiops 3 6 +3

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

cc @walterra

@walterra walterra merged commit 24bdc97 into elastic:main May 20, 2022
@kibanamachine kibanamachine added the backport:skip This commit does not require backporting label May 20, 2022
j-bennet pushed a commit to j-bennet/kibana that referenced this pull request Jun 2, 2022
@walterra walterra deleted the ml-aiops-plugin-api branch June 21, 2022 13:52
Labels
backport:skip This commit does not require backporting :ml release_note:skip Skip the PR/issue when compiling release notes v8.3.0
Development

Successfully merging this pull request may close these issues.

[ML] Explain log rate spikes: Page setup
6 participants