
[Search] Use session service on a dashboard #81297

Merged: 18 commits merged on Oct 29, 2020
Conversation

@Dosant (Contributor) commented Oct 21, 2020

Release note

Adds Search Sessions support to Kibana Dashboards

Summary

Follow-up to #76889
Previous version of this PR with some comments: #81192
Part of #61738
Next step: restore searchSessionId from the URL: #81489

Some context for reviewers

This is part of client-side background search. To save or restore a background search, searches will be grouped under a session. Session creation on every search in Discover was implemented in #76889. This PR creates a new search session on every dashboard container state change and propagates the searchSessionId through the embeddables to the search infrastructure (sketched below).
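
Roughly, the flow looks like this. This is an illustrative sketch only, not the exact code in this PR; `SessionServiceLike`, `DashboardContainerLike`, and `refreshDashboardContainer` are assumed stand-ins for the real services:

```ts
// Illustrative sketch only: the names below are assumptions, not the PR's code.
interface SessionServiceLike {
  /** Starts a new search session and returns its id. */
  start(): string;
}

interface DashboardContainerLike {
  updateInput(changes: Record<string, unknown>): void;
}

// Called from the single place where the dashboard container input is updated
// (query / filters / time range changes, explicit refresh, ...).
function refreshDashboardContainer(
  container: DashboardContainerLike,
  session: SessionServiceLike,
  changes: Record<string, unknown>
): void {
  // Every refetch-worthy state change starts a new session, so all searches
  // issued by the panels for this refresh share a single searchSessionId.
  const searchSessionId = session.start();
  container.updateInput({ ...changes, searchSessionId });
}
```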

Implementation notes:

  1. Explicitly passes sessionId as embeddable input (see the sketch after this list):
  • Visualize
  • Lens
  • Search
  • Maps: TODO, probably better done separately
  • Anything else?
  2. For now this session ID is only needed to group error toasts: for example, two failed searches produce a single toast (see the screenshot below). Before this PR you would see two separate toasts. The underlying code for grouping errors was added in [Search] Client side session service #76889

(Screenshot: a single error toast shown for two failed panel searches)
  3. Added the sessionId to the inspector display and used it in a functional test. I also think it will be useful for debugging and support purposes.
This is how it looks:

(Screenshot: searchSessionId shown in the inspector requests view)
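
The per-embeddable wiring for item 1 (Visualize, Lens, Search) differs in detail; the idea is roughly the following. The simplified types and the `fetch` helper are assumptions for illustration, not the real EmbeddableInput code:

```ts
// Simplified illustration; the real EmbeddableInput and per-embeddable code differ.
interface EmbeddableInputLike {
  timeRange?: unknown;
  query?: unknown;
  filters?: unknown[];
  searchSessionId?: string; // the field this PR adds to the input
}

type SearchFn = (params: { sessionId?: string }) => Promise<unknown>;

class PanelEmbeddableSketch {
  constructor(private input: EmbeddableInputLike, private search: SearchFn) {}

  updateInput(input: EmbeddableInputLike): void {
    this.input = input;
  }

  // When the panel fetches data, the current searchSessionId from the input is
  // forwarded to the search call, so the search infrastructure can group all
  // requests (and their error toasts) under one session.
  fetch(): Promise<unknown> {
    return this.search({ sessionId: this.input.searchSessionId });
  }
}
```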

Functional test:

Added a test that checks (a rough outline follows this list):

  1. When multiple panels fail, there is only a single toast
  2. The sessionId in the inspector is the same for different panels
  3. After a state change and refetch, the sessionId changes
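
A rough outline of what checks 2 and 3 can look like as a functional test. The `getSearchSessionIdByPanel` helper is the one visible in the diff further down; the other names (second panel title, `changeDashboardState`, suite wiring) are assumptions:

```ts
// Outline only; panel titles and helpers other than getSearchSessionIdByPanel are assumptions.
import expect from '@kbn/expect';

// Assumed declarations: the real helper (added in this PR) reads the id from the
// panel's inspector requests view.
declare function getSearchSessionIdByPanel(panelTitle: string): Promise<string>;
declare function changeDashboardState(): Promise<void>; // e.g. change the time range

describe('dashboard search sessions', () => {
  it('uses one sessionId per refresh and renews it on state change', async () => {
    const panel1SessionId = await getSearchSessionIdByPanel('Sum of Bytes by Extension');
    const panel2SessionId = await getSearchSessionIdByPanel('Some other panel');
    // 2. the sessionId in the inspector is the same for different panels
    expect(panel1SessionId).to.be(panel2SessionId);

    // 3. after a state change and refetch the sessionId changes
    await changeDashboardState();
    const newSessionId = await getSearchSessionIdByPanel('Sum of Bytes by Extension');
    expect(newSessionId).not.to.be(panel1SessionId);
  });
});
```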

How to test

  • Go to a dashboard and check that the searchSessionId changes on different state changes. You can use the inspector to see it.

Open questions / edge cases to iterate over later

  • Should the refresh interval create a new sessionId? Currently it doesn't on a dashboard.
  • Internal embeddable state changes, like an embeddable's own filters, don't create a new sessionId. This is fine.


@Dosant changed the title from "[Search] Use session service on a dashboard" to "[Search] Use session service on a dashboard 2" on Oct 21, 2020
@Dosant changed the title from "[Search] Use session service on a dashboard 2" to "[Search] Use session service on a dashboard" on Oct 21, 2020
@Dosant Dosant added Feature:Search Querying infrastructure in Kibana Team:AppArch v7.11.0 v8.0.0 labels Oct 21, 2020
@@ -23,7 +26,7 @@ export default function ({ getService, getPageObjects }: FtrProviderContext) {

it('delayed should load', async () => {
await PageObjects.common.navigateToApp('dashboard');
- await PageObjects.dashboard.gotoDashboardEditMode('Delayed 5s');
+ await PageObjects.dashboard.loadSavedDashboard('Delayed 5s');
@Dosant (Contributor, Author) commented Oct 21, 2020:

@lukasolson, I doubt that gotoDashboardEditMode was intentional, so I changed it to loadSavedDashboard.

loadSavedDashboard is faster because we don't have to wait for the initial load and then for a second load after switching to edit mode.

@Dosant Dosant added the release_note:skip Skip the PR/issue when compiling release notes label Oct 21, 2020
@@ -1109,12 +1111,6 @@ export class DashboardAppController {
$scope.model.filters = filterManager.getFilters();
$scope.model.query = queryStringManager.getQuery();
dashboardStateManager.applyFilters($scope.model.query, $scope.model.filters);
- if (dashboardContainer) {
- dashboardContainer.updateInput({
@Dosant (Contributor, Author):

This is not needed. After removing it, all dashboard container updates are handled in a single place now.
Looking forward to this file being refactored.

@ThomThomson (Contributor):

I'm in the middle of completely re-doing this file as part of deangularization. It will be a functional component - I may end up pinging you for some input at some point.

@Dosant (Contributor, Author) commented Oct 27, 2020:

@ThomThomson, that's great news 👍 I'll be glad to chime in.
From the perspective of this PR, the important bit is that we call dashboardContainer.updateInput in a single place, and we assume this is when we want to start a new search session.

@Dosant Dosant marked this pull request as ready for review October 26, 2020 12:33
@Dosant Dosant requested a review from a team October 26, 2020 12:33
@Dosant Dosant requested review from a team as code owners October 26, 2020 12:33
@botelastic botelastic bot added the Feature:Embedding Embedding content via iFrame label Oct 26, 2020
@lukasolson (Member) left a comment:

LGTM, verified that the searchSessionId seems to be set properly across visualizations that use SearchSource. It's updated properly when the filter/query/time range is updated and a new query is sent out.

@@ -153,6 +153,21 @@ export class RequestsViewComponent extends Component<InspectorViewProps, Request
</EuiText>
)}

+ {this.state.request && this.state.request.searchSessionId && (
Member:

Is this only for the functional tests, or do you think there is value in showing this parameter to the user?

@Dosant (Contributor, Author):

I think it would also be useful for debugging and support purposes

@ThomThomson (Contributor) left a comment:

Code & behaviours LGTM.

Tested locally in Chrome by making changes in the dashboard app and ensuring that every embeddable uses the same sessionId, and that the sessionId changes correctly with every change.

@Dosant (Contributor, Author) commented Oct 28, 2020:

@elasticmachine merge upstream

@flash1293 (Contributor):
There's something weird going on, I think. If I hit the refresh button, the Aborted label flashes in saved searches:
(GIF: the Aborted label briefly flashing in a saved search panel)

I debugged into the Lens embeddable, and for a single hit of refresh, render is called twice: once with the old session id, then with the new session id (it's probably the same for the saved search, which is why the label flashes). On master it's called just once.

@@ -19,5 +19,6 @@ export declare type EmbeddableInput = {
timeRange?: TimeRange;
query?: Query;
filters?: Filter[];
+ searchSessionId?: string;
Contributor:

This is a major nit, but do we really want the name to be so specific?
I think sessions can and will be used for more than search.

@Dosant (Contributor, Author) commented Oct 29, 2020:

I think sessionId is way too vague at the dashboard / embeddable level, but OK at the search level.
sessionId on a dashboard could mean anything:

  • a user authentication session
  • a dashboard page session (created once when the user loads a dashboard)
  • a dashboard app session (to track drill-downs between dashboards without navigating away from the dashboard app)
  • the search / fetch session

Contributor:

Agreed, if this terminology is going to be used in solutions :)

@Dosant (Contributor, Author) commented Oct 29, 2020:

@flash1293,

> There's something weird going on I think. If I hit the refresh button, the Aborted label flashes in saved searches:
> (GIF: the Aborted label briefly flashing in a saved search panel)
>
> I debugged into the Lens embeddable and for a single hit of refresh, it's calling render twice - once with the old session id, then with the new session id (it's probably the same for the saved search, that's why the label flashes). On master it's just called once.

Thanks for pointing that out. There were two issues I found and fixed (sketched after this list); let's see what the tests say...

  1. When an embeddable was updated with input {lastRequestTime, searchSessionId}, it was first reloaded with the OLDER input, and only then was the input updated with the new searchSessionId; the searchSessionId change then triggered a second render. I changed the order, so reload is now called after the input is updated.
  2. Initially I made embeddables re-render when searchSessionId changes, but this caused an issue: for an input update like {lastRequestTime, searchSessionId} we trigger two renders if the embeddable doesn't handle the lastRequestTime comparison internally. Lens does, but others don't. I decided to revert the searchSessionId check in the internal embeddable change detection and just forward this.input.searchSessionId during rendering, assuming searchSessionId can never change by itself (which seems to be the case). The other option would be to keep searchSessionId as part of embeddable change detection, but then also add lastRequestTime tracking to the other embeddables, as Lens does.
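
In code terms, the idea behind both fixes is roughly the following. This is a simplified illustration with assumed names, not the embeddable framework's actual internals:

```ts
// Simplified illustration of the two fixes; names are assumptions.
interface PanelInput {
  lastRequestTime?: number;
  searchSessionId?: string;
}

class ChildEmbeddableSketch {
  private input: PanelInput = {};

  // Fix 2: a searchSessionId change alone no longer triggers internal change
  // detection / re-rendering; the value is just stored on the input.
  updateInput(input: PanelInput): void {
    this.input = input;
  }

  // The current searchSessionId is read only at render/fetch time and forwarded
  // to the search call.
  reload(): void {
    const sessionId = this.input.searchSessionId;
    // ...issue the search with { sessionId }...
    void sessionId;
  }
}

class ContainerSketch {
  constructor(private child: ChildEmbeddableSketch) {}

  // Fix 1: push the new input (including the new searchSessionId) to the child
  // first, and only then ask it to reload, so the reload never runs with the
  // previous session id.
  applyChanges(input: PanelInput): void {
    this.child.updateInput(input);
    this.child.reload();
  }
}
```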

@lizozom (Contributor) left a comment:

Looks good.
Added a couple of suggestions.

@@ -374,6 +374,7 @@ export class VisualizeEmbeddable
query: this.input.query,
filters: this.input.filters,
},
+ searchSessionId: this.input.searchSessionId,
@lizozom (Contributor) commented Oct 29, 2020:

If we pass in the searchSessionId from the visualize app, would that enable sessions for that app too?

(Not in this PR)

return searchSessionId;
};

const panel1SessionId1 = await getSearchSessionIdByPanel('Sum of Bytes by Extension');
Contributor:

Would it make sense to add tests to test/plugin_functional/test_suites/data_plugin/session.ts to make sure that sessions are being created once per load / reload?

It was what uncovered the problematic behaviors in Dashboard in the first place.

@Dosant (Contributor, Author):

I will take a look 👍
but I would like to do it in a separate PR;
I really want to unblock a bunch of follow-up PRs by merging this :(

@Dosant (Contributor, Author) commented Oct 29, 2020:

@elasticmachine merge upstream

@flash1293 (Contributor) left a comment:

Lens and saved searches LGTM - it seems like the search session id is passed through correctly. It seems like Discover is fetching twice on changing the time range, but that happens on master as well. I will open a separate issue for this.

@kibanamachine (Contributor) commented Feb 22, 2021:

💔 Build Failed

Failed CI Steps


Test Failures

Firefox UI Functional Tests.test/functional/apps/dashboard/dashboard_save·js.dashboard app using legacy data dashboard save warns on duplicate name for new dashboard

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 3 times on tracked branches: https://github.com/elastic/kibana/issues/41096

[00:00:00]       │
[00:00:15]         └-: dashboard app
[00:00:15]           └-> "before all" hook
[00:07:55]           └-: using legacy data
[00:07:55]             └-> "before all" hook
[00:07:55]             └-> "before all" hook: loadLogstash
[00:07:55]               │ info [logstash_functional] Loading "mappings.json"
[00:07:55]               │ info [logstash_functional] Loading "data.json.gz"
[00:07:55]               │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [logstash-2015.09.22] creating index, cause [api], templates [], shards [1]/[0]
[00:07:55]               │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.22][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.22][0]]"
[00:07:55]               │ info [logstash_functional] Created index "logstash-2015.09.22"
[00:07:55]               │ debg [logstash_functional] "logstash-2015.09.22" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:07:55]               │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [logstash-2015.09.20] creating index, cause [api], templates [], shards [1]/[0]
[00:07:55]               │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.20][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.20][0]]"
[00:07:55]               │ info [logstash_functional] Created index "logstash-2015.09.20"
[00:07:55]               │ debg [logstash_functional] "logstash-2015.09.20" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:07:55]               │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [logstash-2015.09.21] creating index, cause [api], templates [], shards [1]/[0]
[00:07:56]               │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.21][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.21][0]]"
[00:07:56]               │ info [logstash_functional] Created index "logstash-2015.09.21"
[00:07:56]               │ debg [logstash_functional] "logstash-2015.09.21" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:08:05]               │ info progress: 12339
[00:08:07]               │ info [logstash_functional] Indexed 4633 docs into "logstash-2015.09.22"
[00:08:07]               │ info [logstash_functional] Indexed 4757 docs into "logstash-2015.09.20"
[00:08:07]               │ info [logstash_functional] Indexed 4614 docs into "logstash-2015.09.21"
[00:08:07]             └-: dashboard save
[00:08:07]               └-> "before all" hook
[00:08:07]               └-> "before all" hook
[00:08:07]                 │ debg load kibana index with visualizations and log data
[00:08:07]                 │ info [dashboard/legacy] Loading "mappings.json"
[00:08:07]                 │ info [dashboard/legacy] Loading "data.json.gz"
[00:08:07]                 │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [.kibana_1/7WhsFZutRRCtBvHA_tXBTw] deleting index
[00:08:07]                 │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [.kibana_2/YzFHmRZ8Rq-mG-2xIM6A9A] deleting index
[00:08:07]                 │ info [dashboard/legacy] Deleted existing index [".kibana_2",".kibana_1"]
[00:08:07]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [.kibana] creating index, cause [api], templates [], shards [1]/[1]
[00:08:07]                 │ info [dashboard/legacy] Created index ".kibana"
[00:08:07]                 │ debg [dashboard/legacy] ".kibana" settings {"index":{"number_of_replicas":"1","number_of_shards":"1"}}
[00:08:07]                 │ info [dashboard/legacy] Indexed 9 docs into ".kibana"
[00:08:07]                 │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [.kibana/KbKjYTczSiKrHpf6oE0rkQ] update_mapping [_doc]
[00:08:07]                 │ debg Migrating saved objects
[00:08:07]                 │ proc [kibana]   log   [18:51:30.368] [info][savedobjects-service] Creating index .kibana_2.
[00:08:07]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [.kibana_2] creating index, cause [api], templates [], shards [1]/[1]
[00:08:07]                 │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] updating number_of_replicas to [0] for indices [.kibana_2]
[00:08:07]                 │ proc [kibana]   log   [18:51:30.417] [info][savedobjects-service] Reindexing .kibana to .kibana_1
[00:08:07]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [.kibana_1] creating index, cause [api], templates [], shards [1]/[1]
[00:08:07]                 │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] updating number_of_replicas to [0] for indices [.kibana_1]
[00:08:08]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [.tasks] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[00:08:08]                 │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] updating number_of_replicas to [0] for indices [.tasks]
[00:08:08]                 │ info [o.e.t.LoggingTaskListener] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] 7811 finished with response BulkByScrollResponse[took=40.9ms,timed_out=false,sliceId=null,updated=0,created=9,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
[00:08:08]                 │ info [o.e.c.m.MetadataDeleteIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [.kibana/KbKjYTczSiKrHpf6oE0rkQ] deleting index
[00:08:08]                 │ proc [kibana]   log   [18:51:30.775] [info][savedobjects-service] Migrating .kibana_1 saved objects to .kibana_2
[00:08:08]                 │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [.kibana_2/ljRtkUSrRYCtsk_lOBsSZA] update_mapping [_doc]
[00:08:08]                 │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [.kibana_2/ljRtkUSrRYCtsk_lOBsSZA] update_mapping [_doc]
[00:08:08]                 │ proc [kibana]   log   [18:51:30.861] [info][savedobjects-service] Pointing alias .kibana to .kibana_2.
[00:08:08]                 │ proc [kibana]   log   [18:51:30.888] [info][savedobjects-service] Finished in 522ms.
[00:08:08]                 │ debg applying update to kibana config: {"accessibility:disableAnimations":true,"dateFormat:tz":"UTC"}
[00:08:08]                 │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [.kibana_2/ljRtkUSrRYCtsk_lOBsSZA] update_mapping [_doc]
[00:08:09]                 │ debg replacing kibana config doc: {"defaultIndex":"logstash-*"}
[00:08:10]                 │ debg navigating to dashboard url: http://localhost:61201/app/dashboards#/list
[00:08:10]                 │ debg navigate to: http://localhost:61201/app/dashboards#/list
[00:08:11]                 │ debg browser[log] "^ A single error about an inline script not firing due to content security policy is expected!"
[00:08:11]                 │ debg ... sleep(700) start
[00:08:11]                 │ debg ... sleep(700) end
[00:08:11]                 │ debg returned from get, calling refresh
[00:08:12]                 │ debg browser[log] "^ A single error about an inline script not firing due to content security policy is expected!"
[00:08:12]                 │ debg currentUrl = http://localhost:61201/app/dashboards#/list
[00:08:12]                 │          appUrl = http://localhost:61201/app/dashboards#/list
[00:08:12]                 │ debg TestSubjects.find(kibanaChrome)
[00:08:12]                 │ debg Find.findByCssSelector('[data-test-subj="kibanaChrome"]') with timeout=60000
[00:08:13]                 │ debg ... sleep(501) start
[00:08:13]                 │ debg ... sleep(501) end
[00:08:13]                 │ debg in navigateTo url = http://localhost:61201/app/dashboards#/list?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15m,to:now))
[00:08:13]                 │ debg --- retry.try error: URL changed, waiting for it to settle
[00:08:14]                 │ debg ... sleep(501) start
[00:08:14]                 │ debg ... sleep(501) end
[00:08:14]                 │ debg in navigateTo url = http://localhost:61201/app/dashboards#/list?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15m,to:now))
[00:08:14]                 │ debg TestSubjects.exists(statusPageContainer)
[00:08:14]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="statusPageContainer"]') with timeout=2500
[00:08:17]                 │ debg --- retry.tryForTime error: [data-test-subj="statusPageContainer"] is not displayed
[00:08:17]               └-> warns on duplicate name for new dashboard
[00:08:17]                 └-> "before each" hook: global before each
[00:08:17]                 │ debg TestSubjects.exists(newItemButton)
[00:08:17]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="newItemButton"]') with timeout=10000
[00:08:20]                 │ debg --- retry.tryForTime error: [data-test-subj="newItemButton"] is not displayed
[00:08:23]                 │ debg --- retry.tryForTime failed again with the same message...
[00:08:26]                 │ debg --- retry.tryForTime failed again with the same message...
[00:08:29]                 │ debg --- retry.tryForTime failed again with the same message...
[00:08:29]                 │ debg TestSubjects.click(createDashboardPromptButton)
[00:08:29]                 │ debg Find.clickByCssSelector('[data-test-subj="createDashboardPromptButton"]') with timeout=10000
[00:08:29]                 │ debg Find.findByCssSelector('[data-test-subj="createDashboardPromptButton"]') with timeout=10000
[00:08:30]                 │ debg waitForRenderComplete
[00:08:30]                 │ debg in getSharedItemsCount
[00:08:30]                 │ debg Find.findByCssSelector('[data-shared-items-count]') with timeout=10000
[00:08:30]                 │ debg Renderable.waitForRender for 0 elements
[00:08:30]                 │ debg Find.allByCssSelector('[data-render-complete="true"]') with timeout=10000
[00:08:47]                 │ proc [kibana]   log   [18:52:10.140] [error][data][elasticsearch] [ConnectionError]: socket hang up
[00:08:47]                 │ proc [kibana]   log   [18:52:10.198] [error][savedobjects-service] Unable to retrieve version information from Elasticsearch nodes.
[00:08:48]                 │ debg Find.allByCssSelector('[data-loading]') with timeout=1000
[00:08:50]                 │ proc [kibana]   log   [18:52:12.621] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:08:50]                 │ debg TestSubjects.click(dashboardSaveMenuItem)
[00:08:50]                 │ debg Find.clickByCssSelector('[data-test-subj="dashboardSaveMenuItem"]') with timeout=10000
[00:08:50]                 │ debg Find.findByCssSelector('[data-test-subj="dashboardSaveMenuItem"]') with timeout=10000
[00:08:51]                 │ debg TestSubjects.find(savedObjectSaveModal)
[00:08:51]                 │ debg Find.findByCssSelector('[data-test-subj="savedObjectSaveModal"]') with timeout=10000
[00:08:51]                 │ debg entering new title
[00:08:51]                 │ debg TestSubjects.setValue(savedObjectTitle, Dashboard Save Test)
[00:08:51]                 │ debg TestSubjects.click(savedObjectTitle)
[00:08:51]                 │ debg Find.clickByCssSelector('[data-test-subj="savedObjectTitle"]') with timeout=10000
[00:08:51]                 │ debg Find.findByCssSelector('[data-test-subj="savedObjectTitle"]') with timeout=10000
[00:08:52]                 │ debg DashboardPage.clickSave
[00:08:52]                 │ debg TestSubjects.click(confirmSaveSavedObjectButton)
[00:08:52]                 │ debg Find.clickByCssSelector('[data-test-subj="confirmSaveSavedObjectButton"]') with timeout=10000
[00:08:52]                 │ debg Find.findByCssSelector('[data-test-subj="confirmSaveSavedObjectButton"]') with timeout=10000
[00:08:52]                 │ debg Find.waitForElementStale with timeout=10000
[00:08:52]                 │ proc [kibana]   log   [18:52:15.040] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:08:52]                 │ proc [kibana]   log   [18:52:15.119] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:08:52]                 │ debg TestSubjects.exists(saveDashboardSuccess)
[00:08:52]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="saveDashboardSuccess"]') with timeout=120000
[00:08:55]                 │ proc [kibana]   log   [18:52:17.625] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:08:55]                 │ debg --- retry.tryForTime error: [data-test-subj="saveDashboardSuccess"] is not displayed
[00:08:58]                 │ proc [kibana]   log   [18:52:20.678] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:00]                 │ proc [kibana]   log   [18:52:23.123] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:05]                 │ proc [kibana]   log   [18:52:26.663] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:23]                 │ proc [kibana]   log   [18:52:33.702] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:23]                 │ proc [kibana]   log   [18:52:45.624] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:23]                 │ debg --- retry.tryForTime failed again with the same message...
[00:09:24]                 │ proc [kibana]   log   [18:52:47.164] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:27]                 │ proc [kibana]   log   [18:52:49.691] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:34]                 │ proc [kibana]   log   [18:52:56.363] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:35]                 │ debg --- retry.tryForTime failed again with the same message...
[00:09:36]                 │ proc [kibana]   log   [18:52:58.845] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:38]                 │ proc [kibana]   log   [18:53:01.348] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:39]                 │ debg --- retry.tryForTime failed again with the same message...
[00:09:41]                 │ proc [kibana]   log   [18:53:03.851] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:42]                 │ debg --- retry.tryForTime failed again with the same message...
[00:09:43]                 │ proc [kibana]   log   [18:53:05.723] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:43]                 │ proc [kibana]   log   [18:53:05.724] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:43]                 │ proc [kibana]   log   [18:53:05.725] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:43]                 │ proc [kibana]   log   [18:53:06.351] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:45]                 │ debg --- retry.tryForTime failed again with the same message...
[00:09:46]                 │ proc [kibana]   log   [18:53:08.852] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:48]                 │ debg --- retry.tryForTime failed again with the same message...
[00:09:48]                 │ proc [kibana]   log   [18:53:11.352] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:51]                 │ proc [kibana]   log   [18:53:13.854] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:51]                 │ debg --- retry.tryForTime failed again with the same message...
[00:09:53]                 │ proc [kibana]   log   [18:53:16.354] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:54]                 │ debg --- retry.tryForTime failed again with the same message...
[00:09:56]                 │ proc [kibana]   log   [18:53:18.854] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:09:57]                 │ debg --- retry.tryForTime failed again with the same message...
[00:09:58]                 │ proc [kibana]   log   [18:53:21.353] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:10:00]                 │ debg --- retry.tryForTime failed again with the same message...
[00:10:01]                 │ proc [kibana]   log   [18:53:23.854] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:10:03]                 │ debg --- retry.tryForTime failed again with the same message...
[00:10:03]                 │ proc [kibana]   log   [18:53:26.357] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:10:06]                 │ proc [kibana]   log   [18:53:28.858] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:12:24]                 │ proc [kibana]   log   [18:55:47.262] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:12:26]                 │ proc [kibana]   log   [18:55:48.544] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:14:06]                 │ proc [kibana]   log   [18:57:20.756] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:61202
[00:37:44]                 └- ✖ fail: dashboard app using legacy data dashboard save warns on duplicate name for new dashboard
[00:37:44]                 │      Error: Timeout of 360000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/dev/shm/workspace/parallel/20/kibana/test/functional/apps/dashboard/dashboard_save.js)
[00:37:44]                 │   
[00:37:44]                 │ 
[00:37:44]                 │ 

Stack Trace

[Error: Timeout of 360000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/dev/shm/workspace/parallel/20/kibana/test/functional/apps/dashboard/dashboard_save.js)]

Firefox UI Functional Tests.test/functional/apps/dashboard/index·js.dashboard app using legacy data "after all" hook: unloadLogstash in "using legacy data"

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Failure is likely irrelevant

[00:00:00]       │
[00:00:15]         └-: dashboard app
[00:00:15]           └-> "before all" hook
[00:07:55]           └-: using legacy data
[00:07:55]             └-> "before all" hook
[00:07:55]             └-> "before all" hook: loadLogstash
[00:07:55]               │ info [logstash_functional] Loading "mappings.json"
[00:07:55]               │ info [logstash_functional] Loading "data.json.gz"
[00:07:55]               │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [logstash-2015.09.22] creating index, cause [api], templates [], shards [1]/[0]
[00:07:55]               │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.22][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.22][0]]"
[00:07:55]               │ info [logstash_functional] Created index "logstash-2015.09.22"
[00:07:55]               │ debg [logstash_functional] "logstash-2015.09.22" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:07:55]               │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [logstash-2015.09.20] creating index, cause [api], templates [], shards [1]/[0]
[00:07:55]               │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.20][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.20][0]]"
[00:07:55]               │ info [logstash_functional] Created index "logstash-2015.09.20"
[00:07:55]               │ debg [logstash_functional] "logstash-2015.09.20" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:07:55]               │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [logstash-2015.09.21] creating index, cause [api], templates [], shards [1]/[0]
[00:07:56]               │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.21][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.21][0]]"
[00:07:56]               │ info [logstash_functional] Created index "logstash-2015.09.21"
[00:07:56]               │ debg [logstash_functional] "logstash-2015.09.21" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:08:05]               │ info progress: 12339
[00:08:07]               │ info [logstash_functional] Indexed 4633 docs into "logstash-2015.09.22"
[00:08:07]               │ info [logstash_functional] Indexed 4757 docs into "logstash-2015.09.20"
[00:08:07]               │ info [logstash_functional] Indexed 4614 docs into "logstash-2015.09.21"
[00:37:45]             └-> "after all" hook: unloadLogstash
[00:37:45]               │ info [logstash_functional] Unloading indices from "mappings.json"
[00:37:45]               │ info Taking screenshot "/dev/shm/workspace/parallel/20/kibana/test/functional/screenshots/failure/dashboard app using legacy data _after all_ hook_ unloadLogstash.png"
[00:37:45]               │ info Current URL is: http://localhost:61201/app/dashboards#/create?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15m,to:now))&_a=(description:%27%27,filters:!(),fullScreenMode:!f,options:(hidePanelTitles:!f,useMargins:!t),panels:!(),query:(language:kuery,query:%27%27),timeRestore:!f,title:%27%27,viewMode:edit)
[00:37:45]               │ info Saving page source to: /dev/shm/workspace/parallel/20/kibana/test/functional/failure_debug/html/dashboard app using legacy data _after all_ hook_ unloadLogstash.html
[00:37:45]               └- ✖ fail: dashboard app using legacy data "after all" hook: unloadLogstash in "using legacy data"
[00:37:45]               │      Error: No Living connections
[00:37:45]               │       at sendReqWithConnection (/dev/shm/workspace/kibana/node_modules/elasticsearch/src/lib/transport.js:266:15)
[00:37:45]               │       at next (/dev/shm/workspace/kibana/node_modules/elasticsearch/src/lib/connection_pool.js:243:7)
[00:37:45]               │       at process._tickCallback (internal/process/next_tick.js:61:11)
[00:37:45]               │ 
[00:37:45]               │ 

Stack Trace

{ Error: No Living connections
    at sendReqWithConnection (/dev/shm/workspace/kibana/node_modules/elasticsearch/src/lib/transport.js:266:15)
    at next (/dev/shm/workspace/kibana/node_modules/elasticsearch/src/lib/connection_pool.js:243:7)
    at process._tickCallback (internal/process/next_tick.js:61:11)
  message: 'No Living connections',
  body: undefined,
  status: undefined }

Chrome UI Functional Tests.test/functional/apps/visualize/index·ts.visualize app "before all" hook in "visualize app"

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 1 times on tracked branches: https://github.com/elastic/kibana/issues/50337

[00:00:00]       │
[00:00:00]         └-: visualize app
[00:00:00]           └-> "before all" hook
[00:00:00]           └-> "before all" hook
[00:00:00]             │ debg Starting visualize before method
[00:00:00]             │ info [logstash_functional] Loading "mappings.json"
[00:00:00]             │ info [logstash_functional] Loading "data.json.gz"
[00:00:00]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [logstash-2015.09.22] creating index, cause [api], templates [], shards [1]/[0]
[00:00:00]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.22][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.22][0]]"
[00:00:00]             │ info [logstash_functional] Created index "logstash-2015.09.22"
[00:00:00]             │ debg [logstash_functional] "logstash-2015.09.22" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:00:00]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [logstash-2015.09.20] creating index, cause [api], templates [], shards [1]/[0]
[00:00:00]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.20][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.20][0]]"
[00:00:00]             │ info [logstash_functional] Created index "logstash-2015.09.20"
[00:00:00]             │ debg [logstash_functional] "logstash-2015.09.20" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:00:00]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [logstash-2015.09.21] creating index, cause [api], templates [], shards [1]/[0]
[00:00:00]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-2015.09.21][0]]])." previous.health="YELLOW" reason="shards started [[logstash-2015.09.21][0]]"
[00:00:00]             │ info [logstash_functional] Created index "logstash-2015.09.21"
[00:00:00]             │ debg [logstash_functional] "logstash-2015.09.21" settings {"index":{"analysis":{"analyzer":{"url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:00:10]             │ info progress: 8422
[00:00:18]             │ info [logstash_functional] Indexed 4633 docs into "logstash-2015.09.22"
[00:00:18]             │ info [logstash_functional] Indexed 4757 docs into "logstash-2015.09.20"
[00:00:18]             │ info [logstash_functional] Indexed 4614 docs into "logstash-2015.09.21"
[00:00:19]             │ info [long_window_logstash] Loading "mappings.json"
[00:00:19]             │ info [long_window_logstash] Loading "data.json.gz"
[00:00:19]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] [long-window-logstash-0] creating index, cause [api], templates [], shards [1]/[0]
[00:00:19]             │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-18-tests-xxl-1614018157859894889] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[long-window-logstash-0][0]]])." previous.health="YELLOW" reason="shards started [[long-window-logstash-0][0]]"
[00:00:19]             │ info [long_window_logstash] Created index "long-window-logstash-0"
[00:00:19]             │ debg [long_window_logstash] "long-window-logstash-0" settings {"index":{"analysis":{"analyzer":{"makelogs_url":{"max_token_length":"1000","tokenizer":"uax_url_email","type":"standard"}}},"number_of_replicas":"0","number_of_shards":"1"}}
[00:00:29]             │ info progress: 7456
[00:00:31]             │ proc [kibana]   log   [18:43:02.593] [error][data][elasticsearch] [ConnectionError]: connect ECONNREFUSED 127.0.0.1:6142
[00:00:31]             │ info Taking screenshot "/dev/shm/workspace/parallel/4/kibana/test/functional/screenshots/failure/visualize app _before all_ hook.png"
[00:00:31]             │ proc [kibana]   log   [18:43:02.623] [error][savedobjects-service] Unable to retrieve version information from Elasticsearch nodes.
[00:00:31]             │ERROR SCREENSHOT FAILED
[00:00:31]             │ERROR WebDriverError: unknown error: session deleted because of page crash
[00:00:31]             │      from unknown error: cannot determine loading status
[00:00:31]             │      from tab crashed
[00:00:31]             │        (Session info: headless chrome=88.0.4324.150)
[00:00:31]             │          at Object.throwDecodedError (/dev/shm/workspace/kibana/node_modules/selenium-webdriver/lib/error.js:550:15)
[00:00:31]             │          at parseHttpResponse (/dev/shm/workspace/kibana/node_modules/selenium-webdriver/lib/http.js:565:13)
[00:00:31]             │          at Executor.execute (/dev/shm/workspace/kibana/node_modules/selenium-webdriver/lib/http.js:491:26)
[00:00:31]             │          at process._tickCallback (internal/process/next_tick.js:68:7)
[00:00:31]             └- ✖ fail: visualize app "before all" hook in "visualize app"
[00:00:31]             │      NoSuchSessionError: invalid session id
[00:00:31]             │       at Object.throwDecodedError (/dev/shm/workspace/kibana/node_modules/selenium-webdriver/lib/error.js:550:15)
[00:00:31]             │       at parseHttpResponse (/dev/shm/workspace/kibana/node_modules/selenium-webdriver/lib/http.js:565:13)
[00:00:31]             │       at Executor.execute (/dev/shm/workspace/kibana/node_modules/selenium-webdriver/lib/http.js:491:26)
[00:00:31]             │       at process._tickCallback (internal/process/next_tick.js:68:7)
[00:00:31]             │ 
[00:00:31]             │ 

Stack Trace

{ NoSuchSessionError: invalid session id
    at Object.throwDecodedError (/dev/shm/workspace/kibana/node_modules/selenium-webdriver/lib/error.js:550:15)
    at parseHttpResponse (/dev/shm/workspace/kibana/node_modules/selenium-webdriver/lib/http.js:565:13)
    at Executor.execute (/dev/shm/workspace/kibana/node_modules/selenium-webdriver/lib/http.js:491:26)
    at process._tickCallback (internal/process/next_tick.js:68:7)
  name: 'NoSuchSessionError',
  remoteStacktrace: '#0 0x562de80dd199 <unknown>\n' }

and 7 more failures, only showing the first 3.

Metrics [docs]

‼️ ERROR: no builds found for mergeBase sha [275c30a]

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

Labels
Feature:Embedding Embedding content via iFrame Feature:Search Querying infrastructure in Kibana release_note:feature Makes this part of the condensed release notes v7.11.0 v8.0.0
8 participants