Failing test: X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/create_endpoint_exceptions.ts - detection engine api security and spaces enabled Rule exception operators for endpoints "is one of" operator should filter 1 single value if it is set as an exception and the os_type is set to only 1 value #116231
Status: Closed
Tracked by: #161531
Labels: failed-test (A test failure on a tracked branch, potentially flaky-test), Team:Detections and Resp (Security Detection Response Team)

A test failed on a tracked branch.
First failure: CI Build - master

Comments
kibanamachine added the failed-test (A test failure on a tracked branch, potentially flaky-test) label on Oct 25, 2021
Pinging @elastic/security-detections-response (Team:Detections and Resp)
New failure: CI Build - main
FrankHassanabad added a commit that referenced this issue on Nov 2, 2021:
…oves 200 expect statements (#116987)

## Summary

e2e tests are still seeing flake with conflicts, and it looks like it _might_ be with querying rather than with inserting data. Hard to tell. This PR:

* Adds more console logging when the response is not a 200.
* Removes the 200 expect statement and hopes for the best, but should blow up in a different way if the status is not 200, and we will get the console logging statements.
* Fixes one other flake with the matrix histogram having different counts. We have encountered this before and are applying the same fix, which is to just have it check > 0.
* Fixes the timeouts seen where, roughly once in every 1k rule runs, a rule will not fire until _after_ the 5 minute mark. The timeouts were seen when running the flake runner.

Flake failures around `conflict`: #116926 #116904 #116231

Not saying this is going to fix those yet, but it's the last 200 OKs we did an expect on, so it might if we are ignoring the conflict. If it fails again, I am hopeful beyond hope that we get the body message and line number within the utilities to determine where/why we are getting these from time to time. It does look to fix the timeouts when a rule misfires, and it slows down the rate at which we continuously query for rule results.

Failure around the matrix histogram (the error messages are slightly different on CI each time): #97365

Ran this with the flake runner across groups 11 and 12, 100 times each, and did not see the conflict crop up:

https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/128
https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/129

The 1 failure in each of those runs was due to something on startup that prevented it from running.

### Checklist

Delete any items that are not applicable to this PR.

- [x] [Unit or functional tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html) were updated or added to match the most common scenarios
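As a rough illustration of the change described above, here is a minimal TypeScript sketch of logging the response instead of expecting a 200 up front, assuming a plain supertest agent. The helper name, base URL, and route handling are illustrative, not the actual code in x-pack/test/detection_engine_api_integration/utils.ts.

```ts
// Minimal sketch only: not the real utils.ts helpers, just the logging-instead-of-expect idea.
import supertest from 'supertest';

// Illustrative Kibana URL; the real tests get their agent from the FTR services.
const agent = supertest('http://localhost:5601');

/**
 * Posts a rule and returns the response body. Rather than failing immediately
 * on a non-200 status, it logs the status and body so a flaky 409 conflict is
 * visible in the CI output before later assertions fail.
 */
export const createRuleLoggingErrors = async (
  rule: Record<string, unknown>
): Promise<Record<string, unknown>> => {
  const response = await agent
    .post('/api/detection_engine/rules') // illustrative route
    .set('kbn-xsrf', 'true')
    .send(rule);

  if (response.status !== 200) {
    // eslint-disable-next-line no-console
    console.log(
      `Unexpected status ${response.status} creating rule: ${JSON.stringify(response.body)}`
    );
  }
  return response.body as Record<string, unknown>;
};
```

The matrix histogram fix mentioned above is the same spirit applied to counts: assert that the returned total is greater than zero rather than equal to an exact value that can differ between CI runs.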
kibanamachine pushed a commit to kibanamachine/kibana that referenced this issue on Nov 2, 2021:
…oves 200 expect statements (elastic#116987) (same commit message as above)
FrankHassanabad added a commit to FrankHassanabad/kibana that referenced this issue on Nov 2, 2021:
…oves 200 expect statements (elastic#116987) (same commit message as above, with conflicts in x-pack/test/detection_engine_api_integration/utils.ts)
kibanamachine added a commit that referenced this issue on Nov 2, 2021:
…oves 200 expect statements (#116987) (#117141) (same commit message as above) Co-authored-by: Frank Hassanabad <frank.hassanabad@elastic.co>
FrankHassanabad added a commit that referenced this issue on Nov 2, 2021:
…oves 200 expect statements (#116987) (#117147) (same commit message as above, with conflicts in x-pack/test/detection_engine_api_integration/utils.ts) Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
After running this test through the Flaky Test Runner for 100 iterations without any failures, it has been determined that the test is not truly flaky or failing. As a result, this ticket can be closed.