
[CI] SmokeTestMultiNodeClientYamlTestSuiteIT {create/60_refresh/refresh=wait_for} fails #24935

Closed
tlrx opened this issue May 29, 2017 · 6 comments

@tlrx (Member) commented May 29, 2017

The following test failed on CI (see this build):

org.elasticsearch.smoketest.SmokeTestMultiNodeClientYamlTestSuiteIT test {yaml=create/60_refresh/refresh=wait_for waits until changes are visible in search}

It doesn't reproduce locally and I don't know what happened. The trace shows that the wait_for option timed out, but I have no clue why:

1> [2017-05-28T00:04:33,592][INFO ][o.e.s.SmokeTestMultiNodeClientYamlTestSuiteIT] [test {yaml=cat.aliases/10_basic/Column headers}]: after test
1> [2017-05-28T00:04:33,598][INFO ][o.e.s.SmokeTestMultiNodeClientYamlTestSuiteIT] [test {yaml=create/60_refresh/refresh=wait_for waits until changes are visible in search}]: before test
1> [2017-05-28T00:05:03,705][INFO ][o.e.s.SmokeTestMultiNodeClientYamlTestSuiteIT] [test {yaml=create/60_refresh/refresh=wait_for waits until changes are visible in search}]: after test
1> [2017-05-28T00:05:03,709][INFO ][o.e.s.SmokeTestMultiNodeClientYamlTestSuiteIT] Stash dump on failure [{
1>   "stash" : {
1>     "body" : null
1>   }
1> }]
ERROR   30.1s | SmokeTestMultiNodeClientYamlTestSuiteIT.test {yaml=create/60_refresh/refresh=wait_for waits until changes are visible in search} <<< FAILURES!
 > Throwable #1: java.lang.RuntimeException: Failure at [create/60_refresh:67]: listener timeout after waiting for [30000] ms
 > 	at __randomizedtesting.SeedInfo.seed([C14AB41884F38AA8:491E8BC22A0FE750]:0)
 > 	at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:346)
 > 	at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:328)
 > 	at java.lang.Thread.run(Thread.java:748)
 > Caused by: java.io.IOException: listener timeout after waiting for [30000] ms
 > 	at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:660)
 > 	at org.elasticsearch.client.RestClient.performRequest(RestClient.java:219)
 > 	at org.elasticsearch.client.RestClient.performRequest(RestClient.java:191)
 > 	at org.elasticsearch.test.rest.yaml.ClientYamlTestClient.callApi(ClientYamlTestClient.java:169)
 > 	at org.elasticsearch.test.rest.yaml.ClientYamlTestExecutionContext.callApiInternal(ClientYamlTestExecutionContext.java:157)
 > 	at org.elasticsearch.test.rest.yaml.ClientYamlTestExecutionContext.callApi(ClientYamlTestExecutionContext.java:89)
 > 	at org.elasticsearch.test.rest.yaml.section.DoSection.execute(DoSection.java:221)
 > 	at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:344)
 > 	... 37 more
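
For context, the failing step exercises the create API with the refresh=wait_for option, which makes the indexing request block until the new document is visible to search. Below is a minimal sketch of the equivalent call through the low-level RestClient that appears in the trace above; the host, index, type, and document are hypothetical placeholders, not values taken from the actual YAML test.

import java.util.Collections;

import org.apache.http.HttpHost;
import org.apache.http.entity.ContentType;
import org.apache.http.nio.entity.NStringEntity;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class RefreshWaitForSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical single local node; the CI test runs against a multi-node test cluster instead.
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Create a document and ask Elasticsearch to hold the response until the change
            // is visible to search (refresh=wait_for), rather than returning immediately.
            NStringEntity doc = new NStringEntity("{\"foo\":\"bar\"}", ContentType.APPLICATION_JSON);
            Response create = client.performRequest(
                    "PUT",
                    "/test_index/test_type/1/_create",
                    Collections.singletonMap("refresh", "wait_for"),
                    doc);
            System.out.println(create.getStatusLine());

            // Once the create call returns, a search should find the document without an explicit refresh.
            Response search = client.performRequest(
                    "GET",
                    "/test_index/_search",
                    Collections.singletonMap("q", "foo:bar"));
            System.out.println(search.getStatusLine());
        }
    }
}

The "listener timeout after waiting for [30000] ms" in the trace is the client-side timeout of RestClient's SyncResponseListener, meaning the blocking create request never received a response within 30 seconds.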
@tlrx (Member Author) commented May 29, 2017

@nik9000 I assigned it to you since you might have an idea concerning the failing wait_for; feel free to reassign if needed.

@nik9000 (Contributor) commented May 29, 2017

I'll add it to my queue, but not very high up, because I don't think this failure is common.

@tlrx (Member Author) commented May 29, 2017

Should be fixed by #24926

@nik9000 (Contributor) commented May 29, 2017

Taking it off my list!

@talevy added v6.0.0 and removed v6.0.0-alpha2 labels Jul 12, 2017

@jasontedor (Member) commented Jul 12, 2017

If the same test fails twice, it does not necessarily mean the second failure has the same cause as the first. I think it's best not to reopen an old issue for a test failure unless the cause is definitively the same as the failure that was originally being tracked. I'm inclined to think the failure here is something different, since this test did not fail at all from May 29 until July 11; that leads me to believe something new tickled this test and caused the failure. I logged into the slave to capture the build logs, but sadly they had already been destroyed. I'm going to close this issue. If this occurs again, please capture the cause and open a new issue (unless it's genuinely the same cause).

@jasontedor closed this Jul 12, 2017

@colings86 added v6.0.0-beta1 and removed v6.0.0 labels Jul 31, 2017
