
Allow the StartupCheckStrategy timeout value to be configurable #1308

Merged
7 commits merged into testcontainers:master on Apr 15, 2019

Conversation

mimfgg
Contributor

mimfgg commented Mar 15, 2019

In the case of big Docker images (the Elasticsearch 6.6.0 image is 808 MB, for example), the hardcoded 30s timeout makes a lot of our builds fail the first time they run.
We fixed this locally, but it would be nice to have it as a parameter on the parent class.
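
For illustration, a minimal sketch of the kind of knob we're after; the withTimeout setter on the strategy is the proposed addition (not an existing 1.10.x method), and the image name and duration are just examples:

    import java.time.Duration;
    import org.testcontainers.containers.GenericContainer;
    import org.testcontainers.containers.startupcheck.IsRunningStartupCheckStrategy;

    // Sketch of the proposed usage: make the startup-check timeout a parameter
    // instead of the hardcoded 30s. withTimeout(...) is the setter this PR
    // proposes; the rest is the existing GenericContainer API.
    GenericContainer<?> elasticsearch =
        new GenericContainer<>("docker.elastic.co/elasticsearch/elasticsearch:6.6.0")
            .withExposedPorts(9200)
            .withStartupCheckStrategy(
                new IsRunningStartupCheckStrategy().withTimeout(Duration.ofMinutes(3)));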

@ftardif
Contributor

ftardif commented Mar 15, 2019

I think I am having a similar problem; could you share the stack trace you are getting?

@ftardif
Contributor

ftardif commented Mar 15, 2019

Can we make sure that this timeout can also be configured when using DockerComposeContainer?
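
For reference, compose setups can already stretch the per-service wait timeout like this; note that it tunes the WaitStrategy timeout, which is a different knob from the StartupCheckStrategy timeout this PR targets, and the service name and port below are just examples:

    import java.io.File;
    import java.time.Duration;
    import org.testcontainers.containers.DockerComposeContainer;
    import org.testcontainers.containers.wait.strategy.Wait;

    // Existing way to stretch the per-service wait timeout in compose setups.
    // This tunes the WaitStrategy timeout, not the StartupCheckStrategy timeout
    // that this PR makes configurable.
    DockerComposeContainer<?> environment =
        new DockerComposeContainer<>(new File("docker-compose.yml"))
            .withExposedService("elasticsearch_1", 9200,
                Wait.forHttp("/").forStatusCode(200)
                    .withStartupTimeout(Duration.ofMinutes(3)));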

@mimfgg
Contributor Author

mimfgg commented Mar 15, 2019

you should get something like:

Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
 	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:83)
 	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:214)
 	... 57 common frames omitted
 Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
 	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:286)
 	at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:216)
 	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:76)
 	... 58 common frames omitted
 Caused by: org.testcontainers.containers.ContainerLaunchException: Timed out waiting for URL to be accessible (http://10.187.8.69:32771/ should return HTTP 200)
 	at org.testcontainers.containers.wait.strategy.HttpWaitStrategy.waitUntilReady(HttpWaitStrategy.java:197)
 	at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:35)
 	at org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:591)
	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:263)

DockerComposeContainer seems to work a bit differently from GenericContainer.

@rnorth
Member

rnorth commented Mar 18, 2019

I think this looks like a good improvement to me, thank you @mimfgg.

However, I'm a little confused:

  • Your change is for StartupCheckStrategy, which controls the 'is it running yet?' logic

  • The stack trace in your comment is about Wait Strategies; these run after startup and control the decision for 'given that it's running, is it listening?'

I'd be happy to accept this change, but if this stack trace is your only problem, I'd worry that this PR might not help you! Can you please confirm?
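
To make the two knobs concrete, a minimal sketch; the withTimeout call on the startup check strategy is the change proposed in this PR, while the waitingFor part is existing API and is the timeout the stack trace above is actually hitting (names and durations are illustrative):

    import java.time.Duration;
    import org.testcontainers.containers.GenericContainer;
    import org.testcontainers.containers.startupcheck.IsRunningStartupCheckStrategy;
    import org.testcontainers.containers.wait.strategy.Wait;

    GenericContainer<?> container =
        new GenericContainer<>("docker.elastic.co/elasticsearch/elasticsearch:6.6.0")
            .withExposedPorts(9200)
            // 'is it running yet?': StartupCheckStrategy; withTimeout is the
            // setter proposed in this PR, not an existing 1.10.x method
            .withStartupCheckStrategy(
                new IsRunningStartupCheckStrategy().withTimeout(Duration.ofMinutes(2)))
            // 'given that it's running, is it listening?': WaitStrategy; already
            // configurable today, and the timeout the stack trace reports
            .waitingFor(Wait.forHttp("/").forStatusCode(200)
                .withStartupTimeout(Duration.ofMinutes(3)));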

@mimfgg
Contributor Author

mimfgg commented Mar 19, 2019

Yes, I picked the wrong log... However, I don't really understand something; the actual trace we have is:

[2019-03-14T13:29:58.945Z] org.testcontainers.containers.ContainerLaunchException: Timed out waiting for URL to be accessible (http://10.187.8.67:32771/ should return HTTP 200)
[2019-03-14T13:29:58.945Z] 	at org.testcontainers.containers.wait.strategy.HttpWaitStrategy.waitUntilReady(HttpWaitStrategy.java:197)
[2019-03-14T13:29:58.945Z] 	at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:35)
[2019-03-14T13:29:58.945Z] 	at org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:591)
[2019-03-14T13:29:58.945Z] 	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:263)
[2019-03-14T13:29:58.945Z] 	at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:216)
[2019-03-14T13:29:58.945Z] 	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:76)
[2019-03-14T13:29:58.945Z] 	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:214)
[2019-03-14T13:29:58.945Z] 	at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:203)

which doesn't match the source I get for the 1.10.5 version we have as a dependency:
[INFO] | | +- org.testcontainers:elasticsearch:jar:1.10.5:test
There is no doStart method at line 214.

... And I now see the -retag version. I think some of our servers have the original 1.10.5 cached, not the retag. I'll bump to 1.10.7 and do more debugging.

@mimfgg
Contributor Author

mimfgg commented Mar 19, 2019

I can't really reproduce this behaviour in 1.10.7. You can merge this PR or just close it if this property doesn't really need to be set... Our Artifactory just has a 1.10.5 (not retagged) version cached; I think we'll just move on :)

rnorth merged commit ed0db16 into testcontainers:master on Apr 15, 2019
rnorth added this to the next milestone on Apr 15, 2019
@rnorth
Member

rnorth commented Apr 16, 2019

Released in 1.11.2!
