
time-box the docker client to avoid ever getting stuck #773

Merged
merged 4 commits into master from time_limited_docker on Jan 4, 2016

Conversation

ssalinas
Member

It seems that we can get into an odd state when contacting the docker daemon. Even with the appropriate timeouts set (the default Apache client timeouts in this case), we can still end up in a situation where we try to contact the docker daemon and then hang there forever. While the root problem lies in the docker daemon, it shouldn't stop the executor from being able to shut down properly.

This optionally adds a time limit to every call made through the docker client, so a hung daemon call can't leave an executor process sitting around forever doing nothing.
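For reference, here is a minimal sketch of the kind of time-boxing described above, assuming Guava's TimeLimiter and the spotify docker-client; the class and names below are illustrative, not the actual SingularityExecutor code.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import com.google.common.util.concurrent.SimpleTimeLimiter;
import com.google.common.util.concurrent.TimeLimiter;
import com.google.common.util.concurrent.UncheckedTimeoutException;
import com.spotify.docker.client.DefaultDockerClient;
import com.spotify.docker.client.DockerClient;

public class TimeBoxedDockerClientExample {

  // Wrap a DockerClient in a dynamic proxy that abandons any call that does not
  // return within the given limit, throwing UncheckedTimeoutException instead of
  // hanging forever. (Newer Guava versions use SimpleTimeLimiter.create(...).)
  public static DockerClient timeBoxed(DockerClient delegate, long timeoutSeconds) {
    TimeLimiter limiter = new SimpleTimeLimiter(Executors.newCachedThreadPool());
    return limiter.newProxy(delegate, DockerClient.class, timeoutSeconds, TimeUnit.SECONDS);
  }

  public static void main(String[] args) throws Exception {
    DockerClient docker = timeBoxed(DefaultDockerClient.fromEnv().build(), 60);
    try {
      docker.listContainers();
    } catch (UncheckedTimeoutException e) {
      // The daemon call hung; fail the operation instead of blocking the executor.
      System.err.println("Timed out trying to reach docker daemon");
    }
  }
}
```

A timed-out call surfaces as UncheckedTimeoutException, which lines up with the new catch blocks in the diff hunks below.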

@jhaber
Member

jhaber commented Nov 23, 2015

Would you still have a stuck thread? If so, doesn't this just hide a resource leak and eventually you could run out of threads or HTTP connections?

@ssalinas
Member Author

From my testing while things were in the stuck state, interrupting the thread / killing the process still stops it correctly. Maybe "stuck" was a bad word; the docker daemon call just never returns and never times out.

@jhaber
Member

jhaber commented Nov 23, 2015

Interesting. Maybe we should open a PR against the spotify docker client if it's not respecting timeouts.

@ssalinas
Member Author

The spotify docker client is just using the Apache HTTP client underneath, with default timeouts for connect and read set to 5 and 30 seconds. I think the current bug with the docker daemon is that it keeps the connection open in such a way that those timeouts do not get hit, hence the use of TimeLimiter here instead (docker CLI calls and calls to the docker daemon from any source/client hang as well).

The bug is a rare case, and the purpose of the PR is mostly to keep our executor from being pinned by it. If we can't launch something because docker is in a bad state, call it failed and move on.
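As a concrete illustration of those HTTP-level timeouts, here is a hedged sketch; the connectTimeoutMillis/readTimeoutMillis builder methods are an assumption about the spotify docker-client API of that era. The point is that they only guard connect and read inactivity, so a daemon that holds the connection open without answering can still hang the caller, which is why the whole call gets wrapped in a TimeLimiter as well.

```java
import java.util.concurrent.TimeUnit;

import com.spotify.docker.client.DefaultDockerClient;
import com.spotify.docker.client.DockerClient;

public class DockerClientTimeoutsExample {
  // Roughly the defaults described above: connect = 5s, read = 30s.
  public static DockerClient build() throws Exception {
    return DefaultDockerClient.fromEnv()
        .connectTimeoutMillis(TimeUnit.SECONDS.toMillis(5))
        .readTimeoutMillis(TimeUnit.SECONDS.toMillis(30))
        .build();
  }
}
```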

@@ -124,6 +125,8 @@ private int getNumUsedThreads(SingularityExecutorTaskProcessCallable taskProcess
}
} catch (DockerException e) {
throw new ProcessFailedException(String.format("Could not get docker root pid due to error: %s", e));
} catch (UncheckedTimeoutException te) {
throw new ProcessFailedException("Timed out trying to reach docker daemon");
Contributor

could we include the timeout duration here? (i.e. Timed out trying to reach docker daemon after N seconds)

Member Author

Updated
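The updated catch block presumably ends up along the lines of the fragment below, mirroring the diff hunk above; the timeout variable name is hypothetical.

```java
} catch (UncheckedTimeoutException te) {
  throw new ProcessFailedException(
      String.format("Timed out trying to reach docker daemon after %s seconds", dockerClientTimeLimitSeconds));
}
```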

@tpetr added this to the 0.4.8 milestone Dec 31, 2015
@@ -125,6 +126,8 @@ private boolean cleanDocker() {
} catch (ContainerNotFoundException e) {
Contributor

You should use {} placeholders instead of String.format here.

Member Author

@wsorenson updated
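For context, the review comment refers to SLF4J-style parameterized logging. A small self-contained illustration follows; the logger and message here are illustrative, not the actual SingularityExecutor code.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingStyleExample {
  private static final Logger LOG = LoggerFactory.getLogger(LoggingStyleExample.class);

  // Prefer SLF4J's {} placeholders over String.format: the message is only built
  // if the log level is enabled, and the argument is substituted lazily.
  void logMissingContainer(String containerName) {
    LOG.info("Could not find container {}", containerName);                 // preferred
    LOG.info(String.format("Could not find container %s", containerName));  // what the reviewer flagged
  }
}
```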

wsorenson added a commit that referenced this pull request Jan 4, 2016
time-box the docker client to avoid ever getting stuck
@wsorenson merged commit 9339962 into master Jan 4, 2016
@tpetr removed the hs_qa label Jan 4, 2016
@ssalinas deleted the time_limited_docker branch February 9, 2016 20:06
4 participants