
Connection pool is shut down #446

Closed
vania-pooh opened this issue May 31, 2016 · 11 comments · Fixed by #744

Comments

@vania-pooh commented May 31, 2016

Description

Getting a "Connection pool is shut down" error after some time. It seems to be a Jersey issue.

How to reproduce

We have a daemon process that uses the client like this:

    private void request() {
        try (DockerClient dockerClient = getDockerClient()) {
            // calls against dockerClient go here
        } catch (Exception e) {
            // Log exception
        }
    }

    private static DockerClient getDockerClient() {
        final long TIMEOUT = 5 * 60 * 1000;
        return DefaultDockerClient.builder()
                .uri(URI.create("http://url/"))
                .readTimeoutMillis(TIMEOUT)
                .build();
    }

What do you expect

Client should continue to work.

What happened instead

Client connection pool is shut down.

Software:

  • docker version: 1.11
  • docker-client version: 3.6.8

Full backtrace

com.spotify.docker.client.DockerException: java.util.concurrent.ExecutionException: javax.ws.rs.ProcessingException: java.lang.IllegalStateException: Connection pool shut down
        at com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:1488)
        at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1441)
        at com.spotify.docker.client.DefaultDockerClient.removeContainer(DefaultDockerClient.java:661)
        at <some our classes>
Caused by: java.util.concurrent.ExecutionException: javax.ws.rs.ProcessingException: java.lang.IllegalStateException: Connection pool shut down
        at jersey.repackaged.com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
        at jersey.repackaged.com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
        at jersey.repackaged.com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
        at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1439)
        ... 11 more
Caused by: javax.ws.rs.ProcessingException: java.lang.IllegalStateException: Connection pool shut down
        at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:481)
        at org.glassfish.jersey.apache.connector.ApacheConnector$1.run(ApacheConnector.java:491)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at jersey.repackaged.com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:299)
        at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
        at jersey.repackaged.com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:50)
        at jersey.repackaged.com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:37)
        at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:487)
        at org.glassfish.jersey.client.ClientRuntime$2.run(ClientRuntime.java:177)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
        at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:340)
        at org.glassfish.jersey.client.ClientRuntime$3.run(ClientRuntime.java:209)
        ... 5 more
Caused by: java.lang.IllegalStateException: Connection pool shut down
        at org.apache.http.util.Asserts.check(Asserts.java:34)
        at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:184)
        at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection(PoolingHttpClientConnectionManager.java:251)
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:175)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
        at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:71)
        at org.glassfish.jersey.apache.connector.ApacheConnector.apply(ApacheConnector.java:435)
        ... 21 more
@mattnworb (Member)

Is the stacktrace originating from code in the try block?

@vania-pooh (Author) commented May 31, 2016

@mattnworb yes. We have a set of private methods called inside this try-with-resources block, like the following:

    private void request() {
        try (DockerClient dockerClient = getDockerClient()) {
            methodOne(dockerClient);
            methodTwo(dockerClient);
            methodThree(dockerClient);
        } catch (Exception e) {
            // Log exception
        }
    }

As far as I understand, we're using the default connection pool size of 100. Can I somehow enable debug logging so I can include the client output here?

@mattnworb (Member)

Is it possible that any of the methods are storing a reference to the DockerClient and calling methods on it from another thread after the initial thread has moved past the try-with-resources block? From a quick glance, Apache HttpClient only throws those exceptions after someone has called .close() on the HttpClient, which DefaultDockerClient does in its close() method (through the Jersey layer).

docker-client logs via slf4j, so you can see more info by configuring it, although there isn't a ton of logging. Apache HttpClient uses JCL, which you'll have to configure as well.
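
To illustrate the first point: a rough sketch of the failure mode being asked about, in which a task keeps a reference to the client and uses it after the try-with-resources block has already closed it (the executor and the listContainers call are illustrative, not taken from the reporter's code):

    private void request() {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try (DockerClient dockerClient = getDockerClient()) {
            // The submitted task captures dockerClient, but may not run
            // until after the try block has already exited.
            executor.submit(() -> {
                try {
                    dockerClient.listContainers();
                } catch (Exception e) {
                    // Once close() has run, this fails with
                    // "IllegalStateException: Connection pool shut down"
                }
            });
        } catch (Exception e) {
            // Log exception
        }
        // The try block exits, DockerClient.close() shuts down the underlying
        // connection pool, and the submitted task may still be pending.
    }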

@vania-pooh (Author)

@mattnworb we're using executors inside these methods. We'll try to use one client per thread to rule this out.

@mattnworb (Member)

@vania-pooh a quick way to verify whether the problem is indeed that the main thread is exiting the try block and closing the DefaultDockerClient before the other threads are done would be to add some logging as the last line of the try block.

Assuming this is the case, you will want to add some sort of synchronization between the executors and this main thread, so that the DockerClient is not closed until all of the work in your app is done.
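
A minimal sketch of that kind of synchronization, assuming the work is handed to an ExecutorService (illustrative only; in the reporter's code the executors live inside the private methods): the executor is shut down and awaited inside the try block, so DockerClient.close() only runs once all submitted work has finished.

    private void request() {
        try (DockerClient dockerClient = getDockerClient()) {
            ExecutorService executor = Executors.newFixedThreadPool(4);
            executor.submit(() -> {
                try {
                    methodOne(dockerClient);
                } catch (Exception e) {
                    // Log exception
                }
            });
            // ... submit the rest of the work ...

            // Wait for everything submitted before leaving the try block, so
            // that close() cannot run while tasks are still using the client.
            executor.shutdown();
            if (!executor.awaitTermination(10, TimeUnit.MINUTES)) {
                executor.shutdownNow();
            }
        } catch (Exception e) {
            // Log exception
        }
    }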

@vania-pooh (Author)

@mattnworb we checked this possibility and can confirm that we call shutdown() and awaitTermination() on the executor service inside the try block.

@mattnworb (Member)

Is it possible to produce a code sample that shows this in action, and/or steps to reproduce? I am not sure how to reproduce it on my own.

Have you tried to set a breakpoint on DefaultDockerClient.close() to see when it is being called and by whom?

@giftig commented Mar 1, 2017

I've just had this issue in my unit tests when running two suites in parallel: it seems two distinct docker clients share a Jersey pool, so if I have one client in each test suite and close one of them at the end of its suite, it shuts down the underlying pool and breaks the other client.
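
For illustration, the scenario being described is roughly the following (a hypothetical sketch; whether this minimal form actually triggers the shared-pool behavior depends on the docker-client and Jersey versions in use):

    private void twoClients() throws Exception {
        DockerClient clientA = DefaultDockerClient.builder()
                .uri(URI.create("http://url/"))
                .build();
        DockerClient clientB = DefaultDockerClient.builder()
                .uri(URI.create("http://url/"))
                .build();

        clientA.close();          // suite A finishes and closes its client
        clientB.listContainers(); // suite B's client then fails with
                                  // "Connection pool shut down"
    }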

@mattnworb (Member)

@giftig are the two suites running in parallel in two separate JVM instances? Is it possible to create a small project, using the same version of Jersey etc., that reproduces this problem?

@giftig commented Mar 1, 2017

It's a single JVM. I'll see if I can reproduce it with a small test project soon; sorry, I don't have a lot of time right now.

@walteryoung

@mattnworb I've been doing load testing on a service that relies on docker-client. This issue has been popping up a lot after running the service for a short period of time and accumulating around 50 containers. Unfortunately, most of my logging information includes confidential IP, so I can't upload it without a lot of editing.

Has there been any movement on understanding or solving this issue? Is there any information I could contribute to help?

Thanks
