
Multi-threaded access to the connection pool problems #811

Closed

dealy663-zz opened this issue Jan 18, 2017 · 11 comments

Comments

@dealy663-zz

Hi Brett

I've been using HikariCP for a few months now, but recently have made my application pretty heavily multi-threaded. I've been having trouble with my connections getting closed when I don't expect them to be closed. I've been debugging this for a couple of days, and am left assuming that either there is some misunderstanding on my part of how this should work, or that something is wrong in the connection pool.

Basically my app receives numerous requests and each is handled on its own thread. I made each thread start off by getting a new connection and then calling the functions which interact with the DB. These functions can query and update the DB from this connection, and some operations may create a transaction. After the work is done I expect the connection to be released back to the pool.

When I started I had max active connections set to 32 and it generally worked, but occasionally saw some errors where it ran out of connections. While debugging this I kept reducing the max connections in order to make the problem happen more quickly. At this point I'm down to a max of 2. It appears that either one of my worker threads is getting a connection that is already closed, or that the connection is being closed before the worker has finished (which might mean two workers were handed references to the same connection).

My application can theoretically spawn hundreds of worker threads. I was expecting that each worker would block if there are no idle connections in the pool, and once one became available the next worker would awaken and perform its tasks. When the app first starts it usually spawns about 10 workers immediately, within a second or so. In the current scenario the workers complete 2 or 3 tasks before the SQLException "connection is closed" error appears.

Have I made some mistake in what I can expect from HikariCP regarding asking for multiple connections simultaneously? Or when they'll be closed, or how my threads will block when there are no idle connections available?

Thanks, Derek

@billoneil

billoneil commented Jan 19, 2017

Your assumptions sound correct. If you had a leak you would be running out of connections. Since you are seeing some connections close early, it sounds like connections are being shared across threads unintentionally. Do you have debug logging turned on? Are you using anything like CompletableFuture, or any libraries that might be using threads behind the scenes? If I'm not mistaken, some of the CompletableFuture APIs actually kick work back to a thread pool under the hood, which may cause issues with connections.

If there is any sample code you can share or a very minimal example that reproduces the issue it might be easier to track down.

@brettwooldridge
Owner

Multithreaded access to Connections was deprecated in JDBC and is not supported by HikariCP either. It definitely sounds like multiple threads are interacting with a single Connection instance at once.

HikariCP is fast enough that you can obtain a Connection, execute SQL, and then return it back to the pool many times in the course of a request.

It is a Best Practice to only hold Connections in local variables, preferably in a try-with-resources block, or possibly passed on the stack, but never in a class member field. If you follow that pattern it is virtually impossible to leak a Connection or accidentally share across threads.
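The pattern described above can be sketched as follows. This is a minimal, self-contained illustration using a stand-in `AutoCloseable` in place of a real pooled JDBC `Connection` (the `PooledConnection` stub and `handleRequest` method are hypothetical names, not HikariCP API); the point is that try-with-resources guarantees `close()` runs on every exit path, returning the connection to the pool:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class TryWithResourcesSketch {
    // Stand-in for a pooled JDBC Connection: in HikariCP, close() on a
    // pooled connection returns it to the pool rather than closing the socket.
    static class PooledConnection implements AutoCloseable {
        final AtomicBoolean returnedToPool = new AtomicBoolean(false);
        @Override public void close() { returnedToPool.set(true); }
    }

    // Kept only so the sketch can be inspected after the fact; real code
    // would NOT hold a connection in a field like this.
    static PooledConnection lastConnection;

    // The recommended shape: the connection lives only in a local variable
    // inside try-with-resources, so it is always returned, even on exceptions.
    static void handleRequest() {
        try (PooledConnection conn = new PooledConnection()) {
            lastConnection = conn;
            // execute SQL with conn here; never stash it in a member field
            // or hand it to another thread
        } // conn.close() runs here, returning it to the pool
    }

    public static void main(String[] args) {
        handleRequest();
        System.out.println("returned to pool: " + lastConnection.returnedToPool.get());
    }
}
```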

Not that it should matter, but I'm curious, are you running HikariCP as a fixed size pool, or are you specifying minimumIdle?

@dealy663-zz
Author

I didn't include any source code because it's all written in Clojure and I didn't expect anyone to be able to deal with that.

There are a couple of libraries involved in the connection process and one of them may be allocating extra connections that I'm not aware of. And if that is happening while one of my threads is executing a task then that would explain why I'm running out of connections and all of my tasks are stopping. I don't have Hikari debug logging turned on. I guess I need to figure out how to do that.

Brett, I'm not intentionally trying to share connections among different threads. I've always thought that was a bad idea. However, I am requesting new connections on multiple threads; hopefully there is no problem with doing that. I guess that is the main thing I'm checking with you on.

If my connections are being shared among different threads, then I have no idea how that is happening. It's been a frustrating few days trying to get a handle on where things are going wrong.

@billoneil

The best way I can think to debug this is using a logging library that logs the thread name. Keep the thread pool with a higher number of threads and sprinkle log statements throughout the logic. Try to run one operation at a time (possibly a few) and see if the loggers log the same thread name consistently or if the thread names change between queries. A bit primitive but might help.

If you are composing / mapping any Future type objects make sure that code executes on the same thread and doesn't return a new Future that could execute on a different thread. That could cause this issue.
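The thread-hopping behavior described above can be demonstrated with plain java.util.concurrent, no database required. In this sketch (class and executor names are illustrative), the first stage runs on a named executor, but `thenApplyAsync` without an explicit executor hops to the common ForkJoinPool, so anything thread-bound, like a checked-out connection, would now be touched from a different thread:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadHopDemo {
    // Returns the thread names observed by two stages of a CompletableFuture chain.
    static String[] observeThreads() throws Exception {
        ExecutorService requestPool = Executors.newSingleThreadExecutor(
                r -> new Thread(r, "request-worker"));
        try {
            // Stage 1 runs on our named executor.
            CompletableFuture<String> stage1 = CompletableFuture
                    .supplyAsync(() -> Thread.currentThread().getName(), requestPool);
            // Stage 2: thenApplyAsync with no executor argument schedules the
            // function on the default async pool, NOT on "request-worker".
            CompletableFuture<String> stage2 = stage1
                    .thenApplyAsync(ignored -> Thread.currentThread().getName());
            return new String[] { stage1.join(), stage2.join() };
        } finally {
            requestPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        String[] names = observeThreads();
        System.out.println("stage 1 ran on: " + names[0]);
        System.out.println("stage 2 ran on: " + names[1]);
    }
}
```

If the whole chain must stay on one thread, pass the same executor to every `*Async` stage (e.g. `thenApplyAsync(fn, requestPool)`), or do the work synchronously inside a single stage.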

@brettwooldridge
Owner

@dealy663 HikariCP's DataSource is threadsafe with respect to getConnection() (among other methods).

@dealy663-zz
Author

Thanks for the tips, guys. I think I've got everything working now.

@SerhiiKozachenko

@dealy663 what was the issue? I think I may be experiencing the same issue now. I am running a Jetty server with max 50 threads (the default), and after a few hours I got spammed with errors like:

clojure.java.jdbc/query (jdbc.clj:958)
clojure.java.jdbc/db-query-with-resultset (jdbc.clj:919)
clojure.java.jdbc/prepare-statement (jdbc.clj:542)
com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement (HikariProxyConnection.java)
com.zaxxer.hikari.pool.ProxyConnection.prepareStatement (ProxyConnection.java:310)
com.sun.proxy.$Proxy2.prepareStatement
com.zaxxer.hikari.pool.ProxyConnection$ClosedConnection$1.invoke (ProxyConnection.java:469)
java.sql.SQLException: Connection is closed
    errorCode: 0

@brettwooldridge
Owner

It sounds like you are holding a connection out of the pool for a long time. Turn on DEBUG logging and enable the leakDetectionThreshold.
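A minimal configuration sketch for the suggestion above, using HikariCP's `HikariConfig` (the JDBC URL and threshold value here are illustrative, not taken from this thread):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Sketch: enable leak detection so HikariCP logs a warning with a stack
// trace for any connection held out of the pool longer than the threshold.
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://localhost/mydb"); // hypothetical URL
config.setMaximumPoolSize(10);
config.setLeakDetectionThreshold(10_000);              // milliseconds
HikariDataSource ds = new HikariDataSource(config);
```

DEBUG logging itself is enabled in your logging framework's configuration (e.g. setting the `com.zaxxer.hikari` logger to DEBUG), not in HikariCP.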

@SerhiiKozachenko

@brettwooldridge Thanks. Just FYI, I was able to fix this by using the with-db-connection wrapper:

(jdbc/with-db-connection [conn {:datasource datasource}]
    (let [rows (jdbc/query conn "SELECT 0")]
      (println rows)))

@sw-enthusiast

@dealy663 could you please post your answer on how you handled your issue?

@pufik

pufik commented Dec 9, 2021

@brettwooldridge Is it possible to make sure that HikariCP will return the same Connection instance for the same thread? For example:

Connection a = datasource.getConnection();
Connection b = datasource.getConnection();

assert a != b : "Connection is not equal";

Thanks in advance!
