JDBC connection leak on JDBC pool when using Oracle AQ in combination with DefaultMessageListenerContainer [DATAJDBC-8] #265
Comments
Koen Serneels commented Can someone change the priority to high, please?
Koen Serneels commented Remark: my explanation about the CloseSuppressingConnectionProxy is not correct, of course. So in this case (no transaction) it seems OK that the proxy is not applied; it also seems correct that the close is propagated to the underlying connection.
Thomas Risberg commented Not having seen this run, I'm guessing that what happens is that the "refreshConnectionUntilSuccessful" method attempts to open another connection, but this throws an exception, so the JmsUtils.closeConnection(con) call is not executed for this connection, and after the logging is done another connection attempt is made. It's possible that the Oracle JDBC connection is still open at this point, which would in that case lead to an exhausted pool eventually. If you are still able to reproduce this, could you enable debugging for the DMLC and post the debug logs? Thanks.
Koen Serneels commented Yes, this is still an issue. We had a configuration issue (a queue did not exist). Because of this, reading a message from the queue failed. Now, when 'refreshConnectionUntilSuccessful' is called, it tries to perform a basic check to find out if the connection to the underlying JMS provider is OK. Everything is OK at this point; the connection is obtained without issues. Next, it immediately closes the obtained connection. So the 'close' that JmsUtils issues is called on the physical database connection. One solution would be to create a connection proxy that is applied when OUTSIDE a transaction.
Thomas Risberg commented Thanks for the feedback. I'm setting up a test to reproduce this - could you post your connection configuration? Thanks. P.S. Fixed your edit problem with the strikethrough :)
Koen Serneels commented OK, thanks.
<bean id="oracleNativeJdbcExtractor" class="org.springframework.jdbc.support.nativejdbc.SimpleNativeJdbcExtractor"/>
<bean id="sessionFactory" .......
When running the test you should see the number of connections in use rising on the pool; however, if you check the real active connections on Oracle (querying v$session or something similar), you'll see that it remains steady.
Thomas Risberg commented I haven't been able to duplicate this quite yet. I noticed you are using a DBCP pool. Could you try using an OracleDataSource pool instead?
Also, can you either post or email me your configuration for the messageListenerContainer? My email is trisberg AT vmware.com. Thanks.
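For reference, a rough sketch of what using Oracle's own pooling DataSource instead of DBCP could look like; this is an assumption-laden illustration (the URL, user and password are placeholders, and the class name OraclePoolExample is made up), not configuration taken from this issue:

[code]
import java.sql.Connection;
import java.sql.SQLException;

import oracle.jdbc.pool.OracleDataSource;

// Rough sketch only: replaces the DBCP pool with Oracle's own pooling DataSource.
// URL and credentials below are placeholders, not values from this issue.
public class OraclePoolExample {

    public static void main(String[] args) throws SQLException {
        OracleDataSource ds = new OracleDataSource();
        ds.setURL("jdbc:oracle:thin:@localhost:1521:XE"); // placeholder connection URL
        ds.setUser("scott");                              // placeholder credentials
        ds.setPassword("tiger");
        ds.setConnectionCachingEnabled(true);             // use the driver's implicit connection cache as the pool

        // Hand 'ds' to the AQ JMS connection factory setup instead of the DBCP DataSource.
        Connection con = ds.getConnection();
        con.close(); // returns the connection to the cache rather than physically disconnecting
    }
}
[/code]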
Thomas Risberg commented I was able to configure my test to reproduce this. You are absolutely correct: we need to provide a proxy that will propagate the close call to the pool instead of using the unwrapped raw connection. Using the Oracle pool implementation bypasses this issue since we don't have to unwrap the pooled connection to use it with the AQ JMS connection factory. -Thomas
Thomas Risberg commented This is now fixed by wrapping any unwrapped native connections used outside of an existing Tx in a proxy that delegates close calls to the original pool connection. Thanks for reporting this issue. -Thomas
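For illustration, a minimal sketch of what such a close-delegating proxy could look like. This is not the code from the referenced commit; the class name and structure are made up, only the idea (delegate close() to the pool connection, everything else to the unwrapped native connection) comes from the comment above:

[code]
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;

// Hypothetical illustration of the fix described above: close() is delegated to the
// original pool connection, all other calls go to the unwrapped native connection.
public final class CloseDelegatingConnectionProxy {

    public static Connection wrap(final Connection nativeConnection, final Connection poolConnection) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                if ("close".equals(method.getName())) {
                    // return the connection to the pool instead of physically disconnecting
                    poolConnection.close();
                    return null;
                }
                try {
                    // AQ JMS works against the unwrapped native connection for everything else
                    return method.invoke(nativeConnection, args);
                }
                catch (InvocationTargetException ex) {
                    throw ex.getTargetException();
                }
            }
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(), new Class<?>[] {Connection.class}, handler);
    }
}
[/code]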
Koen Serneels opened DATAJDBC-8 and commented
We are using the Oracle AQ support from spring-data to have both JMS and JDBC over the same datasource, with local transactions instead of XA.
The big picture of our setup is basically what is described in the reference manual:
Everything was working, but after a couple of minutes we got indications that the JDBC connection pool was exhausted and everything got stuck. In the JDBC connection pool monitor, we could see that all connections were in use: so something was clearly leaking connections. Since we were using the internal GlassFish JDBC pool, we tried replacing it with Commons DBCP, connecting directly from Spring to the database and bypassing any GlassFish pool service. This did not resolve the problem: we got the exact same issue, an exhausted pool after some minutes.
Through further investigation we learned that it was not a normal leak, since the number of physical connections to the database remained steady. So even if the container/DBCP showed us that 54 connections (of, for instance, a maximum of 100) were in use, we only saw, for instance, 6 connections to the database at a given time (6 x DefaultMessageListenerContainer on empty queues, just 1 thread checking for messages). Even more interesting: when the pool got exhausted and everything got stuck, we (immediately after) saw NO more open connections to the database.
So, this told us that there was no 'physical' leak, as the connections were all managed/closed properly.
It seemed that the pool was never informed that a connection was closed, and so never marked the connection as 'released' (although the connection did physically close). This seemed weird, as the transaction manager was doing its work correctly: we saw connections being 'returned' to the pool at the end of the transaction.
After some more searching, we found out that only some of the DefaultMessageListenerContainers were leaking (the others all returned their connections at TX commit). The difference was that the leaking containers were 'in error': their corresponding queue was not defined within Oracle. It is in these error scenarios that the following connection-leaking scenario is triggered in DefaultMessageListenerContainer:
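For context, the recovery logic in DefaultMessageListenerContainer.refreshConnectionUntilSuccessful() looks roughly like this; a paraphrased sketch, not the verbatim Spring source, with the L859/L860 markers referring to the actual source file discussed below:

[code]
// Paraphrased sketch of DefaultMessageListenerContainer.refreshConnectionUntilSuccessful()
// (not the verbatim Spring source):
protected void refreshConnectionUntilSuccessful() {
    while (isRunning()) {
        try {
            // ~L859: obtain a javax.jms.Connection from the configured ConnectionFactory
            // (here the Oracle AQjmsConnectionFactory) purely as an "is the provider reachable" check
            javax.jms.Connection con = getConnectionFactory().createConnection();
            // ~L860: immediately close it again
            JmsUtils.closeConnection(con);
            logger.info("Successfully refreshed JMS Connection");
            break;
        }
        catch (Exception ex) {
            // log the failure and retry after the recovery interval
        }
    }
}
[/code]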
Now, here it becomes interesting. At L859 the javax.jms.Connection is obtained through the Oracle AQjmsConnectionFactory.
The AQjmsConnectionFactory was created by AqJmsFactoryBeanFactory, passing in our configured DataSource, but wrapped in an AqJmsFactoryBeanFactory$TransactionAwareDataSource. So, in the end AQjmsConnectionFactory internally obtains a new java.sql.Connection from AqJmsFactoryBeanFactory$TransactionAwareDataSource (delegating to the JDBC connection pool). However, at L146 of AqJmsFactoryBeanFactory$TransactionAwareDataSource you see this:
[code]
if (TransactionSynchronizationManager.isActualTransactionActive()) {
    if (logger.isDebugEnabled()) {
        logger.debug("Using Proxied JDBC Connection [" + conToUse + "]");
    }
    return getCloseSuppressingConnectionProxy(conToUse);
}
[/code]
The "if" is in this case NOT executed and the connection obtained thus NOT proxied, since there was NO transaction started in refreshConnectionUntilSuccessful (and there was also none active). So, when at L860 of refreshConnectionUntilSuccessful the close() is propagated, it is executed on the RAW oracle connection (T4CConnection.logOff()). Making the connection physically disconnect.
However, [incorrect - see comment]since it was not proxied,[/incorrect] there was no call to the pool indicating that the connection was released! This puts the pool in an artificial state, thinking its connections are in use somewhere while they are not.
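To make the distinction concrete, a minimal illustration of the difference between closing the unwrapped native connection and closing the pooled wrapper; the class, method and variable names are made up, only the DataSource and NativeJdbcExtractor types come from the configuration above:

[code]
import java.sql.Connection;
import java.sql.SQLException;

import javax.sql.DataSource;

import org.springframework.jdbc.support.nativejdbc.NativeJdbcExtractor;

// Illustration only (names are hypothetical): shows why closing the unwrapped
// native connection leaks from the pool's point of view.
public class RawCloseLeakIllustration {

    public static void illustrate(DataSource dataSource, NativeJdbcExtractor extractor) throws SQLException {
        Connection pooled = dataSource.getConnection();          // DBCP wrapper, now counted as "in use"
        Connection raw = extractor.getNativeConnection(pooled);  // underlying Oracle connection needed by AQ JMS

        raw.close();     // physical logoff; the pool is never told and keeps counting the connection as in use

        pooled.close();  // this is the call that actually returns the connection to the pool,
                         // and it is what the missing proxy should translate close() into
    }
}
[/code]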
Our temporary fix will be to customize DefaultMessageListenerContainer by overriding refreshConnectionUntilSuccessful() to do nothing.
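A minimal sketch of that workaround, assuming a hypothetical subclass name:

[code]
import org.springframework.jms.listener.DefaultMessageListenerContainer;

// Hypothetical subclass illustrating the temporary workaround described above:
// disable the recovery check so close() is never called on the unwrapped connection.
public class NoRefreshMessageListenerContainer extends DefaultMessageListenerContainer {

    @Override
    protected void refreshConnectionUntilSuccessful() {
        // intentionally do nothing: avoids the leaking close() on the raw connection,
        // at the cost of skipping the connection recovery check
    }
}
[/code]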
But I suspect a better/cleaner fix is possible. Can this be looked at?
Affects: Ext 1.0 M1
Referenced from: commits spring-attic/spring-data-jdbc-ext@1fc1c7a
1 vote, 2 watchers