
Add maxTtl to pool objects #96

Merged (7 commits) on Feb 21, 2019
Conversation

akleiman
Contributor

Hi there. We talked briefly yesterday (21/02) about some things I'd like to see in jasync. I had mentioned creating a PR to allow user-defined properties for postgres, and had mentioned the need to have a maxTtl for the pool. Shortly after we talked we hit a serious problem in production and maxTtl became much more urgent, so I prepared this PR. I'm willing to discuss and change any aspect of it, of course.

It was straightforward to implement and test. I chose to have two checks: one in sendAvailableItemsToTest, and one in validate (so it gets triggered on GiveBack and Take).

The one thing I wasn't entirely sure about was default values. I chose to treat negative values of maxTtl as having no TTL at all, matching the previous behavior of the library, and to default to -1 when not specified, so as not to produce surprising results on a version upgrade.
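The default-value semantics described above can be sketched as follows. This is an illustration only, with a hypothetical helper name (isAgedOut), not jasync's actual code: a non-positive maxTtl (including the -1 default) disables the age check entirely, preserving the library's previous behavior.

```kotlin
// Hypothetical helper: decides whether a pooled object has outlived its TTL.
// maxTtlMillis <= 0 means "no TTL", matching the pre-change behavior.
fun isAgedOut(
    creationTimeMillis: Long,
    maxTtlMillis: Long,
    nowMillis: Long = System.currentTimeMillis()
): Boolean {
    if (maxTtlMillis <= 0) return false // TTL disabled (the -1 default)
    return (nowMillis - creationTimeMillis) > maxTtlMillis
}

fun main() {
    // TTL disabled: a very old object is still considered valid.
    check(!isAgedOut(creationTimeMillis = 0L, maxTtlMillis = -1L, nowMillis = 100_000L))
    // 60s-old object with a 30s TTL is aged out.
    check(isAgedOut(creationTimeMillis = 0L, maxTtlMillis = 30_000L, nowMillis = 60_000L))
    // 10s-old object with a 30s TTL is still fresh.
    check(!isAgedOut(creationTimeMillis = 50_000L, maxTtlMillis = 30_000L, nowMillis = 60_000L))
    println("ok")
}
```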

@codecov

codecov bot commented Feb 21, 2019

Codecov Report

Merging #96 into master will decrease coverage by 0.03%.
The diff coverage is 71.42%.


@@             Coverage Diff              @@
##             master      #96      +/-   ##
============================================
- Coverage     77.74%   77.71%   -0.04%     
- Complexity      990      993       +3     
============================================
  Files           257      258       +1     
  Lines          3640     3653      +13     
  Branches        484      487       +3     
============================================
+ Hits           2830     2839       +9     
- Misses          571      573       +2     
- Partials        239      241       +2
Impacted Files Coverage Δ Complexity Δ
...com/github/jasync/sql/db/pool/PoolConfiguration.kt 100% <100%> (ø) 11 <1> (+1) ⬆️
...github/jasync/sql/db/pool/MaxTtlPassedException.kt 100% <100%> (ø) 1 <1> (?)
...com/github/jasync/sql/db/ConcreteConnectionBase.kt 100% <100%> (ø) 7 <1> (ø) ⬇️
...ithub/jasync/sql/db/ConnectionPoolConfiguration.kt 64.76% <40%> (-1.24%) 19 <0> (+1)
.../github/jasync/sql/db/pool/ActorBasedObjectPool.kt 85.32% <83.33%> (-0.05%) 18 <0> (ø)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update eef011d...5590b50.

@andy-yx-chen
Contributor

Having a maxTtl does not make sense to me; all you want to do is prevent a bad connection from being returned from the pool. One simple way to solve that is a ping. The ping query could be configurable, or default to something like "select 1 as alive". If the ping fails, drop the connection from the pool. That would be better; cutting connections off at a hard deadline is not a good idea.
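The ping-based validation suggested here can be sketched roughly as below. This is illustrative only: the Connection interface is a stand-in, not jasync's real API, and a real async pool would return a future rather than block.

```kotlin
// Stand-in connection abstraction (not jasync's actual interface).
interface Connection {
    fun execute(query: String)
}

// Ping the connection; any failure marks it as dead so the pool can drop it.
fun isAlive(conn: Connection, pingQuery: String = "SELECT 1 AS alive"): Boolean =
    try {
        conn.execute(pingQuery)
        true
    } catch (e: Exception) {
        false
    }

fun main() {
    val healthy = object : Connection {
        override fun execute(query: String) { /* succeeds */ }
    }
    val broken = object : Connection {
        override fun execute(query: String) { throw RuntimeException("connection reset") }
    }
    check(isAlive(healthy))
    check(!isAlive(broken))
    println("ok")
}
```

Note that, as the follow-up comment explains, a ping only catches bad connections; it does not force the pool to recycle healthy but long-lived ones, which is what maxTtl is for.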

@akleiman
Contributor Author

akleiman commented Feb 21, 2019

@andy-yx-chen It's not bad connections; we want to force the pool to recycle connections, even if they're good. Our use-case is: we have a query-heavy app and are running PG in an auto-scale group, behind a load-balancer. When the database CPU starts to go up, we spin up a new instance and want the LB to start sending traffic to the new instance. But under heavy load this doesn't happen, because even with an idle timeout, the connections are never idle. So we want the pool to let connections go when they're older than a certain age, so it can attempt to reconnect.

This is something we had when we were using JDBC with HikariCP but lost in the move to jasync.

@andy-yx-chen
Copy link
Contributor

@akleiman That sounds fair, then; I hadn't considered the load-balancer case.

Contributor

@oshai left a comment


Thanks for the PR! I added a couple of review comments and will merge it after fixes.

@@ -56,6 +57,7 @@ data class ConnectionPoolConfiguration @JvmOverloads constructor(
val username: String = "dbuser",
val password: String? = null,
val maxActiveConnections: Int = 1,
val maxTtl: Long = -1,
Contributor


I prefer calling it maxConnectionTtl. In addition, it is better to use a nullable type with null as the default to represent "no value". Please also add validation on allowed values (see below).

Contributor Author


I'll call it maxObjectTtl in PoolConfiguration and maxConnectionTtl in the ConnectionPoolConfiguration/Builder, ok?

Contributor


👍

@@ -496,6 +498,10 @@ private class ObjectPoolActor<T : PooledObject>(
when (tried) {
is Failure -> throw tried.exception
}
val age = System.currentTimeMillis() - a.creationTime
if (configuration.maxTtl > 0 && age > configuration.maxTtl) {
throw ObjectAgedOutException(a, age, configuration.maxTtl)
Contributor


I don't think we should throw an exception in case of ttl passed, just destroy the item on validation

Contributor Author

akleiman commented Feb 21, 2019


Should validate() call destroy() directly then?

Contributor Author


Calling item.destroy() directly, let me know if that's not what you had in mind.

Contributor


Looking at the code again, it seems you did the right thing. The only remaining comment is to call the exception MaxTtlPassedException.

Contributor Author

akleiman commented Feb 21, 2019


👍
Done, renamed, brought the exception back.

* Removed ObjectAgedOutException
* Calling destroy() directly from the validate method if the object has aged out
* Renamed properties to maxConnectionTtl in ConnectionPoolConfiguration and ConnectionPoolConfigurationBuilder, and to maxObjectTtl in PoolConfiguration
* Made the TTL property nullable (default null) and added non-negative validation
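The final shape agreed in the review can be sketched as below. The names follow the thread (maxObjectTtl in PoolConfiguration), but this is a simplified stand-in for the real configuration class, not the library's exact code: a nullable TTL where null means "no TTL", validated to be non-negative when present.

```kotlin
// Simplified stand-in for the pool configuration discussed in the review.
data class PoolConfigSketch(
    val maxObjectTtl: Long? = null, // milliseconds; null disables aging out
) {
    init {
        require(maxObjectTtl == null || maxObjectTtl >= 0) {
            "maxObjectTtl must be non-negative, got $maxObjectTtl"
        }
    }
}

fun main() {
    // Default: no TTL, matching the library's pre-change behavior.
    check(PoolConfigSketch().maxObjectTtl == null)
    // Recycle objects older than 30 seconds.
    check(PoolConfigSketch(maxObjectTtl = 30_000).maxObjectTtl == 30_000L)
    // Negative values are rejected outright instead of meaning "disabled".
    try {
        PoolConfigSketch(maxObjectTtl = -1)
        check(false) // unreachable
    } catch (e: IllegalArgumentException) {
        // expected
    }
    println("ok")
}
```

Compared with the original -1 sentinel, the nullable type makes "no TTL" explicit in the type system and frees the validation to reject all negative values.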
oshai merged commit 2a8463d into jasync-sql:master on Feb 21, 2019
@oshai
Contributor

oshai commented Feb 21, 2019

Thanks again for the PR! I will do some more cleanups before release.
One thing I also want to test is whether it throws an exception to the caller executing the query itself (which I suspect it does). If that is the case, I will change the code, because the cleanup should not affect the client using the connection.

@akleiman
Contributor Author

I ran some tests myself: if you set maxTtl very low (5-10 ms), you'll see exceptions, since the connection ages out almost as soon as you get it. With a more reasonable value (I went to 30 s), I saw no exceptions.

@oshai
Copy link
Contributor

oshai commented Feb 21, 2019

I am pretty sure you'll be able to see that on a busy pool in any configuration. I have some code changes ready; just fixing the tests now.
