
#1230 Disposing ConnectionPools that are not actively used under LRU policy. #1259

Closed
wants to merge 1 commit

Conversation


@jun-lee jun-lee commented Aug 11, 2020

Based on the issue (#1230), I'm thinking this could live in the reactor-netty application layer as well, instead of adding consumer logic to reactor-pool.
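The idea behind the PR can be sketched with a plain access-ordered `LinkedHashMap` whose `removeEldestEntry` hook disposes the least recently used pool. The class and interface names below are hypothetical stand-ins for illustration, not the actual reactor-netty types:

```java
import java.net.InetSocketAddress;
import java.util.LinkedHashMap;
import java.util.Map;

public class LruPoolMap {

    /** Stand-in for a reactor-netty connection pool; dispose() is all this sketch needs. */
    interface DisposablePool {
        void dispose();
    }

    /** Returns a map that evicts and disposes the least recently used pool beyond maxPools. */
    static Map<InetSocketAddress, DisposablePool> newLruMap(int maxPools) {
        // accessOrder = true moves an entry to the tail on every get()/put(),
        // so the eldest entry is always the least recently used pool.
        return new LinkedHashMap<InetSocketAddress, DisposablePool>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<InetSocketAddress, DisposablePool> eldest) {
                if (size() > maxPools) {
                    eldest.getValue().dispose(); // release the pool's resources first
                    return true;                 // then drop the entry (and its PoolKey)
                }
                return false;
            }
        };
    }

    public static void main(String[] args) {
        StringBuilder disposed = new StringBuilder();
        Map<InetSocketAddress, DisposablePool> pools = newLruMap(2);
        for (int port : new int[] {1, 2, 3}) {
            int p = port;
            pools.put(InetSocketAddress.createUnresolved("host" + p, p),
                      () -> disposed.append(p));
        }
        System.out.println(pools.size()); // 2: the third put evicted the eldest entry
        System.out.println(disposed);     // 1: only the first pool was disposed
    }
}
```
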

@pivotal-issuemaster

@jun-lee Please sign the Contributor License Agreement!


@pivotal-issuemaster

@jun-lee Thank you for signing the Contributor License Agreement!

@violetagg
Member

violetagg commented Aug 11, 2020

@jun-lee With issue #1230, do you want to remove the connections that reached their TTL within a connection pool, or to remove a whole connection pool from the map that holds all connection pools?

The feature request in Reactor Pool is about removing the connections that reached their TTL within a connection pool, while with this PR you remove a whole connection pool from the map that holds all connection pools.

How do you guarantee that the connection pool that is about to be destroyed has no active connections at that moment?

@jun-lee
Author

jun-lee commented Aug 11, 2020

@violetagg
Because I don't see how the resources referenced by a PoolKey can be garbage collected unless the entry is removed from the map after it has been created. Let's say a ConnectionPool holds 10 connections that have reached their TTL, so reactor-pool eventually evicts all the connections belonging to that pool; we are then left with an empty ConnectionPool in the channelPools map. However, the PoolKey and the cached resources in the ConnectionPool (see the screenshot I shared on the issue) would never be removed.

In my application, as long as the limit is set high enough, removing entries under an LRU policy is exactly the desired behavior, rather than keeping them around for a long time or running another heavy eviction pass that checks whether every connection in a pool has reached its TTL. I have also tested that a simultaneous new request, which establishes a new ConnectionPool while the old one is being destroyed, does not cause any problem.

@violetagg
Member

violetagg commented Aug 11, 2020

> @violetagg
> Because I don't see how the resources referenced by a PoolKey can be garbage collected unless the entry is removed from the map after it has been created. Let's say a ConnectionPool holds 10 connections that have reached their TTL, so reactor-pool eventually evicts all the connections belonging to that pool,

We still do not have this background eviction in Reactor Pool, so currently the connections are checked for TTL only when they are released/acquired.

> we are then left with an empty ConnectionPool in the channelPools map. However, the PoolKey and the cached resources in the ConnectionPool (see the screenshot I shared on the issue) would never be removed.

Let's say the connection pool is empty; then the cleanup of the map should not be based on the eldest entry but on InstrumentedPool.metrics().allocatedSize() == 0.
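That metrics-based cleanup could look like the sketch below. `InstrumentedPool` and its `metrics()` view come from reactor-pool; here they are modeled by minimal stub interfaces so the example is dependency-free, and only pools reporting zero allocated connections are removed:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IdlePoolCleanup {

    // Minimal stand-ins for reactor-pool's InstrumentedPool and PoolMetrics.
    interface PoolMetrics { int allocatedSize(); }
    interface InstrumentedPool { PoolMetrics metrics(); void dispose(); }

    /** Removes and disposes only the pools that currently hold no connections. */
    static <K> int removeEmptyPools(Map<K, InstrumentedPool> pools) {
        int removed = 0;
        for (Map.Entry<K, InstrumentedPool> e : pools.entrySet()) {
            if (e.getValue().metrics().allocatedSize() == 0) {
                e.getValue().dispose();
                // Two-argument remove: only removes if the mapping is unchanged,
                // which is safe while iterating a ConcurrentHashMap.
                pools.remove(e.getKey(), e.getValue());
                removed++;
            }
        }
        return removed;
    }

    static InstrumentedPool pool(int allocated) {
        return new InstrumentedPool() {
            public PoolMetrics metrics() { return () -> allocated; }
            public void dispose() { }
        };
    }

    public static void main(String[] args) {
        Map<String, InstrumentedPool> pools = new ConcurrentHashMap<>();
        pools.put("empty", pool(0));
        pools.put("busy", pool(3));
        System.out.println(removeEmptyPools(pools)); // 1
        System.out.println(pools.keySet());          // [busy]
    }
}
```
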

> In my application, as long as the limit is set high enough, removing entries under an LRU policy is exactly the desired behavior, rather than keeping them around for a long time or running another heavy eviction pass that checks whether every connection in a pool has reached its TTL. I have also tested that a simultaneous new request, which establishes a new ConnectionPool while the old one is being destroyed, does not cause any problem.

Let's say you destroy a connection pool that still has connections (active or idle). You are right that this will not cause problems, as the next request for a connection will create the pool again, BUT then you have to pay the price of establishing new connections to the remote host again.

What if your eldest connection pool is the most heavily used one at the moment?

@jun-lee
Author

jun-lee commented Aug 11, 2020

Yeah, this PR is not meant to be merged as-is. We would also need to hit the map with get() on the acquire operation, so that the most heavily used pool moves to the most recently accessed position. I just want to point out that the root cause in my application was the growing number of entries in the map that holds the connection pools. If there is any way to clean up a connection pool together with its PoolKey, which holds the InetAddress of the remote host, that would work for my application.

@violetagg
Member

@jun-lee So from what I understood, you connect to many different remote hosts, is that correct?
We provide the method reactor.netty.resources.ConnectionProvider#disposeWhen; will it help?

@jun-lee
Author

jun-lee commented Aug 11, 2020

Yes, that's exactly what I just found, and I can now handle the LRU from my application side. It works well. Thanks for the help.
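The application-side workaround can be sketched as follows: track remote addresses in an access-ordered LRU structure, and when an address falls out, fire a dispose hook. In a real application that hook would delegate to reactor-netty's `ConnectionProvider#disposeWhen` with the evicted address; the hook below is a plain `Consumer` so the sketch stays dependency-free, and the class name is a hypothetical example:

```java
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.Consumer;

public class AddressLruTracker {
    private final Map<SocketAddress, Boolean> recent;

    AddressLruTracker(int maxAddresses, Consumer<SocketAddress> onEvict) {
        // accessOrder = true: every touch() moves the address to the tail,
        // so the eldest entry is the least recently used remote host.
        this.recent = new LinkedHashMap<SocketAddress, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<SocketAddress, Boolean> eldest) {
                if (size() > maxAddresses) {
                    onEvict.accept(eldest.getKey()); // e.g. provider.disposeWhen(key)
                    return true;
                }
                return false;
            }
        };
    }

    /** Call this on every outgoing request for the given remote address. */
    synchronized void touch(SocketAddress address) {
        recent.put(address, Boolean.TRUE);
    }

    synchronized Set<SocketAddress> tracked() {
        return recent.keySet();
    }

    public static void main(String[] args) {
        StringBuilder evicted = new StringBuilder();
        AddressLruTracker tracker = new AddressLruTracker(2,
                addr -> evicted.append(((InetSocketAddress) addr).getHostString()));
        SocketAddress a = InetSocketAddress.createUnresolved("a.invalid", 80);
        SocketAddress b = InetSocketAddress.createUnresolved("b.invalid", 80);
        SocketAddress c = InetSocketAddress.createUnresolved("c.invalid", 80);
        tracker.touch(a);
        tracker.touch(b);
        tracker.touch(a); // refresh "a": "b" is now the least recently used
        tracker.touch(c); // overflows: "b" is evicted, not "a"
        System.out.println(tracker.tracked().size()); // 2
        System.out.println(evicted);                  // b.invalid
    }
}
```
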

@jun-lee jun-lee closed this Aug 11, 2020