This repository has been archived by the owner on Jun 29, 2023. It is now read-only.

There is no way to remove closed JedisPool from RedisSenderPoolProvider#standalonePools #246

Closed · herrlado opened this issue Jul 15, 2020 · 5 comments
Labels: type: bug (A general bug) · Milestone: 1.14.1

Comments

@herrlado

Make sure that:

  • You have read the contribution guidelines.
  • gelf-logstash-1.14.0
  • We reconfigure log4j settings at runtime with `new PropertyConfigurator().doConfigure(reader, LogManager.getLoggerRepository());`. The Commons pool gets closed in `biz.paluch.logging.gelf.intern.sender.GelfREDISSender#close`, but `RedisSenderPoolProvider` still holds the closed `JedisPool`. When the log4j properties are reconfigured again, `RedisSenderPoolProvider` delivers a closed `JedisPool`.

On closing the JedisPool, it must also be removed from RedisSenderPoolProvider.
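
To make the reproduction scenario concrete, here is a minimal sketch of reconfiguring log4j 1.x at runtime in the way the report describes. The appender class is the logstash-gelf log4j appender, but the concrete property names and the Redis URI are illustrative assumptions, not taken from the report:

```java
import java.io.Reader;
import java.io.StringReader;

import org.apache.log4j.LogManager;
import org.apache.log4j.PropertyConfigurator;

public class RuntimeReconfiguration {

    public static void main(String[] args) {
        // Illustrative log4j 1.x properties wiring the root logger to the logstash-gelf
        // log4j appender with a Redis target; property names and the Redis URI are
        // assumptions for this sketch.
        String properties = "log4j.rootLogger=INFO, gelf\n"
                + "log4j.appender.gelf=biz.paluch.logging.gelf.log4j.GelfLogAppender\n"
                + "log4j.appender.gelf.Host=redis://localhost:6379/0#logstash\n";

        // First configuration: the appender obtains a JedisPool via RedisSenderPoolProvider.
        Reader reader = new StringReader(properties);
        new PropertyConfigurator().doConfigure(reader, LogManager.getLoggerRepository());

        // Per the report, reconfiguring closes the previous appender; GelfREDISSender#close
        // destroys the shared pool, but RedisSenderPoolProvider keeps the closed JedisPool,
        // so the freshly configured appender is handed a pool that can no longer connect.
        Reader secondReader = new StringReader(properties);
        new PropertyConfigurator().doConfigure(secondReader, LogManager.getLoggerRepository());
    }
}
```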

@mp911de (Owner) commented Jul 21, 2020

Thanks for bringing up that issue. The problem here is that RedisSenderPoolProvider is a shared/static holder for various kinds of JedisPool instances. You can have multiple appenders that consume pools. If we close a pool because one appender gets closed, all other active appenders will see a closed JedisPool.

I think we need something like a refcounter to track usage before closing the actual pool. Instead of shutting down the pool right away, we'd rather decrement the refcounter, and once it reaches zero we can safely dispose the pool and remove it from RedisSenderPoolProvider.
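
A minimal sketch of that refcounting idea, using hypothetical class and method names (the actual logstash-gelf implementation differs): pools are held per Redis URI, acquiring increments a counter, and releasing decrements it, destroying and removing the pool only when the last user is gone.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

import redis.clients.jedis.JedisPool;

// Hypothetical holder illustrating the refcounting idea; names are not the project's API.
class RefCountedPoolHolder {

    private static class Entry {
        final JedisPool pool;
        int refCnt;

        Entry(JedisPool pool) {
            this.pool = pool;
        }
    }

    private final Map<String, Entry> pools = new HashMap<>();

    // Each appender acquires the pool for its Redis URI; the first caller creates it.
    public synchronized JedisPool acquire(URI redisUri) {
        Entry entry = pools.computeIfAbsent(redisUri.toString(),
                key -> new Entry(new JedisPool(redisUri)));
        entry.refCnt++;
        return entry.pool;
    }

    // Closing an appender releases its reference; the pool is destroyed and removed
    // only when the last reference is gone, so other appenders keep a usable pool.
    public synchronized void release(URI redisUri) {
        Entry entry = pools.get(redisUri.toString());
        if (entry == null) {
            return;
        }
        if (--entry.refCnt == 0) {
            pools.remove(redisUri.toString());
            entry.pool.destroy();
        }
    }
}
```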

mp911de added the type: bug (A general bug) label on Jul 21, 2020
@herrlado (Author) commented Jul 21, 2020

OK, but if the JedisPool gets closed, how can other appenders use the closed one? It is not usable anymore, no? I have made a fix as a pull request to demonstrate the solution: on JedisPool#destroy, it gets removed from the RedisSenderPoolProvider.

b9563a2

@mp911de (Owner) commented Jul 24, 2020

Thanks a lot. Reiterating on the issue: we need to track the refcount, otherwise other appenders will see a closed pool. I have a draft locally to fix the issue. Do you mind if we close #247 in favor of my refcnt implementation?

@herrlado (Author)

Yes, OK, I understand. Sure. Thanks.

mp911de added a commit that referenced this issue Jul 24, 2020
We now correctly handle the lifecycle of JedisPool instances by tracking their usage count (refCnt) and disposing resources accordingly.
Previously, two appenders could obtain the same pool, and if one appender was closed, the pool was also disposed for the other appender that had not requested shutdown. Also, a disposed pool remained referenced, so subsequent retrievals of the same pool (as per URI) returned a destroyed pool and the sender was no longer able to connect to Redis.

Pool retrieval and disposal are now synchronized to avoid races and to ensure a proper state.
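
Building on the hypothetical RefCountedPoolHolder sketch above (again, not the project's actual API), the following illustrates the behavior the commit message describes: two appenders pointed at the same Redis URI share one pool, closing the first appender leaves the pool usable for the second, and only releasing the last reference destroys and removes it.

```java
import java.net.URI;

import redis.clients.jedis.JedisPool;

public class SharedPoolLifecycleDemo {

    public static void main(String[] args) {
        RefCountedPoolHolder holder = new RefCountedPoolHolder();
        URI redisUri = URI.create("redis://localhost:6379/0");

        // Two appenders configured against the same Redis URI share one pool instance.
        JedisPool poolForAppenderA = holder.acquire(redisUri);
        JedisPool poolForAppenderB = holder.acquire(redisUri);

        // Closing appender A only decrements the refCnt; appender B's pool stays usable.
        holder.release(redisUri);
        System.out.println("Pool closed after first release: " + poolForAppenderB.isClosed());

        // Closing appender B drops the last reference; the pool is destroyed and removed
        // from the holder, so a later reconfiguration obtains a fresh pool.
        holder.release(redisUri);
        System.out.println("Pool closed after second release: " + poolForAppenderA.isClosed());
    }
}
```
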
@mp911de (Owner) commented Jul 24, 2020

The fix is now in place. Thanks for your help!

mp911de closed this as completed on Jul 24, 2020
mp911de added this to the 1.14.1 milestone on Jul 24, 2020