Proposal: Optionally allow for pubsub connections to be tracked in pool #2867

Open · vtermanis opened this issue Jan 15, 2024 · 0 comments

vtermanis commented Jan 15, 2024

(Let me know if improvement/feature suggestions are better placed in Discussions - I wasn't sure. Note: there is also a proposed implementation linked within.)

Discussed version: v9.4.0

Dear go-redis team,

We're using your client library and Redis for fan-in (RPUSH + BLPOP) and fan-out (PUBLISH + PSUBSCRIBE) operations. The listening ends of these operations can be long-running and the total number of fan-in & fan-out listeners can vary.
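
For context, a minimal sketch of that usage pattern with go-redis v9 (the address, key name and channel pattern below are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// PoolSize caps the number of pooled (non-pubsub) connections.
	rdb := redis.NewClient(&redis.Options{
		Addr:     "localhost:6379",
		PoolSize: 10,
	})
	defer rdb.Close()

	// Fan-in: producers RPUSH, a (long-running) listener BLPOPs.
	_ = rdb.RPush(ctx, "jobs", "payload").Err()
	if vals, err := rdb.BLPop(ctx, 5*time.Second, "jobs").Result(); err == nil {
		fmt.Println("fan-in received:", vals[1])
	}

	// Fan-out: PSUBSCRIBE gets a dedicated connection created outside the pool.
	sub := rdb.PSubscribe(ctx, "events.*")
	defer sub.Close()

	_ = rdb.Publish(ctx, "events.example", "hello").Err()
	if msg, err := sub.ReceiveMessage(ctx); err == nil {
		fmt.Println("fan-out received:", msg.Payload)
	}
}
```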

For the short/non-blocking operations and BLPOP the pool is ideal: we set PoolSize to limit the total number of connections (so that, across multiple services, we stay strictly below Redis' own maxclients limit).
With PSUBSCRIBE, however, ensuring a ceiling on the total number of Redis connections is harder:

  1. We can set MaxActiveConns in addition to PoolSize, but its effect appears to fall short:

    • While there are fewer than MaxActiveConns non-pubsub connections, one can still create an unlimited number of pubsub connections (via newConn())
    • Once MaxActiveConns has been reached, one can no longer create any new connections
  2. We can use our own semaphore to restrict the total number of pubsub connections (a level above go-redis; sketched after this list), but this doesn't work well:

    • The pool keeps idle connections around for re-use (MaxIdleConns). Even if this is configured to be a subset of PoolSize, the total number of available connections still cannot be shared between pubsub & non-pubsub.
    • One can instead set MaxIdleConns to zero, but that of course carries a significant performance hit, since the pool is then only used as a semaphore and each new request requires a new connection to be created.
  3. The Dialer (and other hooks) do not provide enough context to allow for external pooling (especially since the internal logic around whether to keep a connection alive or not is not visible there)
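
For reference, a minimal sketch of the semaphore workaround from point 2; the wrapper type and its names are ours, not part of go-redis. It caps concurrent pubsub connections, but that budget cannot be shared with the pool's own connections:

```go
package pubsublimit

import (
	"context"

	"github.com/redis/go-redis/v9"
)

// limitedSubscriber caps concurrent pubsub connections with a counting
// semaphore layered above go-redis (illustrative only).
type limitedSubscriber struct {
	rdb *redis.Client
	sem chan struct{} // capacity = max concurrent pubsub connections
}

func newLimitedSubscriber(rdb *redis.Client, maxPubsub int) *limitedSubscriber {
	return &limitedSubscriber{rdb: rdb, sem: make(chan struct{}, maxPubsub)}
}

// PSubscribe blocks until a slot is free, then opens a pubsub connection.
// The returned release func closes the subscription and frees the slot.
func (l *limitedSubscriber) PSubscribe(ctx context.Context, patterns ...string) (*redis.PubSub, func(), error) {
	select {
	case l.sem <- struct{}{}:
	case <-ctx.Done():
		return nil, nil, ctx.Err()
	}
	sub := l.rdb.PSubscribe(ctx, patterns...)
	release := func() {
		_ = sub.Close()
		<-l.sem
	}
	return sub, release, nil
}
```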


I appreciate that pubsub doesn't use connections from the pool because those connections have to be closed after use. It would, however, be really useful to be able to:

  • Limit the total Redis connection count across all operations (pubsub & non-pubsub) and
  • Share the total number of theoretically available connections across all operations

Would you be willing to consider an optional (off by default) approach which shares the pool as described above? In essence this could mean e.g.:

  1. Expose a new option (e.g. PubsubFromPool) which is off by default. When set, the following applies (see the sketch after this list)
  2. Connections for pubsub are retrieved via pool.Get() (as with other operations)
  3. Finished pubsub connections are released via pool.Remove() (i.e. not returned to the pool)
  4. This would not apply to the cluster client type (should it?)
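
To make the intent concrete, a rough sketch of the proposed flow; the interface and function names below are illustrative stand-ins, not the actual go-redis internals:

```go
package poolsketch

import (
	"context"
	"net"
)

// connPool is a stand-in for the internal connection pool; only the two
// operations relevant to this proposal are shown.
type connPool interface {
	Get(ctx context.Context) (net.Conn, error)              // counts against PoolSize
	Remove(ctx context.Context, cn net.Conn, reason error)  // discard, do not reuse
}

// subscribeConn shows the proposed PubsubFromPool behaviour: the pubsub
// connection is checked out of the shared pool and, because it cannot be
// reused after (P)SUBSCRIBE, is removed (not returned) when finished.
func subscribeConn(ctx context.Context, p connPool) (net.Conn, func(), error) {
	cn, err := p.Get(ctx)
	if err != nil {
		return nil, nil, err
	}
	release := func() { p.Remove(ctx, cn, nil) }
	return cn, release, nil
}
```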

With this option enabled, the trade-offs (for the user to weigh) would be:

  • 🟢 PoolSize sets a hard limit for everything
  • 🟢 The mix of pubsub & non-pubsub (long-running) operations does not have to be known in advance: any combination (up to PoolSize) can be used
  • 🔴 Since pubsub connections are now taken from the pool, they reduce the idle pool size (for re-use) and require extra connections to be created. (Note: this should only have an impact where a lot of short-lived subscribe calls are made.)

If you were interested, we'd be willing to contribute such an option (together with some tests, of course).
