
Using start_subscribe #95

Closed
tgy opened this issue Mar 27, 2016 · 18 comments

Comments

@tgy

tgy commented Mar 27, 2016

When using

subscriber = yield from connection_pool.start_subscribe()

From what I understand, the connection pool is put into pub/sub mode, and methods that get or set keys are no longer allowed on it, since it can only do pub/sub operations.

What I don't understand, however, is whether the subscriber is attached to only one connection or to all of the connections in the connection_pool.

Can anyone shed some light on this?

@adamrothman

Entering pub/sub mode is something that a connection does. When you call start_subscribe, the subscriber you get back is bound to one connection that is in this pub/sub mode.
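
For reference, here is a minimal sketch of that flow using the documented asyncio_redis pub/sub API (the host, channel name, and pool size are arbitrary):

    import asyncio
    import asyncio_redis

    @asyncio.coroutine
    def listen():
        # Pool of 10 connections; start_subscribe() dedicates exactly one
        # of them to pub/sub mode.
        pool = yield from asyncio_redis.Pool.create(
            host='localhost', port=6379, poolsize=10)
        subscriber = yield from pool.start_subscribe()
        yield from subscriber.subscribe(['my-channel'])
        # The remaining 9 connections stay available for normal commands.
        reply = yield from subscriber.next_published()
        print(reply.channel, reply.value)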

@cderwin

cderwin commented Sep 7, 2016

From what I understand, once you call start_subscribe there's no way to get "out" of subscription mode (at least it's not documented, and I could not find one perusing the source), so your connection pool has permanently lost a connection. Is there any way around this?

If not, I would be more than happy to implement a method in the Protocol and/or Subscription classes that sets protocol._in_pubsub = False and protocol._subscription = None, which, as far as I can tell, should get the protocol out of the pub/sub state.
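
For illustration, a rough sketch of that reset done from outside the library (the ClosableSubscription wrapper is hypothetical, and poking at these private attributes depends on asyncio-redis internals not changing):

    class ClosableSubscription:
        # Hypothetical wrapper: `protocol` is the RedisProtocol instance
        # that start_subscribe() put into pub/sub mode.
        def __init__(self, subscription, protocol):
            self.subscription = subscription
            self.protocol = protocol

        def close(self):
            # Flip the private flags back so the protocol accepts normal
            # commands again. Unsupported: relies on asyncio-redis internals.
            self.protocol._in_pubsub = False
            self.protocol._subscription = None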

@dfee

dfee commented Sep 26, 2016

@cderwin perhaps the example for pubsub just creates a new connection because there is no way to unsubscribe?

Did you end up just creating a new connection, or some other hackery?

@cderwin

cderwin commented Sep 26, 2016

Python being Python, I just set the private fields to the values noted in my comment above, in a .close() method on a wrapper around the subscriber, and I haven't had any problems yet.

Here's a gist of approximately what I did: https://gist.github.com/cderwin/2cda20e947de75b759699d291123e2cd

That said, lately I've been thinking this might be a mistake, since you still cannot use the connection for anything else while it holds a subscription, and subscriptions tend to be long-running. This means that if I start with a pool size of 10 and open 10 subscriptions, no Redis commands can run until a subscription is closed, which could take arbitrarily long (e.g. if a subscription stays open for as long as a user is connected to a websocket).

An obvious solution would be a dynamically sized pool, but I haven't looked into how you would do that (AFAIK the pools implemented in this package are all statically sized).

@adamrothman

I think the best way to handle this is to multiplex a single Redis subscription to the rest of your application. That way you only tie up one connection (which is still free to perform any pub/sub-related commands like SUBSCRIBE, UNSUBSCRIBE, etc.). Each WebSocket connection handler would talk to a singleton object that actually manages the Redis subscription.

@dfee

dfee commented Sep 26, 2016

Because I'm coupling the pub/sub with websockets, I'm just creating a new Redis connection outside of the pool for every websocket connection.

        # Dedicated pub/sub connection, created outside the pool.
        # (asyncio.wait_for with timeout=None just awaits the coroutine,
        # so awaiting Connection.create() directly would work too.)
        ps_connection = await asyncio.wait_for(
            asyncio_redis.Connection.create(
                host=self.pool.host,
                port=self.pool.port,
            ),
            None,
        )

This might be a terrible idea, but it allows me to close the subscriber down (freeing up resources) at the expense of not honoring the concept of poolsize.
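
For completeness, the teardown half of this pattern might look like the following sketch when the websocket disconnects (the channel name is illustrative; Connection.close() drops the underlying transport, and the subscription with it):

        subscriber = await ps_connection.start_subscribe()
        await subscriber.subscribe(['user:notifications'])
        # ... pump subscriber.next_published() to the websocket ...
        ps_connection.close()  # websocket gone: drop the dedicated connection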

@adamrothman

adamrothman commented Sep 26, 2016

@dfee I'm not sure closing the subscriber down really buys you anything if you just have to create another one when the next WebSocket opens.

@dfee

dfee commented Sep 26, 2016

But if I can't drop a subscriber, then every time a consumer re-establishes a websocket connection, they take another connection from the pool that I can never release.

And if the consumer leaves for good, I'm still holding onto a subscription for them.

@cderwin

cderwin commented Sep 26, 2016

@adamrothman If you multiplex a single Redis subscriber won't you have to reimplement pub/sub for all the clients that use that subscriber?

@dfee I think allocating a new connection for each new client (what you're doing) is the best thing to do right now, but I also think there ought to be a way to reuse connections.

@dfee

dfee commented Sep 26, 2016

@cderwin agree completely.

@adamrothman

@dfee Ah I see. See my earlier comment.

@cderwin Yes, but you can do it in a relatively thin way. I'm happy to share a gist if it would be helpful. Consider a situation where you have thousands of clients connected to your service via WebSocket, but because your service is pretty efficient, it only requires 10 or so servers to satisfy all that traffic. Would you rather make 1 connection to Redis per client, or 1 connection to Redis per server?

@dfee

dfee commented Sep 26, 2016

@adamrothman Sorry, I'm still not clear. You're suggesting that I have a singleton that manages subscribers (in addition to the entire connection)? It seems cheaper to have Redis handle the pub/sub than to maintain a single subscription for all possible channels and then sort out who gets what in Python.

Or are you suggesting that I just re-allocate subscribers… attaching them to active clients?

@adamrothman

@dfee The PubSub singleton, when it starts up, takes a single connection from your Redis connection pool. The rest of the connections in the pool remain free for other Redis operations. Each instance's singleton is only subscribed to the channels requested by the clients connected to that particular instance.

When a client connects and asks for a channel the singleton is not already subscribed to, it does a SUBSCRIBE. Likewise, when the last client on an instance that wanted a given channel goes away, the singleton does an UNSUBSCRIBE.
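
A condensed sketch of such a singleton, assuming one dedicated asyncio_redis subscriber connection and an asyncio.Queue per local listener (the PubSubHub class and its method names are invented for illustration):

    import asyncio
    import asyncio_redis

    class PubSubHub:
        # One Redis subscriber connection, fanned out to many local queues.
        def __init__(self):
            self.listeners = {}  # channel name -> set of asyncio.Queue

        async def start(self, host='localhost', port=6379):
            connection = await asyncio_redis.Connection.create(host=host, port=port)
            self.subscriber = await connection.start_subscribe()
            asyncio.ensure_future(self._pump())

        async def _pump(self):
            # Forward every published message to that channel's local queues.
            while True:
                reply = await self.subscriber.next_published()
                for queue in self.listeners.get(reply.channel, ()):
                    queue.put_nowait(reply.value)

        async def add_listener(self, channel):
            queues = self.listeners.setdefault(channel, set())
            if not queues:
                # First local listener for this channel: SUBSCRIBE once.
                await self.subscriber.subscribe([channel])
            queue = asyncio.Queue()
            queues.add(queue)
            return queue

        async def remove_listener(self, channel, queue):
            queues = self.listeners.get(channel, set())
            queues.discard(queue)
            if not queues:
                # Last local listener left: UNSUBSCRIBE.
                await self.subscriber.unsubscribe([channel])
                self.listeners.pop(channel, None)

Each websocket handler would then await queue.get() instead of holding its own Redis connection.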

@dfee

dfee commented Sep 26, 2016

@adamrothman Redis internals are where I'm uncertain, then. If I have 1,000 clients each subscribed to their own notification channel, plus 5 app channels, that's a SUBSCRIBE across 1,005 Redis channels (or 10,005 with 10,000 users). Will SUBSCRIBE fall over at that scale?

@adamrothman

@dfee All of that state is managed by the Redis server you're connecting to. While I'm not an expert on the internals, I have no reason to believe that a single connection subscribed to 10,000 channels would overload a Redis server capable of handling 10,000 connections each subscribed to a single channel.

@dfee

dfee commented Sep 27, 2016

Thanks @adamrothman. I've offloaded the problem to a singleton. Cheers :)

@adamrothman

Glad I could help @dfee!

@tgy (Author)

tgy commented Sep 27, 2016

I think I can close that issue now 😄

tgy closed this as completed Sep 27, 2016