dynamic subscription mode breaks remote socket.emit() #524
Upon further consideration, I think, regardless of this issue, it makes sense to have an option for creating separate channels for private rooms. In our scenario, we have ~1k clients connected to ~10 servers, all communication is 1:1, and the connections stay open for a long time. Using separate channels significantly lowers the overall Redis bandwidth, and the subscription/unsubscription impact shouldn't be significant. We could create our own "public" channel for each client, but using the existing ones seems like the cleanest option. I implemented this, along with the previous fix, in #526. We're already running this in production, and it reduced the overall Redis load by 50-60% in our setup.
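A rough illustration of the per-socket channel idea described above, as a minimal sketch: the channel naming scheme, the `privateChannel`/`channelFor` helpers, and the `isPrivateRoom` predicate are hypothetical names invented for this sketch, not the adapter's actual API or the code from #526.

```ts
// Illustrative sketch only: names and channel format are hypothetical,
// not the adapter's real API or the implementation in #526.

// One Redis channel per private room (i.e. per socket id), so a 1:1 message
// is published to exactly one channel with (at most) one subscribing node.
function privateChannel(prefix: string, socketId: string): string {
  return `${prefix}#private#${socketId}#`;
}

// Publishing side: a broadcast whose single target room is a private room
// goes to that dedicated channel instead of the shared broadcast channel.
function channelFor(
  prefix: string,
  rooms: string[],
  isPrivateRoom: (room: string) => boolean // hypothetical predicate
): string {
  if (rooms.length === 1 && isPrivateRoom(rooms[0])) {
    return privateChannel(prefix, rooms[0]);
  }
  return prefix; // shared channel for everything else
}

// Subscribing side: each node would subscribe to the private channels of the
// sockets it currently holds and unsubscribe on disconnect, so the shared
// channel no longer carries 1:1 traffic that every node has to filter out.
```

With long-lived connections the subscribe/unsubscribe churn stays low, which is why the bandwidth saving dominates in the 1:1 scenario described above.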
Hi! I could indeed reproduce the issue, thanks for reporting this. This happens when calling […]
By this, you mean local emits? If yes, that's right, it only affects emits on remote sockets.
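For context, "remote" here means emitting to a socket that is connected to a different Socket.IO server: the packet has to travel through the adapter, whereas a local emit is delivered in-process and never touches a channel. A minimal illustration using the standard Socket.IO API (event names and the placeholder id are arbitrary):

```ts
import { Server } from "socket.io";

const io = new Server(3000);

io.on("connection", (socket) => {
  // Local emit: the target socket is connected to this server, so the packet
  // is delivered in-process and the Redis adapter is not involved.
  socket.emit("hello", "local delivery");

  // "Remote" emit: addressing a socket by its id targets its private room.
  // If that socket is held by another server, the packet must be forwarded
  // through the adapter; this is the path that breaks in dynamic mode.
  const targetSocketId = "..."; // id of a socket that may live on another node
  io.to(targetSocketId).emit("hello", "via the private room");
});
```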
When using the sharded adapter with `subscriptionMode: "dynamic"`, messages sent to the socket's private room via `socket.emit()` get lost without any warning or error.

It happens because in this check, such messages look just like any other single-room broadcast, so `useDynamicChannel` ends up being `true`, but private rooms are excluded here, so there are no subscribers.
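To make the asymmetry concrete, here is a simplified sketch of the two sides; the function and variable names are paraphrased for illustration, not copied from the adapter's source.

```ts
// Publishing side: any broadcast targeting exactly one room is routed to that
// room's dynamic channel, including a broadcast to a socket's private room,
// which is exactly what a remote socket.emit() produces.
function computeChannel(baseChannel: string, rooms: string[], mode: string): string {
  const useDynamicChannel = mode === "dynamic" && rooms.length === 1;
  return useDynamicChannel ? `${baseChannel}${rooms[0]}#` : baseChannel;
}

// Subscribing side: dynamic channels are only subscribed for "real" rooms;
// the socket's private room (room name === socket id) is skipped.
function dynamicChannelsToSubscribe(baseChannel: string, socketId: string, rooms: string[]): string[] {
  return rooms
    .filter((room) => room !== socketId) // private room excluded here
    .map((room) => `${baseChannel}${room}#`);
}

// Net effect: the publisher picks "<baseChannel><socketId>#", but no node is
// subscribed to that channel, so the message is silently dropped.
```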
I can see a couple of possible solutions here, but none is ideal:

- Set `useDynamicChannel` to `false` in such a case. I made an attempt on this in fix(sharded): private room broadcast in dynamic mode #525, but it seems somewhat tricky (see the sketch after this list).
- Document that remote `socket.emit()` won't work in this mode.
- […] `socket.emit()` won't work without it.
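For the first option, a naive version of the check might look like the sketch below; the `mightBePrivateRoom` helper and the exact condition are made up for illustration, and the actual attempt in #525 may differ. It also hints at why this is tricky: the publishing node cannot easily tell whether a given room name is some socket's private room when that socket lives on another node.

```ts
// Naive sketch of the first option; this is not the code from #525.
// The publisher falls back to the shared channel whenever the single target
// room might be someone's private room.
function computeChannel(
  baseChannel: string,
  rooms: string[],
  mode: string,
  mightBePrivateRoom: (room: string) => boolean // hypothetical helper
): string {
  const singleRoom = rooms.length === 1 ? rooms[0] : undefined;
  const useDynamicChannel =
    mode === "dynamic" &&
    singleRoom !== undefined &&
    !mightBePrivateRoom(singleRoom); // the added condition

  return useDynamicChannel ? `${baseChannel}${singleRoom}#` : baseChannel;
}

// The hard part is mightBePrivateRoom(): locally connected socket ids are
// known, but a remote socket's id is indistinguishable from an ordinary room
// name without extra bookkeeping or a naming convention.
```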