Move GetAllByUserId to IOnlineClientManager instead of implementing as a static extension #1704
Hi, you're right. I'll move it to IOnlineClientManager so you can specialize it. But notice that SignalR has its own scaleout system. To be honest, I have never tested it, but I expect that it distributes events to all instances. That means that when a client connects, the same "connected" event should occur in all SignalR server instances. Thus, all OnlineClientManagers of all instances should be identical, in theory. SignalR scaleout already has Redis support (https://www.asp.net/signalr/overview/performance/scaleout-with-redis).
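The Redis backplane mentioned above is enabled in the OWIN startup class, as described in the linked tutorial. A minimal sketch, assuming classic ASP.NET SignalR with the Microsoft.AspNet.SignalR.Redis package; the server, port, password, and channel name below are placeholders:

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Route SignalR messages through a shared Redis backplane so that
        // broadcasts reach clients connected to any server instance.
        // Arguments: server, port, password, event key (channel name).
        GlobalHost.DependencyResolver.UseRedis("localhost", 6379, "", "MyAppChannel");
        app.MapSignalR();
    }
}
```

Note that, as discussed below, the backplane only forwards messages; it does not replicate connected/disconnected events across instances.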
Thanks. I already did SignalR's scaleout with Redis and tested it on my localhost with two different sites sharing the same hub. Maybe I can log the clients' "connected" events to confirm it.
As I said, I haven't tried it before. Please share your experience once you test it. Thanks.
I have run it and checked the logs:
The client 840dd8b6-9824-4d46-9e03-180c2fe628df was not logged in SiteB's log.
So, wherever you get a valid connectionId, even in a different process (but one sharing the same hub and the same backplane), SignalR's scaleout system can handle it. The only exception is the events about the connection itself.
Oh, you're saying SignalR distributes messages to all servers, but does not distribute connected/disconnected events. That's very bad! Think of an application that shows the online user count: in that case, every server will show a different value. I think this is a flawed part of SignalR's design.
From this tutorial on asp.net/signalr: if you want to share client connections across multiple servers, you need a shared repository, for instance a SQL database.
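The divergence described above can be sketched in a few lines: each server process tracks only the "connected" events it sees itself, so per-process counts drift apart unless a shared store is used. ServerInstance and the ids below are hypothetical stand-ins, not ABP types:

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical stand-in for one server's in-memory online-client tracking.
class ServerInstance
{
    private readonly ConcurrentDictionary<string, long> _clients =
        new ConcurrentDictionary<string, long>();

    // Each server only observes the "connected" events raised in its own process.
    public void OnConnected(string connectionId, long userId) => _clients[connectionId] = userId;

    public int OnlineCount => _clients.Count;
}

class Demo
{
    static void Main()
    {
        var siteA = new ServerInstance();
        var siteB = new ServerInstance();

        // The load balancer happens to send two clients to SiteA and one to SiteB.
        siteA.OnConnected("conn-1", 42);
        siteA.OnConnected("conn-2", 43);
        siteB.OnConnected("conn-3", 44);

        // Each server now reports a different "online user" count.
        Console.WriteLine(siteA.OnlineCount); // prints 2
        Console.WriteLine(siteB.OnlineCount); // prints 1
    }
}
```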
@personball Is there any chance you can share your RedisOnlineClientManager code?
@personball Thank you!
@personball Thanks for posting that code! I was actually working on an implementation of OnlineClientManager that uses the built-in ABP caching service; I started it before I saw your post here. My thought was that with the local cache it would work almost like normal, and with the Redis cache enabled it would use Redis. Your implementation uses Redis directly. Curious to know whether there was a reason for that, or if it was just your preference when implementing?

@hikalkan Has there been any thought about rolling this into the framework? The ability to use Redis for the OnlineClientManager really needs to be part of the framework (in my opinion). Running this in Azure is almost a sin if you can't enable the auto scale-out features, so I need to roll this into my code as well. Jobs are easy enough to handle, and caching already has the Redis option, so this is the last step for me to be able to scale out in Azure. Thanks again for posting; this will help me a ton.
@personball I think I see the issue with using the cache: the sliding expiration policy may get in the way. I think your implementation is much better than what I was working towards. Never mind my question above.
The default implementation of IOnlineClientManager uses a ConcurrentDictionary<string, IOnlineClient> to store the relations between userId and connectionId. This is a problem when the site is deployed to two or more servers, or even to a single server with web garden enabled in IIS (multiple worker processes).

So I wrote a RedisOnlineClientManager to replace OnlineClientManager, but found that it may cause a performance problem when someone calls GetAllByUserId (defined in OnlineClientManagerExtensions), because it goes through GetAllClients.

Above all, I think it is better practice to define GetAllByUserId in IOnlineClientManager and leave the how-to-query to the real store mechanism (not a ConcurrentDictionary).
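The proposal above can be sketched as follows. This is a minimal, self-contained illustration with hypothetical stand-ins for the ABP types (the real IOnlineClientManager and IOnlineClient have more members); the point is that GetAllByUserId becomes an interface member, so a RedisOnlineClientManager could answer it with a direct store lookup instead of filtering all clients:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-ins for the ABP types, only to sketch the proposed shape.
public interface IOnlineClient
{
    string ConnectionId { get; }
    long? UserId { get; }
}

public interface IOnlineClientManager
{
    void Add(IOnlineClient client);
    IReadOnlyList<IOnlineClient> GetAllClients();

    // The proposal: part of the interface, so a Redis-backed implementation
    // can query the store directly (e.g. a per-user set of connection ids)
    // instead of fetching and filtering every client.
    IReadOnlyList<IOnlineClient> GetAllByUserId(long userId);
}

public class OnlineClient : IOnlineClient
{
    public string ConnectionId { get; set; }
    public long? UserId { get; set; }
}

public class InMemoryOnlineClientManager : IOnlineClientManager
{
    private readonly ConcurrentDictionary<string, IOnlineClient> _clients =
        new ConcurrentDictionary<string, IOnlineClient>();

    public void Add(IOnlineClient client) => _clients[client.ConnectionId] = client;

    public IReadOnlyList<IOnlineClient> GetAllClients() => _clients.Values.ToList();

    // The in-memory version still filters the dictionary, but the extension
    // method no longer dictates this strategy to every implementation.
    public IReadOnlyList<IOnlineClient> GetAllByUserId(long userId) =>
        _clients.Values.Where(c => c.UserId == userId).ToList();
}
```

With this shape, the static extension method can be removed (or kept as a thin wrapper), and a Redis-backed implementation is free to keep a secondary index keyed by userId.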