SignalR connections drain with Redis backplane #4578
Are you sure the clients are really disconnecting and not keeping their connections open? Can you enable SignalR tracing on the server and provide the logs in this issue?

Are the Redis connections really excessive? I don't see the count go above 30. Ultimately, SignalR uses a single StackExchange.Redis ConnectionMultiplexer (see https://stackexchange.github.io/StackExchange.Redis/Basics.html) to manage the Redis connection pool. So if there were too many connections, that would be an issue in StackExchange.Redis, but I haven't heard complaints about this before.

As for the custom configuration settings: nothing I see should cause any major problems, though the shorter DisconnectTimeout gives clients less time to reconnect automatically after network issues, and the shorter KeepAlive causes more keep-alive messages to be sent. The smaller DefaultMessageBufferSize can cause more messages to be dropped if they are sent too quickly, and the shorter ConnectionTimeout should only affect long polling, making "empty" polls shorter and slightly less efficient. In general, we don't recommend changing any of these without testing in your app that shows a real improvement. Take a look at https://docs.microsoft.com/en-us/aspnet/signalr/overview/guide-to-the-api/handling-connection-lifetime-events to see what most of these timeouts mean.
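For reference, server-side tracing in SignalR 2.x is enabled through `web.config`. A minimal sketch, based on the "Enabling SignalR Tracing" docs; the trace sources shown here cover the SSE transport and the heartbeat (which logs the connect/disconnect events relevant to this issue), and the output file name is illustrative:

```xml
<system.diagnostics>
  <sources>
    <!-- Heartbeat source: logs connection tracking (news/removals, timeouts) -->
    <source name="SignalR.Transports.TransportHeartBeat" switchName="SignalRSwitch">
      <listeners>
        <add name="SignalR-Transports" />
      </listeners>
    </source>
    <!-- SSE transport events (the transport this app uses) -->
    <source name="SignalR.Transports.ServerSentEventsTransport" switchName="SignalRSwitch">
      <listeners>
        <add name="SignalR-Transports" />
      </listeners>
    </source>
  </sources>
  <switches>
    <!-- Verbose logs everything, including keep-alive traffic -->
    <add name="SignalRSwitch" value="Verbose" />
  </switches>
  <sharedListeners>
    <add name="SignalR-Transports"
         type="System.Diagnostics.TextWriterTraceListener"
         initializeData="transports.log.txt" />
  </sharedListeners>
  <trace autoflush="true" />
</system.diagnostics>
```

At Verbose these logs grow quickly (as the 15 GB per weekend mentioned later suggests), so consider lowering the switch value once the disconnect pattern is captured.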
Yes, I'll enable tracing and collect logs for a few days. It's very strange that every day we get ~500 additional connections that aren't removed until the application pool recycles. On Redis we clearly see the connection count going higher day by day. The timeout settings were changed because on the default settings we got >11k connections in just one day, and DefaultMessageBufferSize was reduced because of abnormal memory usage on the default settings.
FYI, I'll add logs on Monday. Thanks for the help!
I've got >15 GB of logs per weekend, so I'll post only 2 GB of them.
Looking at transports.log_1.txt, I see 4,783 instances of "is New" vs only 1,806 instances of "Removing connection". This looks like clients opening more connections than they are closing. Do you have any evidence that this isn't the case? |
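The open-vs-removal comparison above can be reproduced by counting the two log phrases quoted in this thread ("is New" and "Removing connection"). A minimal sketch; the sample lines below are made up for illustration, and in practice you would read the real transports log instead:

```python
# Count connection opens vs. removals in a SignalR transports trace.
# The phrases "is New" and "Removing connection" are taken from this thread;
# the sample log content is fabricated for the sake of a runnable example.
sample_log = """\
SignalR.Transports.TransportHeartBeat Information: 0 : Connection abc123 is New.
SignalR.Transports.TransportHeartBeat Information: 0 : Connection def456 is New.
SignalR.Transports.TransportHeartBeat Information: 0 : Removing connection abc123
"""

lines = sample_log.splitlines()
opened = sum("is New" in line for line in lines)
removed = sum("Removing connection" in line for line in lines)

# A persistently positive gap suggests connections are opened faster
# than they are cleaned up, matching the leak described in this issue.
print(f"opened={opened} removed={removed} gap={opened - removed}")
```

Running the same two counts over the real log is what produced the 4,783 vs 1,806 figures above.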
Yes, but why did the connection count keep growing in this case? Even with a short DisconnectTimeout we still see it growing.
Meanwhile, the connection count keeps growing day by day; yesterday it peaked at 6.7k.
Hi guys! We're using SignalR 2.4.2 hosted in IIS with these settings:

```csharp
GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(30);
GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(15);
GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(5);
GlobalHost.Configuration.DefaultMessageBufferSize = 100;
```
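A note on those values: per the SignalR timeout documentation, KeepAlive may be at most one third of DisconnectTimeout, and DisconnectTimeout has a minimum of 6 seconds, so the settings above sit exactly at the KeepAlive limit. A quick sanity check of those constraints (Python used purely for illustration; the rules are my reading of the docs, not something stated in this thread):

```python
# Settings from the issue, in seconds.
connection_timeout = 30
disconnect_timeout = 15
keep_alive = 5

# Constraints per the ASP.NET SignalR configuration docs (my reading):
# DisconnectTimeout must be at least 6 seconds, and KeepAlive cannot
# exceed one third of DisconnectTimeout.
assert disconnect_timeout >= 6, "DisconnectTimeout below the 6 s minimum"
assert keep_alive * 3 <= disconnect_timeout, "KeepAlive exceeds DisconnectTimeout / 3"

print("timeout settings satisfy the documented constraints")
```

Because KeepAlive is exactly at the 1/3 boundary here, any further reduction of DisconnectTimeout without adjusting KeepAlive would make the configuration invalid.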
Also, we have set up a Redis backplane in AWS on Redis 4.0.10.
Transport: SSE (we are hosted on AWS behind a Classic Load Balancer, but we are ready to switch to an ALB with WebSocket support).
As far as I can see from the connection logs, SignalR can't manage connections properly, and connections are cleared only when the app pool is recycled.
TCP connections on the instances: https://www.screencast.com/t/zCUfVRjwlUxr
Redis connections: https://www.screencast.com/t/NLW9Lze2q
Are there any configuration settings that could result in such behavior?