Connection stuck during network outages, leading to memory leak and application freeze #3353
Thanks for the bug report. Do you have the stack trace for any thread that might be holding the lock? You mentioned that "threads are stuck until the first one that is trying to abort will not complete". Does that mean that there is a thread blocked in …? Are you testing more than one client simultaneously? If so, how many? How long does it take for hundreds of threads to get blocked in the …?
First, please refer to this issue: #3233. As you can see in the picture, all of the threads are stuck on System.Threading.Monitor.Enter. We are using a WPF application with a single AppDomain; there is usually at least one connection, but in my scenario I had only one. In my scenario I get into this state in about a minute, two at most. P.S. I didn't debug with SignalR.Client compiled with PDBs, so I don't have hard evidence from debugging, but I opened the latest SignalR solution, tracked down the method that is stuck in each thread's stack trace, and didn't find any other path into this problem.
Attached a link to a dump file.
Thanks a lot for the dump file and the extra information. I found that this bug has been previously reported in yet another issue: #2325. The call to … From looking at your dump, it appears that the client was running on a machine with an older version of .NET 4.5. I had a discussion with @Tratcher, who suggested that …
First, thanks a lot for confirming my change is correct. Just one more issue to complete this thread: the question now is whether .NET 4.5.1 behaves differently for this one too. Please let me know what you and your team suggest for this one.
The change you made in …
Thanks, I'll compile a custom version for us. Thanks once again.
Any news on this? I am using SignalR Client v2.2.0 in a Xamarin app and I am getting exactly the same error. Will this be fixed in v2.2.1? If so, is there any news on when that will be? It was thought it would be available after December 2015, but there is still no sign of it. This is a major issue for me, as I cannot re-establish my connection due to the locking problem. If there is no fix, is there a workaround you could suggest?
Hi, we are facing a similar issue. When we start our application, after a while all the users connected to the hub start having reconnection problems. In the Chrome console, all of the users see this continuously:
All the users are stuck in reconnecting mode, i.e. this handler fires continuously for all users:
We are using that code from the JabbR source code. The state changes from connected > reconnecting > disconnected > connected. It seems that after a while, when there are many connections, all the connections to the hub get stuck, i.e. they are not able to converse with the hub properly; the connection then breaks and we receive the "WebSocket handshake: Unexpected response code: 404" error mentioned above. Note that there is no fixed time for this behavior. Sometimes it starts within a few minutes of application start, and sometimes it takes around 20-30 minutes. This issue definitely seems to be caused by many connections conversing with the SignalR core code via the hub at the same time. Is there any fix for it? Edit: We are using the latest version of SignalR, i.e. Microsoft.AspNet.SignalR.Core.dll (2.2.0).
Any word on a fix for this? I can make this happen easily in a Xamarin app. Unfortunately, all the PCL profiles only target .NET 4.5, so there is no chance of moving to a later version of the .NET Framework. This is a really big issue for me. Can it be fixed, or can a workaround be provided? This is also on v2.2.0.
@sisterray Did you file a bug on Xamarin?
@sisterray any solution for the issue? I am hitting it in a Xamarin.iOS app.
This is a comment I placed on #645 which seemed to fix the issue for me. You could try a couple of things: upgrade to 2.2.1 …
@sisterray is this issue fixed? I am using 2.2.3 and still facing the same issue.
There was no fix; I am using 2.2.1. Just follow the points I made in my previous comment. If that doesn't fix it, it is probably another issue.
This issue has been closed as part of issue clean-up as described in https://blogs.msdn.microsoft.com/webdev/2018/09/17/the-future-of-asp-net-signalr/. If you're still encountering this problem, please feel free to re-open and comment to let us know! We're still interested in hearing from you, the backlog just got a little big and we had to do a bulk clean up to get back on top of things. Thanks for your continued feedback!
It looks like there's still some activity on this thread. @nirajkr or @sisterray could one of you open a new bug and describe what you're seeing? There's a lot going on in this thread (including some slightly different but possibly related issues) so it's difficult to track down exactly which issue you're seeing :).
2.4.0: still the same issue, long execution time of hubConnection.Stop(); in Xamarin.Forms :/
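As a client-side mitigation for the hanging Stop() (my own sketch, not an official fix — the ConnectionHelper/TryStop names are hypothetical), you can move the blocking call off the UI thread and bound how long you wait for it, so a wedged connection cannot freeze the app:

```csharp
// Hypothetical workaround sketch: run Stop() on a worker thread and give up
// after a timeout instead of blocking the calling (UI) thread indefinitely.
using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

static class ConnectionHelper
{
    public static bool TryStop(HubConnection connection, TimeSpan timeout)
    {
        // Stop() can block for a long time when the transport is wedged,
        // so dispatch it to the thread pool and wait with a bound.
        var stopTask = Task.Run(() => connection.Stop());
        return stopTask.Wait(timeout); // false => Stop() is still hanging
    }
}
```

Note that if TryStop returns false the underlying connection may still be leaking threads and memory as described in this thread; this only keeps the caller responsive, it does not fix the lock-up itself.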
I'm working with a .NET client 2.1.2.
We recently upgraded it from the old 1.x.x version.
Now, we are trying to simulate an unstable environment where connection issues occur from time to time.
To be more specific, I simulated disabling Wi-Fi and re-enabling it after a while.
This led to the communication channel getting stuck: no message could be sent or received, and we could not even stop the connection.
Looking deeper, I found that at some point there were hundreds of threads up in the air due to a bad implementation of the Heartbeat callback event.
Also, the CheckKeepAlive method takes a lock at its entrance, which means that all of these threads are stuck until the first one, the one trying to abort, completes (and in this case I doubt the abort will ever finish). Meanwhile, the heartbeat keeps ticking, generating more and more threads and hanging the application.
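The failure mode described above — a timer callback that blocks on a lock held by a stalled abort, parking one new thread per tick — can be avoided by making the tick non-blocking. A minimal sketch of that pattern (hypothetical class and body, not the actual SignalR source), using Monitor.TryEnter so overlapping heartbeat ticks bail out instead of queueing:

```csharp
// Sketch of a non-blocking keep-alive check. If a previous check is still
// running (e.g. blocked in an abort that never completes), later ticks skip
// their turn instead of parking another thread on Monitor.Enter.
using System;
using System.Threading;

class KeepAliveMonitor
{
    private readonly object _gate = new object();

    // Invoked on every heartbeat tick.
    public void CheckKeepAlive()
    {
        if (!Monitor.TryEnter(_gate))
        {
            return; // a check is already in progress; drop this tick
        }
        try
        {
            // ... inspect last-message timestamp, raise timeout/slow events ...
        }
        finally
        {
            Monitor.Exit(_gate);
        }
    }
}
```

With TryEnter, a stalled check costs one blocked thread in total rather than one per tick, so the heartbeat simply skips checks until the lock is released instead of hanging the application.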