stuck channels on FreeSWITCH 1.10.10 (possibly transcoding is the cause) #2264
Comments
I've seen this on multiple servers as well. Using PCMU only seems to fix the calls that were getting stuck.
Not all debugging symbols were loaded. Please re-do the backtrace.
We are also seeing this. We cannot replicate it in our test environment, but it happens in production. It does not seem to be related to call volume; calls just randomly get "stuck", and it looks like the thread processing the call in FreeSWITCH is hanging.
Downgrading to FreeSWITCH 1.10.8 fixes the issue for us, although it's not ideal.
@kaaelhaa FreeSWITCH 1.10.8 because 1.10.9 is also affected?
@kaaelhaa was libsofia downgraded as well, or just FreeSWITCH?
@kaaelhaa What is the libsofia version on the system with FS 1.10.8 right now?
I added the symbols yesterday. Waiting for some stuck calls.
@gabada If the dbgsym files are installed and you still have the core dump file, you can regenerate the backtrace.
@andywolk Here is the newly generated backtrace; it still has some ?? in it, not sure why. Also, BKW asked me to run deadlock.py on it, and there are no deadlocks.
@gabada Because there are no libsofia dbg symbols. Also, when doing a backtrace, please do the following.
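The exact gdb commands did not survive in this thread. A minimal sketch of a typical full all-threads backtrace from a core dump (the binary and core paths are illustrative):

```sh
# Illustrative paths -- adjust to your install and core file location.
# 'thread apply all bt full' is the usual way to capture every thread.
gdb /usr/bin/freeswitch /var/cores/freeswitch.core \
    -ex "set pagination off" \
    -ex "set logging file backtrace.log" \
    -ex "set logging on" \
    -ex "thread apply all bt full" \
    -ex "quit"
```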
I installed the libsofia debug symbols.
Oh, never mind the dbg symbols for sofia; they are there. The other ?? are at least from lua, and we don't currently need those. Please regenerate using the gdb commands I mentioned.
Here is a new backtrace.
Now with your help let's check if there are deadlocks. Open gdb, run the deadlock check, and see if it finds anything.
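The exact commands were elided here. A sketch, assuming the deadlock.py helper mentioned earlier in the thread is a GDB Python script that runs its detection when sourced against the same core:

```sh
# Sketch only: assumes deadlock.py (mentioned earlier) performs its
# deadlock detection when sourced inside gdb.
gdb /usr/bin/freeswitch /var/cores/freeswitch.core \
    -ex "source deadlock.py" \
    -ex "quit"
```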
No deadlock detected. Do you have debug symbols installed?
@andywolk this is the Sofia version on 1.10.8:
And 1.10.10 for comparison:
So it is the same version of libsofia on both.
On the 1.10.8 host:
1.10.10 for comparison:
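For reference, on Debian-based installs the version comparison above can be reproduced from the package database (the libsofia package name pattern is an assumption; verify with `dpkg -l | grep -i sofia`):

```sh
# Assumed Debian-style package name for the bundled Sofia-SIP library.
dpkg-query -W -f='${Package} ${Version}\n' 'libsofia*'
```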
Thank you. We see a pattern in both backtraces. Will analyze further.
Hi @andywolk, is there any progress on the investigation? We are experiencing the same issue after the FreeSWITCH 1.10.10 upgrade.
@andywolk If it helps, I think I might be experiencing the same issue: for the relevant calls, this log line is printed, but not this one, meaning the session lock can never be acquired or released. We're currently investigating the issue and will post updates if any further details come up.
We are working on a solution. There is another issue filed where we can see similar things: #2290.
For those who need the latest FreeSWITCH (like us, because of OpenSSL 3.0.x support), reverting the commit cited in #2290 temporarily fixes the issue (we have it in production with the revert, and all is running fine).
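A sketch of that workaround, assuming a source build of 1.10.10; the actual commit hash is not quoted in this thread and must be taken from #2290:

```sh
# Placeholder hash below -- take the real commit from issue #2290.
git clone https://github.com/signalwire/freeswitch.git
cd freeswitch
git checkout v1.10.10
git revert --no-edit <commit-cited-in-#2290>
./bootstrap.sh -j
./configure
make && sudo make install
```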
We are encountering this problem too, more and more frequently. Besides reverting the commit mentioned in #2290, is there any news on fixing this issue? |
Was this resolved in FreeSWITCH 1.10.11? |
No, 1.10.11 still has this bug. |
Thanks for letting me know. |
We also have this problem on 1.10.11; it was not an issue on 1.10.9.
I also have this issue. I created a script that does uuid_kill on old channels.
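The script itself was not captured here. A sketch of the general approach (not the poster's actual script), assuming fs_cli is available and using the created_epoch column of `show channels`:

```sh
#!/bin/sh
# Sketch: kill channels older than MAX_AGE seconds.
# Field positions in the 'show channels' CSV output are assumed
# (uuid in $1, created_epoch in $4) -- verify on your version.
# Note: per this issue, uuid_kill may report OK without actually
# removing a stuck channel.
MAX_AGE=14400
NOW=$(date +%s)
fs_cli -x "show channels" |
  awk -F, -v now="$NOW" -v max="$MAX_AGE" \
    '$4 ~ /^[0-9]+$/ && (now - $4) > max { print $1 }' |
  while read -r uuid; do
    fs_cli -x "uuid_kill $uuid"
  done
```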
Describe the bug
FreeSWITCH channels are getting stuck. When you restart or `fsctl crash` FreeSWITCH, the CDR that gets written says `INCOMPATIBLE_DESTINATION`. When I set my ITSP to only offer PCMU, everything works as expected.
When you do a uuid_kill on the channel, it says OK but doesn't actually kill it (remove it from the database). You need to restart FreeSWITCH to get it removed.
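For reference, the PCMU-only workaround can be approximated through the codec preference variables in conf/vars.xml (a sketch against the stock configuration; your sofia profiles may set codec prefs elsewhere):

```xml
<!-- Sketch: restrict offered/accepted codecs to PCMU.
     The stock sofia profiles reference these variables; run
     'reloadxml' and restart the profiles (or FreeSWITCH) to apply. -->
<X-PRE-PROCESS cmd="set" data="global_codec_prefs=PCMU"/>
<X-PRE-PROCESS cmd="set" data="outbound_codec_prefs=PCMU"/>
```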
To Reproduce
Steps to reproduce the behavior:
Expected behavior
FreeSWITCH should handle the transcoding and the calls should release normally when they are completed.
Package version or git hash
Trace logs
freeswitch.log shows a call that is still stuck.
freeswitch.log
Backtrace from the core file:
backtrace.log