DynamicSoundEffectInstance AccessViolationException #34
I really want to get this crash dump file to you guys, just in case you have trouble with the repro, so let's try a split archive file:
Ok, thanks a lot, I will check that.
I haven't heard back from you guys, so here's my analysis using WinDbg and SOSEX. There are two threads of interest. Thread 0 is waiting for the DynamicSoundEffectInstance worker task (a long-running thread) to complete (in DynamicSoundEffectInstance.DestroyImpl):
and thread 10 is the DynamicSoundEffectInstance worker thread that threw the AccessViolationException:
The faulting IP was at 0x5d33c6e3, which is near the end of XAudio2_7!XAUDIO2::CX2SourceVoice::SubmitSourceBuffer:
Here the plot thickens, because [EBP+8] was used throughout IXAudio2SourceVoice::SubmitSourceBuffer up to this point (on x86, [EBP+8] is the first stack argument, which for a COM-style method is the this pointer, i.e. the voice object itself). Looking more closely at thread 0 with the help of .NET Reflector, I suspect a race condition caused by DynamicSoundEffectInstance.DestroyImpl:

internal override void DestroyImpl()
{
    base.AudioEngine.UnregisterSound(this);
    base.DestroyImpl();
    Interlocked.Decrement(ref numberOfInstances);
    if (numberOfInstances == 0)
    {
        awakeWorkerThread.Set();
        if (!workerTask.Wait(500))
        {
            throw new AudioSystemInternalException("The DynamicSoundEffectInstance worker did not complete in allowed time.");
        }
        awakeWorkerThread.Dispose();
    }
}

It calls base.DestroyImpl just before waiting for the DynamicSoundEffectInstance worker task to complete:

internal override void DestroyImpl()
{
    if (this.soundEffect != null)
    {
        this.soundEffect.UnregisterInstance(this);
    }
    this.PlatformSpecificDisposeImpl();
}

which calls PlatformSpecificDisposeImpl:

internal void PlatformSpecificDisposeImpl()
{
    if (this.SourceVoice != null)
    {
        this.SourceVoice.DestroyVoice();
        this.SourceVoice.Dispose();
    }
}

which destroys the SourceVoice.
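In other words, the native voice is torn down before the 500 ms wait for the worker, so a worker still inside SubmitSourceBuffer can touch freed memory. As a minimal sketch of one possible fix (reusing the member names from the decompiled code above; the actual fix may well differ, and it would also need to handle the non-final instance case, since the worker can be inside a callback for any live instance), the ordering can be reversed so the worker is drained first and the voice is destroyed last:

```csharp
// Sketch only: reorders the decompiled DestroyImpl so the worker task is
// waited on before base.DestroyImpl() tears down the native SourceVoice.
internal override void DestroyImpl()
{
    base.AudioEngine.UnregisterSound(this);
    Interlocked.Decrement(ref numberOfInstances);
    if (numberOfInstances == 0)
    {
        awakeWorkerThread.Set();
        if (!workerTask.Wait(500))
        {
            throw new AudioSystemInternalException("The DynamicSoundEffectInstance worker did not complete in allowed time.");
        }
        awakeWorkerThread.Dispose();
    }
    // Now no worker can be inside SubmitSourceBuffer on this voice.
    base.DestroyImpl();
}
```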
Thanks, looks very useful/helpful!
The worker thread is as follows:

private static void WorkerThread()
{
    while (true)
    {
        DynamicSoundEffectInstance instance;
        awakeWorkerThread.WaitOne();
        if (numberOfInstances == 0)
        {
            return;
        }
        while (instancesNeedingBuffer.TryDequeue(out instance))
        {
            if (!instance.IsDisposed && (instance.BufferNeeded != null))
            {
                instance.BufferNeeded(instance, EventArgs.Empty);
            }
        }
    }
}

and is normally signalled by CheckAndThrowBufferNeededEvent:

private void CheckAndThrowBufferNeededEvent()
{
    if ((this.internalPendingBufferCount <= 2) && (this.BufferNeeded != null))
    {
        instancesNeedingBuffer.Enqueue(this);
        awakeWorkerThread.Set();
    }
}

The signalling from DestroyImpl is only for shutdown. So, to test this theory, the suspected race condition can be flushed out by adding a sleep in just the right place; widening the timing window makes the unlucky interleaving far more likely. In this case, it looks like adding a small sleep to the BufferNeeded callback (to delay the worker thread from exiting) could do the trick. Indeed, I get a crash within just a few iterations after doing this...
Specifically, I changed ParadoxAudioService.OnDynamicSoundEffectBufferNeeded in the repro to:

private void OnDynamicSoundEffectBufferNeeded(object sender, EventArgs e) // audio thread
{
    Task.Delay(10).Wait();
    ...
}
The resulting crash dump is different:
Analysis using WinDbg and SOSEX: this time the DynamicSoundEffectInstance worker thread is not in the stack traces. The reason seems to be that it threw a NullReferenceException and exited. This was observed by the DynamicSoundEffectInstance worker task (via Wait) and rethrown from that context:
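That rethrow is standard Task Parallel Library behaviour: an exception escaping a task body is captured by the task and surfaces, wrapped in an AggregateException, on whichever thread later calls Wait. A minimal standalone demo of the mechanism (nothing here is engine code):

```csharp
// Standalone TPL demo: the worker's exception is captured by the task and
// rethrown, wrapped in an AggregateException, when another thread calls Wait.
using System;
using System.Threading.Tasks;

class TaskRethrowDemo
{
    static void Main()
    {
        Task worker = Task.Factory.StartNew(
            () => { throw new NullReferenceException("thrown on the worker thread"); },
            TaskCreationOptions.LongRunning);

        try
        {
            worker.Wait(500); // corresponds to workerTask.Wait(500) in DestroyImpl
        }
        catch (AggregateException ex)
        {
            Console.WriteLine(ex.InnerException); // the original NullReferenceException
        }
    }
}
```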
@xen2 No need to apologise, you guys have been very responsive about everything else and are awesome as far as I'm concerned! ;) I had some time, and I've had more than my share of experience dealing with nasty crashes, so hopefully this analysis helps others learn too.
Thanks, should be able to fix it easily with all this info :)
I have some more 'evidence'. I wanted to know more about the NullReferenceException on the DynamicSoundEffectInstance worker thread, and I noticed after the fact that ProcDump was indicating there was still an access violation exception; the only way to catch this in action was to catch first-chance exceptions. You can change the arguments passed to ProcDump in the Test-Process.ps1 script to:
But with the delay introduced above flushing out the crash within just a few iterations, it was easier to run the process directly under WinDbg. (Visual Studio doesn't quite cut it for debugging this sort of thing, which is a shame, because I prefer it to the arcane mysteries of WinDbg.) Quick and dirty analysis using WinDbg and SOSEX: both our suspect threads are in the stack traces when the debugger stops with a first-chance access violation exception at IP 0x05611186. Digging deeper into the worker thread, which takes a bit more effort, we find the exception is occurring in the middle of the JIT-generated code for SharpDX.XAudio2.SourceVoice.SubmitSourceBuffer:
i.e. just after the call to SharpDX.XAudio2.AudioBuffer.__MarshalTo.
Looking at the code with Reflector:

internal unsafe void SubmitSourceBuffer(AudioBuffer bufferRef, IntPtr bufferWMARef)
{
    AudioBuffer.__Native native = new AudioBuffer.__Native();
    bufferRef.__MarshalTo(ref native);
    Result result = (Result) **(((IntPtr*) base._nativePointer))[(int) (((IntPtr) 0x15) * sizeof(void*))](base._nativePointer, (IntPtr) &native, (void*) bufferWMARef);
    bufferRef.__MarshalFree(ref native);
    result.CheckError();
}

reveals the intent more clearly: it is preparing to jump through the function table for SharpDX.XAudio2.SourceVoice! More evidence that destroying the SourceVoice (on the other thread) is most likely the root cause.
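To unpack that mangled decompiler output: a COM-style object's first pointer-sized field points to its function table, and slot 0x15 (21) of IXAudio2SourceVoice is SubmitSourceBuffer. A hand-written sketch of the same dispatch (illustration only; this is not SharpDX source, and the helper name is made up):

```csharp
// Illustration only, not SharpDX source. Dispatches slot 0x15 (21) of an
// IXAudio2SourceVoice function table, i.e. SubmitSourceBuffer. If another
// thread has already called DestroyVoice, `voice` is dangling, and either
// pointer dereference below can raise the AccessViolationException seen here.
using System;
using System.Runtime.InteropServices;

static class VtableDispatch
{
    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    delegate int SubmitSourceBufferFn(IntPtr voice, IntPtr buffer, IntPtr bufferWma);

    static unsafe int SubmitViaFunctionTable(IntPtr voice, IntPtr buffer, IntPtr bufferWma)
    {
        IntPtr* vtable = *(IntPtr**)voice; // first field of the object: its function table
        var submit = (SubmitSourceBufferFn)Marshal.GetDelegateForFunctionPointer(
            vtable[0x15], typeof(SubmitSourceBufferFn));
        return submit(voice, buffer, bufferWma); // returns an HRESULT
    }
}
```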
Fixed in the newly released beta02.
Verified as fixed; I eyeballed the commit, and then, for extra peace of mind, put beta02 through 1000 iterations with my test script. Nice work!
Good :) thanks!
Following up from Paradox Answers.
I've managed to create a reasonably minimal repro project that exhibits the crash behaviour. Since this involves a race condition, I've also created a PowerShell script that I used to loop until the crash appears (and captures a crash dump). This can take a few thousand iterations, which in real time could take an hour or more.
You'll need the following files:
Then to repro the crash:
Since the crash happens during application shutdown, the script loops over the following:
The number of iterations is just the maximum. It will stop as soon as it repros the crash and generate a dump file in the same directory as the executable under test.
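The gist of such a harness, as a hypothetical C# sketch (the real harness is the PowerShell script Test-Process.ps1, which also hooks up ProcDump; the executable name and exit-code convention here are illustrative only):

```csharp
// Hypothetical sketch of the loop's logic; not the actual Test-Process.ps1.
using System.Diagnostics;

static class CrashLoop
{
    static bool RunUntilCrash(string exePath, int maxIterations)
    {
        for (int i = 0; i < maxIterations; i++)
        {
            using (var process = Process.Start(exePath)) // app plays audio, then shuts down
            {
                process.WaitForExit();
                if (process.ExitCode != 0)  // crashed during shutdown
                    return true;            // ProcDump leaves a .dmp next to the exe
            }
        }
        return false; // no repro within the iteration budget
    }
}
```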