
CEFSharp freezes on call to Dispose method. #2105

Closed
igor-eremyashev opened this issue Aug 4, 2017 · 14 comments

@igor-eremyashev
Contributor

  • What version of the product are you using?
    CefSharp v57.0.0 from NuGet.

  • What architecture x86 or x64?
    x64

  • On what operating system?
    Win10

  • Are you using WinForms, WPF or OffScreen?
    WPF

  • What steps will reproduce the problem?
    I have several ChromiumWebBrowser instances in my application. Sometimes, when I create new instances and dispose of old ones, the call to ChromiumWebBrowser.Dispose() hangs (a minimal sketch of this pattern is shown after this list).
    After that, the other instances crash with the following exceptions, repeated many times:
    Exception thrown: 'System.IO.PipeException' in System.ServiceModel.dll
    Exception thrown: 'System.ServiceModel.CommunicationException' in System.ServiceModel.dll
    If I remove the Dispose call, everything works fine and these exceptions never happen.
    This issue can be reproduced using the SpawnBrowsersWindow.xaml view from CefSharp.Wpf.Example.
    When I launch the sample with StartupUri set to this view and click the Test button, it creates up to 10 windows and then freezes at MethodRunnerQueue.cs:58.
    I assume the PipeExceptions are a side effect of this freeze, due to the WCF timeout.

  • What is the expected output? What do you see instead?
    The call to Dispose() should complete in a reasonable amount of time or throw an exception if something went wrong. For now it looks like a deadlock.

  • Please provide any additional information below.
    The CEF log doesn't contain anything but regular web browser console output.
    I can reproduce this issue using the latest master branch code, v57.0.0 from NuGet and v51.0.0 (I switched to v57 to check if the issue still exists).

cef_binary_3.2987.1601.
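A minimal sketch of the pattern that triggers the hang (illustrative only; the actual repro is SpawnBrowsersWindow.xaml in CefSharp.Wpf.Example, and 'browserHost' is a hypothetical Panel in the window):

var browser = new CefSharp.Wpf.ChromiumWebBrowser { Address = "https://www.google.com" };
browserHost.Children.Add(browser);   //create a new instance on the UI thread
// ... later, when replacing the instance:
browserHost.Children.Remove(browser);
browser.Dispose();                   //occasionally never returns (the reported deadlock)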

@valhentai

I have exactly the same problem, same exceptions. I use version 55 from NuGet.
Every time I call Dispose on a ChromiumWebBrowser instance, my application freezes for 2-3 seconds.
I tried to call Dispose on a background thread to avoid freezing the UI, but Dispose needs to be called on the thread that created the ChromiumWebBrowser.
In addition, not only is it slow, but it also does not dispose completely. The CefSharp.BrowserSubProcess processes created by each ChromiumWebBrowser instance are still visible in the task manager. They only disappear when the application is closed. After a while this slows down the whole application if the user keeps opening and closing tabs in the web browser.

@Clemani

Clemani commented Aug 25, 2017

Already reported, and it seems it is still not solved.
See #1304

@merceyz
Member

merceyz commented Aug 26, 2017

Duplicate of #1304

@merceyz merceyz marked this as a duplicate of #1304 Aug 26, 2017
@merceyz merceyz closed this as completed Aug 26, 2017
@igor-eremyashev
Contributor Author

igor-eremyashev commented Aug 29, 2017

@merceyz I'm not sure the issue is a duplicate. #1304 is related to WCF timeouts, but this issue can be reproduced even if CefSharpSettings.WcfEnabled is set to false. It looks more like a deadlock that occurs at MethodRunnerQueue.cs:58:

public void Stop()
{
	if (!running)
	{
		return;
	}

	lock (lockObject)
	{
		if (running)
		{
			cancellationTokenSource.Cancel();
			stopped.WaitOne(); //the reported freeze: waits here forever (MethodRunnerQueue.cs:58)
			//clear the queue
			while (queue.Count > 0)
			{
				queue.Take();
			}
			cancellationTokenSource = null;
			running = false;
		}
	}
}

@Clemani

Clemani commented Aug 29, 2017

Do you call RegisterJsObject? This sets WcfEnabled = true.
As a workaround you can try setting WcfTimeout to a very small value before calling Dispose (see the sketch below).
Oh, I see now that you use WPF... maybe it really is another issue.
Maybe you are calling functions on the wrong thread; always use InvokeAsync.... just some quick ideas.
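A sketch of this workaround, assuming CefSharpSettings.WcfTimeout is the TimeSpan CefSharp waits on when tearing down the WCF channel during Dispose:

CefSharpSettings.WcfTimeout = TimeSpan.FromMilliseconds(100); //don't wait long for the WCF channel
browser.Dispose(); //'browser' is the ChromiumWebBrowser instance being replaced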

@valhentai

For me, using RegisterAsyncJsObject instead of RegisterJsObject and setting WcfEnabled to false solved the freeze problem (see the sketch below).

But I still have the problem with CefSharp.BrowserSubProcess not being closed. And if a YouTube video was playing, the sound keeps running after the dispose. Exception thrown: 'System.OperationCanceledException' in System.dll is logged in the console during the dispose.
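A sketch of that setup; MyBoundObject is a hypothetical class, and as far as I recall both settings and the registration have to happen before the underlying CEF browser is initialized:

CefSharpSettings.WcfEnabled = false; //set before Cef.Initialize; no WCF channel to tear down on Dispose
browser.RegisterAsyncJsObject("boundAsync", new MyBoundObject()); //async binding does not need WCF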

@igor-eremyashev
Contributor Author

@Clemani No, I don't call RegisterJsObject. I use SpawnBrowsersWindow.xaml from CefSharp.Wpf.Example. The code doesn't use any threads other than the UI thread; all it does is create and dispose browser objects. I had an issue with the WCF timeout before, and reducing the timeout solved it, so I think this is a different issue. Could someone launch the sample (SpawnBrowsersWindow.xaml) to make sure it can be reproduced, and reopen the issue if so?

@igor-eremyashev
Contributor Author

@merceyz Could you reopen this issue? It is not a duplicate, it is just a deadlock in MethodRunnerQueue.
Stop waits for the 'stopped' event. The problem is that the same cancellation token passed to the ConsumeTasks method is also passed to Task.Factory.StartNew. If Stop is called quickly enough, the task is cancelled before it ever starts, and the event Stop waits for is never set.
Line MethodRunnerQueue:43 should be
Task.Factory.StartNew(ConsumeTasks, CancellationToken.None, TaskCreationOptions.LongRunning, TaskScheduler.Default);
to avoid the issue.
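To make the race concrete, here is a reduced sketch of the pattern being described; the quoted Stop() above is real, but the Start/ConsumeTasks below are reconstructed assumptions, not the actual CefSharp source:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class MethodRunnerQueueSketch
{
	private readonly object lockObject = new object();
	private readonly BlockingCollection<Action> queue = new BlockingCollection<Action>();
	private readonly ManualResetEvent stopped = new ManualResetEvent(false);
	private CancellationTokenSource cancellationTokenSource;
	private volatile bool running;

	public void Start()
	{
		lock (lockObject)
		{
			if (!running)
			{
				cancellationTokenSource = new CancellationTokenSource();

				//The race: if cancellationTokenSource.Token is passed here and Stop()
				//cancels it before the scheduler ever runs the task, ConsumeTasks never
				//executes, 'stopped' is never set and Stop() blocks forever on WaitOne().
				//Passing CancellationToken.None (the suggested fix) guarantees the task
				//starts and always reaches the finally block that signals 'stopped'.
				Task.Factory.StartNew(ConsumeTasks, CancellationToken.None,
					TaskCreationOptions.LongRunning, TaskScheduler.Default);

				running = true;
			}
		}
	}

	private void ConsumeTasks()
	{
		try
		{
			var token = cancellationTokenSource.Token;
			while (!token.IsCancellationRequested)
			{
				var work = queue.Take(token); //throws OperationCanceledException on Cancel()
				work();
			}
		}
		catch (OperationCanceledException)
		{
			//expected when Stop() cancels the token
		}
		finally
		{
			stopped.Set(); //lets Stop() return
		}
	}
}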

@merceyz
Member

merceyz commented Aug 30, 2017

I'll re-open but your solution isn't valid

@merceyz merceyz reopened this Aug 30, 2017
@igor-eremyashev
Contributor Author

@merceyz Could you explain why the solution is not valid? An alternative approach would be to create the 'stopped' event in the signalled state and reset it at the beginning of the try block in the ConsumeTasks method. That would be more optimal, as in some cases we could avoid scheduling the task at all, but I think the first solution would do its work too.
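Sketched relative to the class above (again an assumption, not the actual source): 'stopped' starts signalled and is only reset once the consumer loop is actually running, so Stop() can never end up waiting on an event that will never be set.

	private readonly ManualResetEvent stopped = new ManualResetEvent(true); //initially signalled

	private void ConsumeTasks()
	{
		try
		{
			stopped.Reset(); //unsignalled only while the loop is really running
			var token = cancellationTokenSource.Token;
			while (!token.IsCancellationRequested)
			{
				queue.Take(token)();
			}
		}
		catch (OperationCanceledException)
		{
			//expected when Stop() cancels the token
		}
		finally
		{
			stopped.Set();
		}
	}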

@merceyz
Member

merceyz commented Aug 30, 2017

Looked closer into it now and your first suggestion would fix the issue, but it might be incomplete.
I don't know if it can actually happen, but it looks like there could be a race condition in Start and Stop with the checks outside of the lock. I don't see any harm in removing them, thoughts?

Would you like to make a PR for this or should I take care of it?

@merceyz merceyz removed the duplicate label Aug 30, 2017
@igor-eremyashev
Contributor Author

The Start implementation looks like a correct implementation of double-checked locking. I can see an issue with Stop: if a context switch occurs between Task.Factory.StartNew and running = true, then Stop just exits without doing any cleanup. Removing the check outside the lock in Stop solves this problem (see the sketch below). I'll prepare a PR for this if you don't mind.
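For completeness, the Stop() quoted above with the outer check removed; only the fast-path return is gone, the body inside the lock is unchanged:

public void Stop()
{
	//no '!running' fast path: checking it outside the lock could observe the window
	//between Task.Factory.StartNew and 'running = true' and skip the cleanup entirely
	lock (lockObject)
	{
		if (running)
		{
			cancellationTokenSource.Cancel();
			stopped.WaitOne();
			//clear the queue
			while (queue.Count > 0)
			{
				queue.Take();
			}
			cancellationTokenSource = null;
			running = false;
		}
	}
}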

@merceyz
Member

merceyz commented Aug 30, 2017

I don't mind at all, though I would prefer it if the checks were removed from both functions. There is no harm in removing them.

@merceyz merceyz added this to the 59.0.0 milestone Aug 30, 2017
@amaitland
Member

An alternative approach would be to create the 'stopped' event in the signalled state and reset it at the beginning of the try block in the ConsumeTasks method. That would be more optimal, as in some cases we could avoid scheduling the task at all

This would be the better solution to the problem; at the moment the Task is created unnecessarily.

@amaitland amaitland modified the milestones: 59.0.0, 62.0.0 Dec 13, 2017