[EventAggregator] Memory problem with EventAggregator and never published message #1505
If the list of subscribers isn't being reduced, it's because the subscriber parent is still in memory and not collected. We have tests that validate that subscribers that are collected are automatically unsubscribed. This type of behavior normally points to an issue with the application code and objects remaining in memory, unless you can provide a memory analyzer snapshot that shows the EA being the object that roots them. |
Thanks for your answer, but I do not have any additional code beyond what I have shown above. To prove it is a bug, just execute the following sample code. The output is consistent: SubscriptionList is the number of entries within the EventBase.
So the created instances are all deleted, but the internal subscription list keeps growing. |
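The sample code referred to above is not included in this extract; a minimal sketch of the kind of repro being described (the `Subscriber` class and `OnMessage` handler names are illustrative, not the reporter's actual code) might look like:

```csharp
using System;
using Prism.Events;

class Subscriber
{
    public void OnMessage() { }
}

class Program
{
    static void Main()
    {
        var ea = new EventAggregator();
        var evt = ea.GetEvent<PubSubEvent>();

        for (int i = 0; i < 10; i++)
        {
            // Each subscriber goes out of scope right after subscribing.
            var s = new Subscriber();
            evt.Subscribe(s.OnMessage);
        }

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // The Subscriber instances are collectable, but without ever calling
        // Publish() the event's internal subscription list is never pruned,
        // so it keeps growing.
    }
}
```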
Have you tried having a proper class for the event? |
Adapted the sample code: https://gist.github.com/softwaretirol/9ba839671a4ceeb1c81fe0e7b1907dc8 Completely the same behavior, but I was expecting that it wouldn't change, because internally a Dictionary<Type, EventBase> is held, and in my view it makes no difference whether a derived type is used here. |
I'll try to look at this when I find time. However, our unit tests show the EA working properly, and you have not provided a running application with memory snapshots to support your issue. This means I will have to recreate everything and create my own snapshots, which will take more time. In the meantime, Prism is open source, so if you have time it would be great if you could look through the source and see what you find. Maybe even submit a fix if you find one :) |
The list in question contains instances of EventSubscription, but it gets pruned only when Publish is called. So @brianlagunas, you are right that the subscribers are not held, but the EventSubscription instances are, as @softwaretirol said. Maybe that can help you. |
Hello @brianlagunas, I have created a sample WPF app which demonstrates the unexpected behaviour. Steps: run the app and subscribe without ever publishing; the memory increases, and you need only your Task Manager to verify my observation. Regards, |
Thanks @FamularoA. In my experience the Task Manager is not a good indicator of memory leaks. I will use a proper memory analyzer. |
A big thank you to @FamularoA for the sample. I now fully understand the issue. I guess when this was first written, no one considered that there would be a scenario with so many constant registrations without ever publishing a message. Like @softwaretirol said, it's "very rare". I see a PR has already been submitted for this issue. Great job @softwaretirol. Your contribution is greatly appreciated. Good find! |
#1505 [EventAggregator] Memory problem with EventAggregator and never…
While I agree with something like this needing to happen, I just updated from
|
Can you measure the difference and provide the numbers? We may need to think of another approach. Maybe a method that must be manually called in the rare scenario where you have constant subscriptions and no messages being published. |
Sure thing. I can work on a better benchmark test (i.e. BenchmarkDotNet), but since that runs a long time, here is a quick summary:

var ea = new EventAggregator();
var @event = ea.GetEvent<PubSubEvent>();
var watch = Stopwatch.StartNew();
for (int i = 0; i < 10000; i++)
    @event.Subscribe(OnMessage);
var elapsed = watch.Elapsed;

EDIT: I discovered this issue in a UWP app, but those numbers are from a .NET Core 2.1 console app. Runtime on the long run also doesn't show a lot of GC activity; the GC was showing up in the UWP profiler. |
WOW! That is significant! This needs to be reverted |
@softwaretirol we need an alternate approach as the current solution has introduced a massive performance issue. |
Hey folks, that sounds very bad; see #1646 for a fix. @brianlagunas, what is your opinion of this solution? I really do not like any time-based solution, but in this case it might be an easy way to catch both situations. The internal behavior changed a little: it checks when the last prune was run, and if that was more than 1 minute ago, it prunes again. |
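The time-based idea described above can be sketched roughly as follows (this is illustrative only, not the actual PR #1646 code; the class and field names are made up):

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: prune dead subscriptions at most once per minute,
// checked on every Subscribe instead of only on Publish.
class TimedPruneList
{
    private readonly List<WeakReference> _subscriptions = new List<WeakReference>();
    private DateTime _lastPrune = DateTime.UtcNow;
    private static readonly TimeSpan PruneInterval = TimeSpan.FromMinutes(1);

    public void Add(object subscriber)
    {
        if (DateTime.UtcNow - _lastPrune > PruneInterval)
        {
            // Full scan, but it runs at most once per interval,
            // so a tight Subscribe loop no longer pays O(n) per insert.
            _subscriptions.RemoveAll(r => !r.IsAlive);
            _lastPrune = DateTime.UtcNow;
        }
        _subscriptions.Add(new WeakReference(subscriber));
    }

    public int Count => _subscriptions.Count;
}
```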
The reason for the tremendous loss of performance is that subscribing to an event in a loop becomes O(n²) instead of O(n): each time a subscriber is added, the whole list is inspected for dead subscribers. |
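To illustrate the quadratic cost (the `PrunedList` class below is a hypothetical stand-in, not the Prism type): if every insert first scans the whole list, n subscribes do roughly n²/2 inspections in total:

```csharp
using System;
using System.Collections.Generic;

class PrunedList
{
    private readonly List<WeakReference> _items = new List<WeakReference>();
    public long Scans; // total number of entries inspected across all inserts

    public void Add(object subscriber)
    {
        // Prune-on-subscribe: inspect every existing entry before inserting.
        for (int i = _items.Count - 1; i >= 0; i--)
        {
            Scans++;
            if (!_items[i].IsAlive) _items.RemoveAt(i);
        }
        _items.Add(new WeakReference(subscriber));
    }
}

class Program
{
    static void Main()
    {
        var list = new PrunedList();
        var keep = new object[10000]; // keep subscribers alive so nothing is pruned
        for (int i = 0; i < keep.Length; i++)
        {
            keep[i] = new object();
            list.Add(keep[i]);
        }
        // n*(n-1)/2 inspections for n inserts: 49,995,000 for n = 10,000.
        Console.WriteLine(list.Scans);
    }
}
```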
I really don't like the timed approach. Honestly, having massive registrations without a publish is more of an edge case. So I would prefer to provide a way to handle this that would have to be opted into. Maybe a public method that allows you to flush the registrations manually. |
Me too. I pushed a better solution: for performance reasons I replaced the internal List with a LinkedList. At every publish/subscribe it gets cleaned up if necessary (which is simply an unlink). To avoid creating a delegate every time in "GetExecutionStrategy", the IsAlive property of the WeakReference is passed through. Oddly, the publish itself got faster than before, because the prune is faster with this solution. |
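The LinkedList idea can be sketched like this (illustrative, not the PR code): a dead node found while iterating can be unlinked in O(1), so pruning piggybacks on the walk that publish already does:

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: subscriptions held in a linked list, so removing a
// dead entry during iteration is a constant-time unlink (no list shifting).
class LinkedSubscriptions
{
    private readonly LinkedList<WeakReference> _subs = new LinkedList<WeakReference>();

    public void Add(object subscriber) => _subs.AddLast(new WeakReference(subscriber));

    public void ForEachAlive(Action<object> action)
    {
        var node = _subs.First;
        while (node != null)
        {
            var next = node.Next;           // capture before a possible unlink
            var target = node.Value.Target; // strong reference, or null if collected
            if (target == null)
                _subs.Remove(node);         // O(1) unlink of a dead subscription
            else
                action(target);
            node = next;
        }
    }

    public int Count => _subs.Count;
}
```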
That looks like an improvement to the performance, but it still loops through a (potentially) large list on inserts. What about something like:

abstract class EventBase
{
    // Subscribe checks this and calls Prune()
    protected virtual bool AutoPrune { get; } = false;
}

// User's event class that they know doesn't have many Publishes
class Rare : PubSubEvent
{
    protected override bool AutoPrune => true;
} |
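A self-contained sketch of how Subscribe might consult such an opt-in flag (the `EventBaseSketch` class is hypothetical; the RemoveAll call stands in for the real prune logic):

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: an EventBase-like class where Subscribe pays the
// prune cost only for events that opt in via AutoPrune.
abstract class EventBaseSketch
{
    protected readonly List<WeakReference> Subscriptions = new List<WeakReference>();

    protected virtual bool AutoPrune => false;

    public void Subscribe(object subscriber)
    {
        if (AutoPrune)
            Subscriptions.RemoveAll(r => !r.IsAlive); // scan only when opted in

        Subscriptions.Add(new WeakReference(subscriber));
    }
}

// An event the user knows is rarely published: opt in to prune on subscribe.
class RareEvent : EventBaseSketch
{
    protected override bool AutoPrune => true;
}
```

This keeps the common case (frequent publishes) at its original speed while giving rarely-published events a way to bound their subscription list.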
For publishing, the iteration is needed anyway, so this affects only the subscribing performance.

var eventAggregator = new EventAggregator();
var watch = Stopwatch.StartNew();
for (int i = 0; i < 100000; i++)
{
    eventAggregator.GetEvent<PubSubEvent>().Subscribe(DoSomething);
}
for (int i = 0; i < 100000; i++)
{
    eventAggregator.GetEvent<PubSubEvent>().Publish();
}
Console.WriteLine(watch.Elapsed);
Console.ReadLine();

Needs 00:00:00.3489100 on my machine. Looks quite fine if you ask me. |
I'm actually leaning towards @adamhewitt627's suggestion. Very simple, and opt-in for specific events that have a large number of subscriptions. |
It could probably use a better name, since it would still prune automatically on Publish. I also got to wondering if the new runtime events API could be of value here. (given multi-targeting, that is) Correct me if I'm wrong, but #1646 is only faster because it's using I'm picturing 3 eventual changes:
|
Yes, that is crucial! In my opinion I would stick to PR #1646; it is fast and covers both situations very well now. But the decision is not mine 😄 . @adamhewitt627 What I would improve with your PR #1647:
|
Hi guys, as @softwaretirol explained about the fire & forget issue, you can take a look at https://github.com/brminnick/AsyncAwaitBestPractices. It might solve the issue. |
I have reverted the changes and have added a |
Description
We have an EventAggregator in production and see a huge amount of memory being used over a longer runtime (> 1 day). We have looked into the issue with a profiler and see many, many DelegateReference instances (coming from PubSubEvent) lying around and eating up ~200 MB of data.
We looked at the source code and, as far as we can tell, the method "PruneAndReturnStrategies" is responsible for clearing up the subscription list of the EventBase. This method is only called when someone publishes a message of type T.
Because in our case it is very rare that a message is published, but many subscribers subscribe to the event, we end up with lots of dead delegates in this subscription list.
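The publish-time pruning described above can be sketched like this (illustrative only; this is not the actual body of "PruneAndReturnStrategies", and the class name is made up):

```csharp
using System;
using System.Collections.Generic;

// Illustrative: dead weak references are removed only on publish,
// so with no Publish the subscription list grows without bound.
class SubscriptionList
{
    private readonly List<WeakReference> _subscriptions = new List<WeakReference>();

    public int Count => _subscriptions.Count;

    public void Subscribe(object subscriber) =>
        _subscriptions.Add(new WeakReference(subscriber));

    // Called from Publish: collect the live subscribers and drop the dead ones.
    public List<object> PruneAndReturnAliveSubscribers()
    {
        var alive = new List<object>();
        for (int i = _subscriptions.Count - 1; i >= 0; i--)
        {
            var target = _subscriptions[i].Target; // null once collected
            if (target == null)
                _subscriptions.RemoveAt(i); // cleanup happens only here
            else
                alive.Add(target);
        }
        return alive;
    }
}
```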
Steps to Reproduce
Expected Behavior
Actual Behavior
Basic Information