It seems that Sinks.many().multicast() keeps subscribers even after their subscriptions are cancelled, when the subscription.dispose() calls execute concurrently.
We hit memory leak issue #3001, so we upgraded to 3.4.17, which helped considerably. However, there is still a memory leak in another emitter, SinkManySerialized. Here is a test which reproduces the issue:
The test passes with a single-thread executor and is more likely to fail with 2-5 threads. Either way, the expectation is that no subscribers remain, even when subscription.dispose() executes concurrently.
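The original test snippet is not included above. As a stdlib-only illustration of the race it exercises (BuggyRemoveDemo and removeOnce are hypothetical names, not Reactor APIs): if the subscriber array is updated with a single, non-retried compareAndSet, a removal that loses the race is silently dropped, so the cancelled subscriber stays referenced.

```java
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical stand-in for an emitter's internal subscriber array.
public class BuggyRemoveDemo {
    static final AtomicReference<String[]> SUBS =
            new AtomicReference<>(new String[] { "A", "B" });

    // Pre-fix removal shape: one CAS attempt, no retry on failure.
    static void removeOnce(String s, Runnable betweenReadAndCas) {
        String[] cur = SUBS.get();
        String[] next = Arrays.stream(cur)
                .filter(x -> !x.equals(s))
                .toArray(String[]::new);
        betweenReadAndCas.run();       // simulate a concurrent removal here
        SUBS.compareAndSet(cur, next); // fails silently if the array changed
    }

    public static String[] run() {
        // While "A" is being removed, "B" is removed concurrently.
        removeOnce("A", () -> removeOnce("B", () -> {}));
        return SUBS.get(); // "A" is still referenced: the leak
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(run())); // prints [A]
    }
}
```

The inner removal of "B" swaps the array first, so the outer CAS for "A" fails and, with no retry, "A" is retained forever.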
Workaround
As a temporary solution, we had to perform all subscription.dispose() calls on a single-thread executor.
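A minimal sketch of that workaround, assuming nothing Reactor-specific (Disposable here is a local stub, not reactor.core.Disposable): funneling every dispose() through one single-thread executor serializes the removals, so no two of them can race on the subscriber array.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SerializedDisposeDemo {
    interface Disposable { void dispose(); } // local stub, not Reactor's

    public static int run(int subscriptions) throws InterruptedException {
        AtomicInteger live = new AtomicInteger(subscriptions);
        ExecutorService disposer = Executors.newSingleThreadExecutor();
        for (int i = 0; i < subscriptions; i++) {
            Disposable d = live::decrementAndGet;
            // Every dispose() runs sequentially on the same thread,
            // so concurrent-removal CAS failures cannot occur.
            disposer.submit(d::dispose);
        }
        disposer.shutdown();
        disposer.awaitTermination(5, TimeUnit.SECONDS);
        return live.get(); // remaining "subscribers"
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(100)); // prints 0
    }
}
```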
Environment
Reactor version: 3.4.17
Spring WebFlux, Netty
JVM: Corretto 17
wow, this bug has been around since April 2017 😱 in case of contention during removal of inners (i.e. subscriber cancellation, and to a lesser extent subscriber completion), when the compareAndSet(oldSubscriberArray, reducedSubscriberArray) fails, EmitterProcessor's implementation doesn't loop back...
The EmitterProcessor#remove method causes subscribers to be retained
when removal happens in parallel, because a CAS failure does not
trigger a new loop iteration.
This applies to direct instantiations of EmitterProcessor as well as
to Sinks.many().multicast().onBackpressureBuffer() sinks.
This commit fixes the method to loop back when the CAS fails.
Fixes #3028.
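The fix can be sketched in the same stdlib-only style as a retry loop: on CAS failure, re-read the array and try again (LoopingRemoveDemo and removeLooping are illustrative names, not the actual EmitterProcessor code).

```java
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

public class LoopingRemoveDemo {
    static final AtomicReference<String[]> SUBS =
            new AtomicReference<>(new String[] { "A", "B" });

    // Post-fix shape: loop until the CAS succeeds or the element is gone.
    static void removeLooping(String s, Runnable betweenReadAndCas) {
        for (;;) {
            String[] cur = SUBS.get();
            String[] next = Arrays.stream(cur)
                    .filter(x -> !x.equals(s))
                    .toArray(String[]::new);
            if (next.length == cur.length) return; // already removed
            betweenReadAndCas.run();               // simulate contention
            if (SUBS.compareAndSet(cur, next)) return;
            // CAS failed: another thread changed the array; retry.
        }
    }

    public static int run() {
        AtomicBoolean raced = new AtomicBoolean();
        // Inject one concurrent removal of "B" on the first attempt only.
        removeLooping("A", () -> {
            if (raced.compareAndSet(false, true)) {
                removeLooping("B", () -> {});
            }
        });
        return SUBS.get().length; // 0: nothing is retained
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 0
    }
}
```

With the same interleaving that leaked before, the first CAS for "A" still fails, but the loop re-reads the shrunken array and the second attempt succeeds, leaving no stale subscriber behind.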