
EmitterProcessor: cyclical LockSupport.parkNanos(10) gives CPU load 100% #2049

Closed
mayras opened this issue Feb 16, 2020 · 5 comments

Labels: help wanted, status/need-investigation, type/bug
Milestone: 3.4.0-M2
Comments

@mayras

mayras commented Feb 16, 2020

EmitterProcessor has the following code inside onNext(), at line 266:

		while (!q.offer(t)) {
			LockSupport.parkNanos(10);
		}
		drain();

In some cases this while loop executes forever and drives CPU usage close to 100%: LockSupport.parkNanos(10) is called without pause until a subscriber is attached. Every thread that invoked onNext() stays stuck in the while loop, because the queue is full and nothing drains it until a new subscriber arrives. If you look inside drain(), you will see that queue.poll() is only executed when at least one subscriber is attached.
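
A minimal sketch of how to reproduce the situation described above (assuming the default buffer size of 256 from Queues.SMALL_BUFFER_SIZE; the exact capacity may vary, and class name is illustrative):

    import reactor.core.publisher.EmitterProcessor;

    public class EmitterProcessorSpinRepro {
        public static void main(String[] args) {
            // No Subscriber is ever attached, so drain() never polls the queue.
            EmitterProcessor<Integer> processor = EmitterProcessor.create();

            for (int i = 0; i < 1_000; i++) {
                // Once the internal queue is full, offer() keeps failing and
                // onNext() spins in the parkNanos(10) loop, pinning a CPU core.
                processor.onNext(i);
            }
        }
    }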

Possible Solution

I am not sure about the right solution, but one option could be a partial drain of the queue when it is full.

Your Environment

  • Reactor version(s) used: 3.3.2.RELEASE
  • JVM version (java -version): 11.0.6
  • OS and version (e.g. uname -a): Windows 10 Pro, version 1909
@simonbasle
Member

@smaldini I need your insight on that one, but IIRC the intention is that filling up the queue is only ever supposed to be a temporary situation. You're not supposed to use an EmitterProcessor to feed a ton of data without connecting at least one Subscriber in a timely fashion.
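
A minimal sketch of the intended usage described above, i.e. attaching a Subscriber before pushing data so that drain() keeps emptying the queue (class name is illustrative):

    import reactor.core.publisher.EmitterProcessor;

    public class EmitterProcessorIntendedUsage {
        public static void main(String[] args) {
            EmitterProcessor<Integer> processor = EmitterProcessor.create();

            // Subscriber attached up front: drain() empties the queue as values arrive,
            // so onNext() never hits the full-queue spin loop.
            processor.subscribe(value -> System.out.println("received " + value));

            for (int i = 0; i < 1_000; i++) {
                processor.onNext(i);
            }
            processor.onComplete();
        }
    }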

@osi
Contributor

osi commented Apr 6, 2020

I hit a related problem with this construct: while trying to write a test for the EmitterProcessor queue being full, I realized that it ignores thread interruption while in this hot loop (which means I can't use JUnit's @Timeout to guard against the hang).
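
A sketch of the kind of test this refers to (JUnit 5 annotations; class name and buffer size are illustrative). The timeout interrupts the test thread, but parkNanos() returns on interrupt without throwing and the loop never checks the interrupt status, so the test keeps spinning instead of failing:

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.Timeout;
    import reactor.core.publisher.EmitterProcessor;

    class EmitterProcessorFullQueueTest {

        @Test
        @Timeout(5) // expires, but the spinning onNext() thread ignores the interrupt
        void onNextHangsWhenQueueIsFullAndNoSubscriber() {
            EmitterProcessor<Integer> processor = EmitterProcessor.create(16);
            for (int i = 0; i < 32; i++) {
                processor.onNext(i); // hangs once the 16-slot queue is full
            }
        }
    }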

@simonbasle simonbasle added the help wanted, status/need-investigation, and type/bug labels Apr 9, 2020
@simonbasle simonbasle added this to the Backlog milestone Apr 9, 2020
@smaldini
Contributor

@simonbasle you are correct. Maybe we should be explicit about this behavior when building the processor, or provide a specific API such as "whenSubscriber(Consumer xxx)" and remove this behavior altogether in a future version. This is also a problem of over-producing.

@mayras
Author

mayras commented Jul 15, 2020

Guys, can we fix this issue? It's really painful and happens everywhere: 100% CPU. I don't know how people use Reactor in production; it happens even with simple reactor-logback:

[screenshot attached]

@simonbasle
Member

@smaldini will you be able to take that one?

@simonbasle simonbasle modified the milestones: Backlog, 3.4.0-M2 Aug 5, 2020
simonbasle added a commit to smaldini/reactor-core that referenced this issue Aug 5, 2020
This commit replaces the wait-loop in 10ns increments with a fail-fast
which returns Emission.FAIL_OVERFLOW if the backpressure queue is full.

Reviewed-in: reactor#2218
Co-authored-by: Stephane Maldini <smaldini@netflix.com>
Co-authored-by: Simon Baslé <sbasle@vmware.com>
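
For reference, this fail-fast shape is what the 3.4 Sinks API (which later superseded EmitterProcessor) exposes to callers; a minimal sketch, with an illustrative buffer size and handling strategy:

    import reactor.core.publisher.Sinks;

    public class FailFastEmitSketch {
        public static void main(String[] args) {
            Sinks.Many<Integer> sink = Sinks.many().multicast().onBackpressureBuffer(16);

            for (int i = 0; i < 32; i++) {
                Sinks.EmitResult result = sink.tryEmitNext(i);
                if (result == Sinks.EmitResult.FAIL_OVERFLOW) {
                    // No subscriber is draining and the buffer is full: the caller
                    // can drop, log, retry later, or signal an error,
                    // instead of busy-waiting at 100% CPU.
                    System.out.println("dropped " + i + " (buffer full)");
                }
            }
        }
    }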