
OutOfMemoryError getting swallowed by SerializingExecutor #2270

Closed
biran0079 opened this issue Sep 14, 2016 · 10 comments

@biran0079

My application throws an OutOfMemoryError, which gets swallowed by the following code:
https://github.com/grpc/grpc-java/blob/master/core/src/main/java/io/grpc/internal/SerializingExecutor.java#L156

I expect to handle the OutOfMemoryError in my default uncaught exception handler, but that is not happening.

Is this a bug?
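For context, the handler is registered roughly like this (a minimal sketch, not the exact code from my application):

```java
// Minimal sketch of the default uncaught exception handler setup (assumed code,
// not the application's actual sources). Thread.setDefaultUncaughtExceptionHandler
// is the standard JDK API for this.
public class Main {
  public static void main(String[] args) {
    Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
      System.err.println("Uncaught on " + thread.getName() + ": " + throwable);
      if (throwable instanceof OutOfMemoryError) {
        // Fail fast so the process can be restarted cleanly.
        Runtime.getRuntime().halt(1);
      }
    });
    // ... start the gRPC server and block ...
  }
}
```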

@carl-mastrangelo
Contributor

Can you please post the stack trace that was logged?

@biran0079
Author

Stacktrace follows.

ERROR [2016-09-14 19:08:02,822] io.grpc.internal.SerializingExecutor: Exception while executing runnable io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$4@36f7d68a [req=null firm=null cred=null ip=null]
! java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
! at io.grpc.internal.ServerCallImpl.sendMessage(ServerCallImpl.java:164) ~[grpc-core.jar:0.14.0]
! at io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl.onNext(ServerCalls.java:279) ~[grpc-stub.jar:0.14.0]
! at io.grpc.stub.ServerCalls$1$1.onReady(ServerCalls.java:175) ~[grpc-stub.jar:0.14.0]
! at io.grpc.PartialForwardingServerCallListener.onReady(PartialForwardingServerCallListener.java:63) ~[grpc-core.jar:0.14.0]
! at io.grpc.ForwardingServerCallListener.onReady(ForwardingServerCallListener.java:38) ~[grpc-core.jar:0.14.0]
! at io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onReady(ForwardingServerCallListener.java:55) ~[grpc-core.jar:0.14.0]
! at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.onReady(ServerCallImpl.java:283) ~[grpc-core.jar:0.14.0]
! at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$4.runInContext(ServerImpl.java:497) ~[grpc-core.jar:0.14.0]
! at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:54) ~[grpc-core.jar:0.14.0]
! at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[grpc-core.jar:0.14.0]
! at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60]
! at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
! at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
! Caused by: java.lang.OutOfMemoryError: Java heap space
! at io.netty.util.internal.ConcurrentCircularArrayQueue.<init>(ConcurrentCircularArrayQueue.java:79) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.util.internal.MpscArrayQueueL1Pad.<init>(MpscArrayQueue.java:243) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.util.internal.MpscArrayQueueTailField.<init>(MpscArrayQueue.java:261) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.util.internal.MpscArrayQueueMidPad.<init>(MpscArrayQueue.java:278) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.util.internal.MpscArrayQueueHeadCacheField.<init>(MpscArrayQueue.java:286) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.util.internal.MpscArrayQueueL2Pad.<init>(MpscArrayQueue.java:303) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.util.internal.MpscArrayQueueConsumerField.<init>(MpscArrayQueue.java:320) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.util.internal.MpscArrayQueue.<init>(MpscArrayQueue.java:49) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.util.internal.PlatformDependent.newFixedMpscQueue(PlatformDependent.java:630) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.buffer.PoolThreadCache$MemoryRegionCache.<init>(PoolThreadCache.java:373) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.buffer.PoolThreadCache$SubPageMemoryRegionCache.<init>(PoolThreadCache.java:340) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.buffer.PoolThreadCache.createSubPageCaches(PoolThreadCache.java:135) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.buffer.PoolThreadCache.<init>(PoolThreadCache.java:86) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.buffer.PooledByteBufAllocator$PoolThreadLocalCache.initialValue(PooledByteBufAllocator.java:352) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.buffer.PooledByteBufAllocator$PoolThreadLocalCache.initialValue(PooledByteBufAllocator.java:345) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.util.concurrent.FastThreadLocal.initialize(FastThreadLocal.java:155) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.util.concurrent.FastThreadLocal.get(FastThreadLocal.java:149) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.util.concurrent.FastThreadLocal.get(FastThreadLocal.java:135) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:257) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179) ~[netty-all.jar:4.1.0.CR7]
! at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:115) ~[netty-all.jar:4.1.0.CR7]
! at io.grpc.netty.NettyWritableBufferAllocator.allocate(NettyWritableBufferAllocator.java:66) ~[grpc-netty.jar:0.14.0]
! at io.grpc.internal.MessageFramer.writeKnownLength(MessageFramer.java:188) ~[grpc-core.jar:0.14.0]
! at io.grpc.internal.MessageFramer.writeUncompressed(MessageFramer.java:147) ~[grpc-core.jar:0.14.0]
! at io.grpc.internal.MessageFramer.writePayload(MessageFramer.java:124) ~[grpc-core.jar:0.14.0]
! at io.grpc.internal.AbstractStream.writeMessage(AbstractStream.java:172) ~[grpc-core.jar:0.14.0]
! at io.grpc.internal.AbstractServerStream.writeMessage(AbstractServerStream.java:110) ~[grpc-core.jar:0.14.0]
! at io.grpc.internal.ServerCallImpl.sendMessage(ServerCallImpl.java:157) ~[grpc-core.jar:0.14.0]

@carl-mastrangelo
Contributor

I think this is a bug. The ClientCall and ServerCall classes should propagate such exceptions back to the listener.

@ejona86
Member

ejona86 commented Sep 14, 2016

I don't think the line you linked to is the culprit, because it only catches RuntimeException. I think the wrapping is in ServerCallImpl instead. If the error had been propagated properly from that point, it would have propagated back to your application (which appears to have been sending a message at the time) and eventually passed back through SerializingExecutor, which would have propagated it to the Executor/Thread.
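To make the interaction concrete, here is a simplified sketch with stand-in code (not the actual grpc-java sources): once the send path wraps a Throwable in a RuntimeException, the executor's catch of RuntimeException swallows what started out as an Error, so it never reaches the thread's uncaught exception handler.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Simplified stand-ins for the classes involved (not the real grpc-java code).
public class SwallowedErrorSketch {
  private static final Logger log = Logger.getLogger("SerializingExecutorSketch");

  // Stand-in for the send path that wraps any failure, as the stack trace shows.
  static void sendMessage() {
    try {
      byte[] buffer = new byte[Integer.MAX_VALUE]; // provokes OutOfMemoryError
      buffer[0] = 1;
    } catch (Throwable t) {
      throw new RuntimeException(t); // the Error is now disguised as a RuntimeException
    }
  }

  public static void main(String[] args) {
    Thread.setDefaultUncaughtExceptionHandler(
        (t, e) -> System.err.println("uncaught on " + t.getName() + ": " + e));

    Runnable task = SwallowedErrorSketch::sendMessage;
    try {
      task.run();
    } catch (RuntimeException e) {
      // Only RuntimeException is caught here; a raw OutOfMemoryError would have
      // escaped to the uncaught exception handler above, but the wrapped one is
      // logged and swallowed instead.
      log.log(Level.SEVERE, "Exception while executing runnable " + task, e);
    }
  }
}
```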

@ejona86 ejona86 added the bug label Sep 14, 2016
@ejona86 ejona86 added this to the 1.1 milestone Sep 14, 2016
@ejona86 ejona86 added the TODO:backport PR needs to be backported. Removed after backport complete label Sep 14, 2016
@biran0079
Author

According to the stack trace, the OutOfMemoryError is somehow being wrapped in a RuntimeException. Not sure what is doing the wrapping, though.

@ejona86
Member

ejona86 commented Sep 14, 2016

The link I provided in my comment shows the wrapping in a RuntimeException.

@carl-mastrangelo
Contributor

Looking at this again, it seems like this can happen if the ServerCallStreamObserver has an onReady handler added to it, since that's the only place this stack trace could feasibly come from.

@biran0079 My guess is that you have an onReady handler added somewhere which is not catching exceptions. I am going to punt this to 1.2.

@carl-mastrangelo carl-mastrangelo modified the milestones: 1.2, 1.1 Jan 14, 2017
@biran0079
Author

You are right. I probably used an onReady handler to control the response stream so that I don't overwhelm the buffer pool.
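The pattern was roughly this (a sketch with a placeholder String payload and class name, not my actual service code); setOnReadyHandler and isReady are the grpc-stub flow-control hooks:

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;

// Sketch of onReady-based flow control on the server side (placeholder types;
// in the real service the payload would be a protobuf message, not String).
class FlowControlledResponder {
  void respond(List<String> replies, StreamObserver<String> responseObserver) {
    ServerCallStreamObserver<String> serverObserver =
        (ServerCallStreamObserver<String>) responseObserver;
    Iterator<String> remaining = replies.iterator();
    AtomicBoolean completed = new AtomicBoolean();
    serverObserver.setOnReadyHandler(() -> {
      // Runs on a gRPC callback thread. An exception thrown from onNext() here,
      // like the wrapped OutOfMemoryError in the stack trace above, surfaces in
      // SerializingExecutor rather than in the application's own code.
      while (serverObserver.isReady() && remaining.hasNext()) {
        serverObserver.onNext(remaining.next());
      }
      if (!remaining.hasNext() && completed.compareAndSet(false, true)) {
        serverObserver.onCompleted();
      }
    });
  }
}
```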

@carl-mastrangelo
Contributor

carl-mastrangelo commented Jan 14, 2017

The thing is, I don't see your handler in the stack trace. Did you remove it before pasting? If so, that's okay, but knowing for sure would make me more comfortable closing this bug.

@carl-mastrangelo carl-mastrangelo modified the milestones: Next, 1.2 Mar 16, 2017
@carl-mastrangelo
Contributor

I am closing this, but feel free to reopen if it is still happening.

@ejona86 ejona86 removed the TODO:backport PR needs to be backported. Removed after backport complete label Jun 29, 2017
@ejona86 ejona86 removed this from the Next milestone Jul 27, 2017
@lock lock bot locked as resolved and limited conversation to collaborators Sep 22, 2018