OutOfDirectMemoryError when modify response body #1404

Closed
wangjunhua123 opened this issue Nov 13, 2019 · 25 comments
@wangjunhua123

wangjunhua123 commented Nov 13, 2019

Hi,

I want to report a problem.

Spring Cloud Greenwich.SR2
Spring Boot 2.1.6.RELEASE

When modifying the response body, 16777216 bytes of direct memory are allocated but never released, which causes an OutOfDirectMemoryError.
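
For context, this path is hit by routes that use the modifyResponseBody filter. A minimal sketch of such a route with the Java DSL (the route id, path, URI, and the trivial rewrite function below are illustrative placeholders, not taken from our actual configuration):

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Mono;

@Configuration
public class ModifyResponseRouteConfig {

    @Bean
    public RouteLocator modifyResponseRoute(RouteLocatorBuilder builder) {
        return builder.routes()
                // any route whose filter chain includes modifyResponseBody goes through
                // ModifyResponseBodyGatewayFilterFactory, which is quoted further down
                .route("rewrite_response", r -> r.path("/api/**")
                        .filters(f -> f.modifyResponseBody(String.class, String.class,
                                (exchange, body) -> Mono.just(body.toUpperCase())))
                        .uri("http://downstream:8080"))
                .build();
    }
}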

Exception:
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 2046820359, max: 2058354688)
        at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:667) ~[netty-common-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:622) ~[netty-common-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:772) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:748) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:245) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:215) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:147) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:342) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:139) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114) ~[netty-transport-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:147) [netty-transport-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682) [netty-transport-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617) [netty-transport-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534) [netty-transport-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) [netty-transport-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) [netty-common-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.36.Final.jar!/:4.1.36.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.36.Final.jar!/:4.1.36.Final]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_65]

Testing showed that the direct memory growth is driven by the following code in
org.springframework.cloud.gateway.filter.factory.rewrite.ModifyResponseBodyGatewayFilterFactory:

return bodyInserter.insert(outputMessage, new BodyInserterContext())
							.then(Mono.defer(() -> {
								Flux<DataBuffer> messageBody = outputMessage.getBody();
								HttpHeaders headers = getDelegate().getHeaders();
								if (!headers.containsKey(HttpHeaders.TRANSFER_ENCODING)) {
									messageBody = messageBody.doOnNext(data -> headers
											.setContentLength(data.readableByteCount()));
								}
								// TODO: fail if isStreamingMediaType?
								return getDelegate().writeWith(messageBody);
							}));

The direct memory is allocated from the following code in
org.springframework.http.codec.EncoderHttpMessageWriter:

if (inputStream instanceof Mono) {
			HttpHeaders headers = message.getHeaders();
			return body
					.singleOrEmpty()
					.switchIfEmpty(Mono.defer(() -> {
						headers.setContentLength(0);
						return message.setComplete().then(Mono.empty());
					}))
					.flatMap(buffer -> {
						headers.setContentLength(buffer.readableByteCount());
						return message.writeWith(Mono.just(buffer)
								.doOnDiscard(PooledDataBuffer.class, PooledDataBuffer::release));
					});
		}
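
For reference, the general ownership rule in the reactive stack is that whoever drops a pooled DataBuffer without writing it must release it. A minimal sketch of that pattern (illustrative only; this is not the framework's internal code, and the class and method names are made up):

import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.core.io.buffer.PooledDataBuffer;
import org.springframework.http.ReactiveHttpOutputMessage;
import reactor.core.publisher.Mono;

class ReleaseOnDiscardSketch {

    Mono<Void> write(Mono<DataBuffer> body, ReactiveHttpOutputMessage message) {
        return body
                // if the buffer is dropped before being written (cancel, error, filtered out),
                // release it back to the pool instead of leaking it
                .doOnDiscard(PooledDataBuffer.class, DataBufferUtils::release)
                .flatMap(buffer -> message.writeWith(Mono.just(buffer)));
    }
}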


Can you help me solve this problem?
Thank you!
wangjunhua123 changed the title from "EncoderHttpMessageWriter" to "OutOfDirectMemoryError when modify response body" on Nov 13, 2019
@spencergibb
Member

Can you please retry with boot 2.1.11 and spring cloud Greenwich.SR4 or boot 2.2.2 and cloud Hoxton.SR1?

@wangjunhua123
Author

This problem occurs in our production environment, so it is difficult to retry there. I checked the code of Boot 2.1.11 with Spring Cloud Greenwich.SR4, and of Boot 2.2.2 with Hoxton.SR1, but the problematic code is the same.

@spencergibb
Member

@wangjunhua123 did you try with Hoxton or just look at the code? Changes were made in Reactor and Framework as well. Boot 2.2.6 and Hoxton.SR3 are the latest.

@gihad

gihad commented Apr 10, 2020

@wangjunhua123 @spencergibb Any updates on this? We are also seeing the gateway allocate direct memory indefinitely until this error occurs.

@FangXiaoMing2021
Contributor

FangXiaoMing2021 commented Apr 11, 2020

@gihad Did you change the response body or the request body? I found that, in some cases, changing the request body leads to a memory leak, but I did not see this when changing the response body. #1520

@spencergibb
Member

@wangjunhua123 @gihad did either of you try with boot 2.2.6 and Hoxton.SR3?

Can anyone provide steps to reproduce it (size of response, how many requests over how long?)

@gihad

gihad commented Apr 13, 2020

@spencergibb Yes, we bumped the versions to 2.2.6.RELEASE and Hoxton.SR3 a few days ago after seeing your comment. We saw improved memory usage after that but are still dealing with this issue.

We recently deployed our Spring Cloud Gateway based app, which we call "Portal V2"; it is the API gateway for a service-oriented architecture with a few dozen services behind it. It was built as a direct replacement for our previous API gateway, "Portal V1", which used Zuul 1.

We see the direct memory usage (which by default is equal to the Java heap, per the Xmx setting) rise very quickly after a deploy to close to the max, then slowly continue climbing over time until it hits the max and those exceptions start to happen. The same thing happens when giving it very large memory limits; it just takes longer.

We see this behavior at around 650 req/second. The requests that go through this API gateway can be very different from one another. Most are lightweight requests with small responses, but some have large bodies (e.g. 460 KB gzipped). We have several filters and do all sorts of things from the gateway, such as request composition using the WebClient, hydration of requests, and so on.

Here are some containers reaching max direct memory usage quickly; when they hit the top we start getting those exceptions and requests start to fail:

[screenshot: direct memory usage per container]

After tinkering with this for the last 3 days, my hunch is that it has more to do with Netty settings that need to be tweaked for our API gateway's usage patterns. Our team is still experimenting and trying to find the root cause. cc @ukayani

We have a very large TV feature for one of our apps coming up, so we will be working on this a lot this week. Thanks.

@spencergibb
Member

What percentage of those requests modify the response body? As I say in all my talks about modifying bodies: the number of concurrent requests modifying a body multiplied by the size of the body is the minimum amount of memory needed (likely more).
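
As an illustration with made-up numbers (not measurements from this thread): 200 in-flight requests, each rewriting a 460 KB response, need at least 200 × 460 KB ≈ 92 MB of direct memory held at the same time, before any allocator overhead, because each body has to be fully aggregated in memory before it can be modified.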

@spring-projects-issues

If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.

@gihad

gihad commented Apr 21, 2020

We were able to contain the growth of native memory to a much more reasonable level (it now grows at a slow pace and even stabilizes). The relationship between native memory usage and the number of concurrent requests is now more direct and predictable, and that is the main factor in estimating how much memory it needs.

The two main properties that made the difference were:

-Dio.netty.allocator.numDirectArenas=1 
-Dio.netty.allocator.maxOrder=7

We saw that these values are used as defaults in Twitter's Finagle and simply copied them as a starting point.

In practice (e.g. average latency) we did not see any performance issues after setting those properties.
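
For context on what these properties do (based on Netty's documented defaults for the versions in this thread, so treat the numbers as approximate): the pooled allocator's chunk size is pageSize × 2^maxOrder, so with the default 8 KiB page size, maxOrder=7 caps each chunk at 8 KiB × 2^7 = 1 MiB instead of the default 8 KiB × 2^11 = 16 MiB, and numDirectArenas=1 keeps a single direct arena instead of one per pair of cores. The pool's baseline footprint shrinks considerably, at the cost of more contention between event-loop threads.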

@LeungKitSam

@spencergibb I tried setting the two properties that @gihad mentioned, plus -XX:MaxDirectMemorySize=256m, but it did not seem to help. I tested with 1000 threads and got the exception below:

io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 1048576 byte(s) of direct memory (used: 267386887, max: 268435456)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:667) ~[netty-common-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:622) ~[netty-common-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:772) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:748) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:245) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:227) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:147) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:342) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:139) ~[netty-buffer-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114) ~[netty-transport-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:147) [netty-transport-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682) [netty-transport-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617) [netty-transport-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534) [netty-transport-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) [netty-transport-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) [netty-common-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.36.Final.jar!/:4.1.36.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.36.Final.jar!/:4.1.36.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]

The direct memory kept increasing until I stopped the test, and it was never released.

Spring Cloud Greenwich.SR2
Spring Boot 2.1.6.RELEASE
Netty 4.1.36.Final

@wangjunhua123
Author

@spencergibb Sorry, I have changed jobs; this issue will be followed up by my colleague from now on. I can still join the discussion as well. Thank you!

@LeungKitSam

Update, here is a leak detection log:

2020-06-11 10:56:05,950 2355163 [nioEventLoopGroup-3-8] ERROR io.netty.util.ResourceLeakDetector - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
#1:
io.netty.handler.codec.ByteToMessageDecoder.channelRead(Unknown Source)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#2:
io.netty.buffer.AdvancedLeakAwareByteBuf.forEachByte(AdvancedLeakAwareByteBuf.java:670)
io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.parse(HttpObjectDecoder.java:793)
io.netty.handler.codec.http.HttpObjectDecoder.readHeaders(HttpObjectDecoder.java:592)
io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:218)
io.netty.handler.codec.http.HttpServerCodec$HttpServerRequestDecoder.decode(HttpServerCodec.java:103)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(Unknown Source)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(Unknown Source)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(Unknown Source)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#3:
io.netty.buffer.AdvancedLeakAwareByteBuf.forEachByte(AdvancedLeakAwareByteBuf.java:670)
io.netty.handler.codec.http.HttpObjectDecoder$HeaderParser.parse(HttpObjectDecoder.java:793)
io.netty.handler.codec.http.HttpObjectDecoder$LineParser.parse(HttpObjectDecoder.java:842)
io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:199)
io.netty.handler.codec.http.HttpServerCodec$HttpServerRequestDecoder.decode(HttpServerCodec.java:103)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(Unknown Source)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(Unknown Source)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(Unknown Source)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#4:
io.netty.buffer.AdvancedLeakAwareByteBuf.getUnsignedByte(AdvancedLeakAwareByteBuf.java:160)
io.netty.handler.codec.http.HttpObjectDecoder.skipControlCharacters(HttpObjectDecoder.java:557)
io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:193)
io.netty.handler.codec.http.HttpServerCodec$HttpServerRequestDecoder.decode(HttpServerCodec.java:103)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(Unknown Source)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(Unknown Source)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(Unknown Source)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#5:
Hint: 'reactor.left.httpCodec' will handle the message from this point.
io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:116)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#6:
Hint: 'DefaultChannelPipeline$HeadContext#0' will handle the message from this point.
io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:116)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
#7:
io.netty.buffer.AdvancedLeakAwareByteBuf.writeBytes(AdvancedLeakAwareByteBuf.java:634)
io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
Created at:
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:349)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:139)
io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:147)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)
: 7 leak records were discarded because the leak record count is targeted to 4. Use system property io.netty.leakDetection.targetRecords to increase the limit.
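
(For anyone else reproducing this: the access records above come from Netty's leak detector. Running a test instance with -Dio.netty.leakDetection.level=paranoid, and raising -Dio.netty.leakDetection.targetRecords as the log suggests, produces fuller records; it adds noticeable CPU and memory overhead, so it is best kept out of production.)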

@Xpenf

Xpenf commented Jul 2, 2020

Spring Boot 2.2.6
Spring Cloud Hoxton.SR3
Spring Cloud Gateway 2.2.2
I have encountered the same problem with the versions above; please take a look.

@Override
public Mono<Void> writeWith(Publisher<? extends DataBuffer> body) {
    if (getStatusCode().equals(HttpStatus.OK) && body instanceof Flux) {
        String originalResponseContentType = exchange.getAttribute(ORIGINAL_RESPONSE_CONTENT_TYPE_ATTR);
        if (StringUtils.isNotEmpty(originalResponseContentType)
                && originalResponseContentType.contains("application/json")) {
            Flux<? extends DataBuffer> fluxBody = Flux.from(body);

            return super.writeWith(fluxBody.buffer().map(dataBuffers -> {
                List<String> list = Lists.newArrayList();
                dataBuffers.forEach(dataBuffer -> {
                    try {
                        byte[] content = new byte[dataBuffer.readableByteCount()];
                        dataBuffer.read(content);
                        DataBufferUtils.release(dataBuffer);
                        list.add(new String(content, Charset.forName("UTF-8")));
                    } catch (Exception e) {
                        LoggerFactory.getLogger(LogCst.GATEWAY_DEFAULT)
                                .info("Failed to load the API encryption rules dynamically, reason: {}", e);
                    }
                });
                String text = StringUtil.listToString(list);

                // language conversion
                String content = languageConvert(text, exchange);
                // log.info("Modified response body:\nbefore: {}\nafter: {}", text, content);
                // record the log
                recordLog(content);

                originalResponse.getHeaders().setContentLength(content.getBytes().length);
                return bufferFactory.wrap(content.getBytes());
            }));
        }
    }
    // if body is not a Flux; never got there.
    return super.writeWith(body);
}

ERROR 2020-07-02 18:56:02,131 [reactor-http-nio-1] io.netty.util.ResourceLeakDetector - LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:363)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:139)
io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:147)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.lang.Thread.run(Thread.java:748)

WARN 2020-07-02 18:57:51,424 [reactor-http-nio-8] r.n.http.client.HttpClientConnect - [id: 0x9a73a5db, L:0.0.0.0/0.0.0.0:52204 ! R:/172.16.0.34:9003] The connection observed an error
reactor.netty.ReactorNetty$InternalNettyException: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 8192 byte(s) of direct memory (used: 26212359, max: 26214400)
Caused by: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 8192 byte(s) of direct memory (used: 26212359, max: 26214400)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:742)
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:697)
at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.allocateDirect(UnpooledUnsafeNoCleanerDirectByteBuf.java:30)
at io.netty.buffer.UnpooledDirectByteBuf.<init>(UnpooledDirectByteBuf.java:64)
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.<init>(UnpooledUnsafeDirectByteBuf.java:41)
at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.<init>(UnpooledUnsafeNoCleanerDirectByteBuf.java:25)
at io.netty.buffer.UnsafeByteBufUtil.newUnsafeDirectByteBuf(UnsafeByteBufUtil.java:625)
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:359)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:139)
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:147)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)

ERROR 2020-07-02 18:57:51,610 [reactor-http-nio-8] o.s.b.a.w.r.e.AbstractErrorWebExceptionHandler - [3b6141be] 500 Server Error for HTTP GET "/api/common/configs"
reactor.netty.ReactorNetty$InternalNettyException: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 8192 byte(s) of direct memory (used: 26212359, max: 26214400)
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
	|_ checkpoint ⇢ org.springframework.cloud.gateway.filter.WeightCalculatorWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ com.brezze.share.gateway.filter.CORSFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ HTTP GET "/api/common/configs" [ExceptionHandlingWebHandler]
Stack trace:
Caused by: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 8192 byte(s) of direct memory (used: 26212359, max: 26214400)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:742)
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:697)
at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.allocateDirect(UnpooledUnsafeNoCleanerDirectByteBuf.java:30)
at io.netty.buffer.UnpooledDirectByteBuf.<init>(UnpooledDirectByteBuf.java:64)
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.<init>(UnpooledUnsafeDirectByteBuf.java:41)
at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.<init>(UnpooledUnsafeNoCleanerDirectByteBuf.java:25)
at io.netty.buffer.UnsafeByteBufUtil.newUnsafeDirectByteBuf(UnsafeByteBufUtil.java:625)
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:359)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:139)
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:147)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)

@hdahl

hdahl commented Jul 2, 2020

Sounds like the problem fixed in PR #1766.

@Xpenf

Xpenf commented Jul 3, 2020

Sounds like the problem fixed in PR #1766.

Thanks for the reply, but I don't quite understand. What is the solution? Can you give me an example?

@korektur
Contributor

korektur commented Jul 6, 2020

@hdahl @Xpenf I don't think that's the same problem, because you are not using the response body modification filter. The problem in the pull request you linked only occurs when the body is gzip encoded and the standard response body modification filter is used.

@Xpenf

Xpenf commented Jul 8, 2020

@hdahl @Xpenf I don't think that's the same problem, because you are not using the response body modification filter. The problem in the pull request you linked only occurs when the body is gzip encoded and the standard response body modification filter is used.

The "text" I got from the response last modified the response to "content". Isn't this a modification?

@korektur
Contributor

korektur commented Jul 8, 2020

@Xpenf it is a modification, but it looks like you are not using the standard response body modification filter:
https://cloud.spring.io/spring-cloud-static/spring-cloud-gateway/2.1.0.RC3/single/spring-cloud-gateway.html#_modify_response_body_gatewayfilter_factory

The pull request referenced earlier fixes a bug within that filter, so it is unlikely to be the same bug.

@lixiankai99

Hi,
I want to report a problem.

Spring Cloud Hoxton.SR8
Spring Boot 2.3.3.RELEASE

A simple gateway application, monitoring PlatformDependent.class.
The system crashes after running for a period of time.

import java.lang.reflect.Field;
import java.util.concurrent.atomic.AtomicLong;

import io.netty.util.internal.PlatformDependent;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.util.ReflectionUtils;

// Netty's internal counters, obtained once via reflection (e.g. in the constructor)
private AtomicLong memoryCounter;
private long memoryLimit;

Field field = ReflectionUtils.findField(PlatformDependent.class, "DIRECT_MEMORY_COUNTER");
Field limitField = ReflectionUtils.findField(PlatformDependent.class, "DIRECT_MEMORY_LIMIT");
try {
    if (null != field) {
        limitField.setAccessible(true);
        field.setAccessible(true);
        memoryCounter = (AtomicLong) field.get(PlatformDependent.class);
        memoryLimit = (long) limitField.get(PlatformDependent.class);
    }
} catch (IllegalAccessException e) {
    log.warn(e.getMessage(), e);
}

// log the tracked direct memory usage every 10 seconds
@Scheduled(cron = "*/10 * * * * ?")
public void test() {
    log.warn("DIRECT_MEMORY_COUNTER: {}m, total: {}m",
            memoryCounter.get() / (1024 * 1024), memoryLimit / (1024 * 1024));
}

I observed that DIRECT_MEMORY_COUNTER kept increasing. After a week of operation:

io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1023410183, max: 1037959168)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:754)
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:709)
at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:755)
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:731)
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:247)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:215)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:147)
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:356)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
at io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53)
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:75)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:777)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:475)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:745)

@yangszz

yangszz commented Apr 14, 2021

Has it been resolved?

@MrCoderStack

Has it been resolved?

My versions:
spring-cloud.version: Hoxton.SR9
spring-boot.version: 2.3.7.RELEASE

@LeungKitSam

LeungKitSam commented Sep 20, 2022

Update:

spring-cloud.version: Hoxton.SR9
spring-boot.version: 2.3.7.RELEASE

I customized a filter to rewrite the request like this:

Class inClass = Object.class;
Mono<DataBuffer> requestMono = ServerWebExchangeUtils.cacheRequestBodyAndRequest(exchange,
(serverHttpRequest) -> ServerRequest
     .create(exchange.mutate().request(serverHttpRequest).build(), messageReaders).bodyToMono(inClass)
     .doOnNext((objectValue) -> {
          exchange.getAttributes().put("postRequestBody", objectValue);
          exchange.getAttributes().put("cachedRequestBodyObject", objectValue);
      }));
return requestMono.then(chain.filter(exchange));

Then I tested with a POST request whose body is form-data. I get a 415 Unsupported Media Type error and the log below:

2022-09-21 14:29:06,394034000 [reactor-http-epoll-7] ERROR i.n.u.ResourceLeakDetector- LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
io.netty.buffer.SimpleLeakAwareByteBuf.unwrappedDerived(SimpleLeakAwareByteBuf.java:143)
io.netty.buffer.SimpleLeakAwareByteBuf.readRetainedSlice(SimpleLeakAwareByteBuf.java:67)
io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:336)
io.netty.handler.codec.http.HttpServerCodec$HttpServerRequestDecoder.decode(HttpServerCodec.java:123)
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:508)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:447)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base/java.lang.Thread.run(Thread.java:829)

I kept testing with this kind of request and found that the direct buffer usage keeps increasing until java.lang.OutOfMemoryError: Direct buffer memory.

Prometheus JVM monitoring dashboard: [screenshot]

direct buffer memory error info:

2022-09-20 14:36:33,244115000 [reactor-http-epoll-4] ERROR r.n.t.TcpServer- [id: 0xabf8e6d4, L:/192.168.102.53:8066 - R:/172.20.155.129:54215] onUncaughtException(SimpleConnection{channel=[id: 0xabf8e6d4, L:/192.168.102.53:8066 - R:/172.20.155.129:54215]})
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:175) ~[?:?]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118) ~[?:?]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317) ~[?:?]
at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:645) ~[netty-buffer-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:621) ~[netty-buffer-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:204) ~[netty-buffer-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.buffer.PoolArena.tcacheAllocateSmall(PoolArena.java:174) ~[netty-buffer-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:136) ~[netty-buffer-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:128) ~[netty-buffer-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:378) ~[netty-buffer-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187) ~[netty-buffer-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178) ~[netty-buffer-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53) ~[netty-transport-native-unix-common-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114) ~[netty-transport-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:75) ~[netty-transport-native-epoll-4.1.55.Final-linux-x86_64.jar!/:4.1.55.Final]
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:780) [netty-transport-native-epoll-4.1.55.Final-linux-x86_64.jar!/:4.1.55.Final]
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480) [netty-transport-native-epoll-4.1.55.Final-linux-x86_64.jar!/:4.1.55.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) [netty-transport-native-epoll-4.1.55.Final-linux-x86_64.jar!/:4.1.55.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.55.Final.jar!/:4.1.55.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.55.Final.jar!/:4.1.55.Final]
at java.lang.Thread.run(Thread.java:829) [?:?]

@LeungKitSam

Update:

I added some DataBuffer handling code, shown below, to my custom global exception handler. The issue seems to have improved.

Object cachedRequestBody = request.exchange().getAttributes().get(ServerWebExchangeUtils.CACHED_REQUEST_BODY_ATTR);
if (cachedRequestBody instanceof DataBuffer) {
  DataBufferUtils.release((DataBuffer)cachedRequestBody);
}

@spencergibb
Member

We've added some fixes as well. If this still happens in 4.1.2 (releasing later this month, May 2024) we can reopen. If you want to try 4.1.2-SNAPSHOT, go ahead.

Sorry for the late reply.
