
gRPC + netty throw exception and long stack trace when receiving HTTP/1.1 request #7692

Closed
evantorrie opened this issue Dec 4, 2020 · 6 comments


@evantorrie

What version of gRPC-Java are you using?

1.29.0

What is your environment?

Linux RHEL7

What did you expect to see?

Less noisy failure when receiving an HTTP/1.1 GET request

What did you see instead?

Dec 04, 2020 12:16:47 AM io.grpc.netty.shaded.io.grpc.netty.NettyServerTransport notifyTerminated
INFO: Transport failed
io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception: Unexpected HTTP/1.x request: GET /status.html 
	at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:103)
	at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.readClientPrefaceString(Http2ConnectionHandler.java:302)
	at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:239)
	at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:438)
	at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:498)
	at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:437)
	at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
	at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
	at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
	at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
	at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
	at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)

Steps to reproduce the bug

Run curl http://localhost:<port>/status.html against the port that is serving gRPC.

Note: ideally this would be addressed by #3458, so that we can define what to return to load balancers/VIPs/outside services that like to check the liveness of a service the only way they know how: with an HTTP/1.1 request.

But in the interim, it would be nice not to have this type of stack trace appearing in our logs every 5 seconds (the interval for the HTTP/1.1 checks).
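One interim possibility, not discussed in the report itself, is to raise the level of the java.util.logging logger that emits this record; the log line above shows the record coming from io.grpc.netty.shaded.io.grpc.netty.NettyServerTransport at INFO, so a minimal sketch (assuming the shaded grpc-netty-shaded artifact, as in the trace) could look like:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public final class QuietGrpcTransportLogs {
  // Logger name taken from the log record above; with the non-shaded
  // grpc-netty artifact the name would be io.grpc.netty.NettyServerTransport.
  private static final Logger NETTY_TRANSPORT_LOGGER =
      Logger.getLogger("io.grpc.netty.shaded.io.grpc.netty.NettyServerTransport");

  // Call once during server startup. Keeping a static reference prevents the
  // LogManager from garbage-collecting the configured logger.
  public static void apply() {
    NETTY_TRANSPORT_LOGGER.setLevel(Level.WARNING);
  }
}
```

The same effect should be achievable declaratively through a logging.properties entry for that logger name. Note that this hides every INFO record from the transport, not just this one, which is the trade-off raised in the discussion below.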

@ejona86
Member

ejona86 commented Dec 7, 2020

This particular error may be useful to keep in the logs, because HTTP/1 doesn't work. The HTTP/1 health checks should be failing. How are the HTTP/1 health checks providing any value? Are you unable to use TCP health checks?

@evantorrie
Author

Unfortunately, these services are fronted by hardware VIP load balancers (yes, for strange legacy reasons), and those hardware VIPs are configured to send HTTP/1.1 requests to /status.html to check whether the "backends" are up and alive.

You're right that we may be able to ask our sysadmins to configure them to use just TCP health-checks. Let me see.

@ejona86
Member

ejona86 commented Jan 7, 2021

@evantorrie, did you discover anything?

@evantorrie
Author

@ejona86 Yes, I discovered that our internal system is making HTTP/1.1 requests because it's based on an old version of curl that doesn't support HTTP/2.

I think you can close this as Won't Fix, since internally I'll be upgrading our system to HTTP/2.
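For reference, if the upgraded checker speaks the standard gRPC health protocol (grpc.health.v1, e.g. via grpc_health_probe) rather than plain HTTP GETs, the server side can expose the stock health service from the grpc-services artifact. A minimal sketch, with the port and service wiring purely illustrative:

```java
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.health.v1.HealthCheckResponse.ServingStatus;
import io.grpc.services.HealthStatusManager;

public final class HealthCheckedServer {
  public static void main(String[] args) throws Exception {
    // HealthStatusManager implements the standard grpc.health.v1.Health
    // service that HTTP/2-capable checkers can probe over the gRPC port.
    HealthStatusManager health = new HealthStatusManager();

    Server server = ServerBuilder.forPort(50051)        // illustrative port
        .addService(health.getHealthService())
        // .addService(yourApplicationService)          // real services go here
        .build()
        .start();

    // Mark the overall server (the "" default service name) as serving.
    health.setStatus("", ServingStatus.SERVING);
    server.awaitTermination();
  }
}
```

This keeps liveness checks on the HTTP/2 connection the gRPC port already speaks, so the transport never sees an HTTP/1.x preface and never logs this failure.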

@ejona86
Member

ejona86 commented Jan 8, 2021

@evantorrie, okay, sounds good. And to be clear, if there was a good need we could totally silence this log statement. It just seems like logging might be doing more good than harm at the moment.

@fangxlmr

> @evantorrie, okay, sounds good. And to be clear, if there was a good need we could totally silence this log statement. It just seems like logging might be doing more good than harm at the moment.

@ejona86 Do we have any options/configs available to disable this exception log?
I've found that health checking or security port scanning within a company is common and frequent, and this kind of scanning produces a huge amount of exception traces that distract me from debugging other issues.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 26, 2021