gRPC + netty throw exception and long stack trace when receiving HTTP/1.1 request #7692
Comments
This particular error may be useful to keep in the logs, because HTTP/1 doesn't work. The HTTP/1 health checks should be failing. How are the HTTP/1 health checks providing any value? Are you unable to use TCP health checks?
Unfortunately, these services are fronted by hardware VIP load balancers (yes, for strange legacy reasons), and those hardware VIPs are configured to send HTTP/1.1 requests to the gRPC port. You're right that we may be able to ask our sysadmins to configure them to use just TCP health checks. Let me see.
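The TCP health-check alternative suggested above can be as simple as attempting a plain socket connect to the serving port, without speaking HTTP at all. A minimal, generic sketch (the class and method names here are illustrative, not part of any gRPC API):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpHealthCheck {
    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    static boolean isAlive(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-in listener so the example is self-contained; in practice
        // this would be the port the gRPC server is bound to.
        try (ServerSocket server = new ServerSocket(0)) {
            System.out.println(isAlive("localhost", server.getLocalPort(), 500)); // prints true
        }
    }
}
```

Because the check never sends an HTTP/1.1 request, the gRPC server sees only a connect/disconnect and the noisy stack trace never fires.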
@evantorrie, did you discover anything?
@ejona86 Yes, I discovered that our internal system is making HTTP/1.1 requests because it's based on an old version of curl that doesn't support HTTP/2. I think you can close this as Won't Fix, since internally I'll be upgrading our system to HTTP/2.
@evantorrie, okay, sounds good. And to be clear, if there was a good need we could totally silence this log statement. It just seems like logging might be doing more good than harm at the moment.
@ejona86 Do we have any options/configs available to disable this exception log?
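One way to suppress it, assuming the Netty transport logs through java.util.logging: raise the level on the relevant logger. The logger name below (`io.grpc.netty.NettyServerTransport.connections`) is an assumption to verify against the gRPC-Java version in use, since the connection-level logger was introduced after 1.29.0 and names may differ:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SilenceGrpcTransportLog {
    // Keep a strong reference: java.util.logging holds loggers weakly, so a
    // logger configured only through a local variable can be garbage
    // collected and its custom level lost.
    private static final Logger GRPC_CONNECTION_LOG =
            Logger.getLogger("io.grpc.netty.NettyServerTransport.connections");

    public static void main(String[] args) {
        // Assumed logger name; confirm it against your gRPC-Java version.
        GRPC_CONNECTION_LOG.setLevel(Level.OFF); // or Level.SEVERE to keep fatal errors
        System.out.println(GRPC_CONNECTION_LOG.getLevel()); // prints OFF
    }
}
```

The same effect is achievable without code via a `logging.properties` entry such as `io.grpc.netty.NettyServerTransport.connections.level=OFF`, again subject to the logger name being correct for your version.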
What version of gRPC-Java are you using?
1.29.0
What is your environment?
Linux RHEL7
What did you expect to see?
Less noisy failure when receiving an HTTP/1.1 GET request
What did you see instead?
Steps to reproduce the bug
Send a

curl http://localhost:<port>/status.html

to the port that is serving gRPC.

Note: ideally this would be addressed with #3458 so that we can define what to return for load balancers/VIPs/outside services that like to check liveness of a service using the only way they know how, which is an HTTP/1.1 request.
But in the interim, it would be nice not to have this type of stacktrace appearing in our logs every 5 seconds (interval for the HTTP/1.1 checks).