net/http: do not log error in http.Server for TCP probes #26918
Comments
CC @bradfitz
Well, you control https://golang.org/pkg/net/http/#Server.ErrorLog so you can filter those out yourself if you'd like. That's at least a fix you could do today, without waiting for a decision and a new Go release and for Kubernetes to pick up said new Go release. I could imagine other people would prefer to see it logged for debugging connection issues, though. It's not obvious that the correct thing is to not log on such errors.
@bradfitz Yeah, that could be a solution, but that would turn off the logging for all errors, right? Would it be feasible to only log an error if something has been written on the socket?
You can still pass your own io.Writer implementation.
No, when I said "filter" I meant you could write code to decide which log prints you don't care about. Part of the log.Logger contract (at https://golang.org/pkg/log/#Logger) is:

> Each logging operation makes a single call to the Writer's Write method.

So it's super easy to filter with a custom io.Writer, as @agnivade said. You don't need to implement any buffering.
I've been looking at the code of the kubernetes apiserver and I can't find any ErrorLog being set.
All right, I think I found it: they overrode the default logger via https://github.com/kubernetes/kubernetes/blob/201c11f147c85b029665915bee3a62eea19d6d57/staging/src/k8s.io/apiserver/pkg/util/logs/logs.go#L54-L59. I don't think they'll accept that I filter out this log in their Write() implementation...
I'm definitely not excited about running a regexp on every single log line. I haven't done a survey, but I think opening/closing a TCP port is not an unheard-of way to monitor something. On the other hand, if you're not monitoring your service that way, then you probably absolutely want to see this log message somehow; it looks like a DoS attempt. I think what I'd really like in either case is a metric to monitor. I can imagine setting up an optional channel for the http server to send non-fatal errors down instead of logging; that's a lot of machinery, but it should satisfy all the use cases.
I wouldn't be either. Fortunately nobody mentioned a regexp here. :)
IMO, a better way to monitor an HTTP server is to send an HTTP request. Back in the day, the HTTP load balancer I wrote for LiveJournal got its most notable improvements when we started making load balancing decisions based on which backend could reply to a quick dummy `OPTIONS *` request. Ever since then, when I want to see if an HTTP server is up, I don't trust that just its TCP port is replying. Who knows what that'll be. Maybe that's just systemd or something else listening for you and handing your (possibly failing to start) program a file descriptor. In fact, Go's HTTP server handles `OPTIONS *` requests itself:

```go
// serverHandler delegates to either the server's Handler or
// DefaultServeMux and also handles "OPTIONS *" requests.
type serverHandler struct {
	srv *Server
}

func (sh serverHandler) ServeHTTP(rw ResponseWriter, req *Request) {
	handler := sh.srv.Handler
	if handler == nil {
		handler = DefaultServeMux
	}
	if req.RequestURI == "*" && req.Method == "OPTIONS" {
		handler = globalOptionsHandler{}
	}
	handler.ServeHTTP(rw, req)
}
```
Or, if you want the server to close the connection for you and not have to require a Hostname, use HTTP/1.0 (with its implicit `Connection: close`).

We'd use a func rather than a channel, but I really don't want to add new API if we can help it. And we already have a place we send non-fatal errors: the logger; we never call it for anything fatal. Or what sort of current error today do you consider fatal?
Given that the log line is not constant, what did you have in mind? strings.Contains? I agree with nearly everything there. A function is fine, too, but I agree that the size of the change seems disproportionate to the need. (Easy for me to say given that it's not my load balancer doing this.)
I had in mind the sort that causes ListenAndServe to return.
In my case, I'm using the Azure load balancer in front of the kubernetes-apiserver. It happens to be a layer 4 load balancer which implements HTTP probing but unfortunately not HTTPS, which is why I use TCP probing.
Sure. It's fast. And in the rare chance that we change it, the worst case isn't a break in correctness; it'd just cause a bit of log spam and you could adjust the substring to search for. But if there are enough popular load balancers that only do TCP probes like this, I'd rather just do as originally asked and not log by default in that case, rather than adding complicated API around this. At most I could perhaps see us adding a boolean knob. But I don't want to export error strings or types or add log filtering mechanisms in net/http.
ListenAndServe only returns an error if Listen fails or Serve fails. You can Listen first, and then Serve only fails if you shut down the server, or the Listener.Accept fails in a non-temporary way. In practice that means you closed the listener. A bogus HTTP request can't close the Server. So if you don't want to see log spam about bogus HTTP requests, you can set your logger to discard them. Then you can just log the result of {ListenAnd,}Serve.
Hi, in my case I solved it with a "custom" logger which filters EOF messages:

```go
func httpLogger() *log.Logger {
	buf := new(bytes.Buffer)
	logger := log.New(buf, "httpLogger_", 0)
	go func() {
		for {
			if buf.Len() > 0 {
				line, err := buf.ReadString('\n')
				if err != nil {
					log.Println("[httpLogger] ", err)
					continue
				}
				if !strings.Contains(line, "EOF") {
					log.Print(line)
				}
			}
		}
	}()
	return logger
}
```

And set it on the http.Server:

```go
s := &http.Server{
	Addr:      ":https",
	TLSConfig: m.TLSConfig(),
	Handler:   r,
	ErrorLog:  httpLogger(),
}
go s.ListenAndServeTLS("", "")
```
@mmalcek this logic would be more effective:

```go
func newHttpLogger() *log.Logger {
	pr, pw := io.Pipe()
	logger := log.New(pw, "http: ", log.LstdFlags|log.Lmsgprefix|log.Lmicroseconds)
	scanner := bufio.NewScanner(pr)
	go func() {
		for scanner.Scan() {
			go func(line string) {
				if !strings.HasSuffix(line, ": EOF") {
					fmt.Fprintln(os.Stdout, line)
				}
			}(scanner.Text())
		}
		fmt.Fprintln(os.Stdout, "log scanner stopped:", scanner.Err())
	}()
	return logger
}
```
See golang/go#26918 and kubernetes/kubernetes#91277. Signed-off-by: Alexander Yastrebov <alexander.yastrebov@zalando.de>
@kayrus I hope you are joking about effectiveness :) As per https://pkg.go.dev/log#Logger:

> Each logging operation makes a single call to the Writer's Write method. A Logger can be used simultaneously from multiple goroutines; it guarantees to serialize access to the Writer.

the solution could look like:

```go
type serverErrorLogWriter struct{}

func (*serverErrorLogWriter) Write(p []byte) (int, error) {
	m := string(p)
	// https://github.com/golang/go/issues/26918
	if strings.HasPrefix(m, "http: TLS handshake error") && strings.HasSuffix(m, ": EOF\n") {
		// handle EOF error
	} else {
		// handle other errors
	}
	return len(p), nil
}

func newServerErrorLog() *log.Logger {
	return log.New(&serverErrorLogWriter{}, "", 0)
}
```
Instead of thinking about how to hide the log message, what is the problem that causes the handshake to fail?
Anyway, I think I'm going to close this. I've switched jobs, I'm not using Azure anymore, and maybe they have implemented HTTPS probing on their load balancers by now.
What version of Go are you using?
1.10.x
Does this issue reproduce with the latest release?
Yes
What operating system and processor architecture are you using?
N/A
What did you do?
The Kubernetes apiserver is based on http.Server with a TLS config. If you put a load balancer in front of it which implements basic TCP probing for the load balancing, you end up being flooded with the following message:
I0810 08:34:34.483565 1 logs.go:49] http: TLS handshake error from 168.63.129.16:64551: EOF
Which originates from go/src/net/http/server.go, lines 1762 to 1765 at d3c3aaa.
What did you expect to see?
I was hoping we could avoid logging an error for opened-and-closed TCP sockets.