proposal: dynamic ignored error detector #40776
Comments
Interesting. When I open a file for reading, I am ignoring the error returned by `Close`, and it's okay to do so. But when I open a file for writing, it is important to check the error from `Close`, because it could report an I/O error that a previous call to `Write` did not. The error comes directly from the operating system. In both cases, the error comes from `(*os.File).Close`. Could this proposal account for this somehow?
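A minimal illustration of the two cases described above (the helper functions are invented for the example):

```go
package example

import (
	"io"
	"os"
)

// readAll ignores the error from Close: the file was only read, so a
// failed Close cannot lose any data.
func readAll(name string) ([]byte, error) {
	f, err := os.Open(name)
	if err != nil {
		return nil, err
	}
	defer f.Close() // error deliberately ignored
	return io.ReadAll(f)
}

// writeAll checks the error from Close: it may report an I/O error that
// an earlier Write did not.
func writeAll(name string, data []byte) error {
	f, err := os.Create(name)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		f.Close() // best effort; the Write error takes precedence
		return err
	}
	return f.Close()
}
```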
@acln0 That is a good point. I had an earlier, more complex idea that handled that case, but I'm not sure that this one does. The tool could special-case the `(*os.File).Close` method.
Given that a lot of existing code does not distinguish between ignoring errors by assigning to `_` and ignoring them entirely, …

Even then, I don't think this proposal is in line with the Go 2 Transitions document: in order for tests to pass successfully, we would be de facto redefining the meaning of failing to bind a variable of type `error`.
Moreover, unlike some of the static-checking proposals, this proposal would not catch accidentally-ignored return values of types other than `error`. (For example, it would not catch ignored return values from functional-style methods such as ….)
@bcmills I'm not proposing adding something to the language or to `go vet`. I'm proposing a separate tool that does a dynamic analysis akin to the race detector. It would be invoked separately, not as part of a normal build. There are no transition or language version issues here. I agree that this tool, if it is possible at all, does not solve all problems.
As far as I can see, either the tool will be adopted as authoritative (like the race detector) — in which case it is de facto a language change (if not formally one) — or it will not be adopted as authoritative and package authors will not consistently accept patches to fix violations. If the latter, I don't think the dynamic analyzer would be worth its cost: its rate of false-positives would remain high, and I suspect that too many users would tend to dismiss its reports as false-positives rather than investigating and fixing them. Dynamic analyses are most effective when they have no false-positives.
I agree that the tool must have no false positives. @acln0 pointed out a potential false positive, and I agree that that case must be fixed before we could possibly create this tool. Did you have other false positives in mind? You mentioned the distinction between assigning to `_` and ignoring the error entirely.
I'm concerned about both false-positives (intentionally-ignored errors that are flagged by the analyzer) and false-negatives (unintentionally-ignored errors that are not flagged), but I'm much more concerned about the false positives because they will encourage users to ignore or discount the analyzer. In addition to the known `(*os.File).Close` case, …

If this were actually a language change, we would say that all of the intentionally-ignored non-nil errors must be updated with some kind of explicit marking as of some particular Go version, so that existing code that lacks such markings would produce no false-positives and would not need to be updated. However, as a dynamic analysis without a corresponding language change, we would have no way to distinguish these kinds of intentional-but-unmarked call sites from accidentally-unmarked sites in new code.
I'm not entirely convinced of the specific mechanic, but the underlying idea of detecting such things has appeal. Another possible approach might be something annotation-ish. A given function which returns an error can usually be identified as one of three types: (1) may return an error you should not ignore; … So, for instance, `fmt.Fprintf` has the error-ignorability of the target's `Write()`, which means that in some cases, you can statically determine that no error is possible.

I suppose one strategy might be a function annotation like …

But thinking about it: at that point, perhaps it would also be useful to consider doing this to non-errors. I think some functions may return a value which it's necessarily a bad idea to ignore, because there's a necessary cleanup which can't be done without the return value. So possibly what's wanted is a per-return annotation for "it is an error not to use this value"?

Sorta wild ideas here, but maybe they'll be useful.
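Purely to make the annotation idea concrete, one way such markings might look is sketched below. The `errs:` directive names are invented for illustration and do not exist in Go; the three categories shown follow the cases discussed in this thread (must check, ignorability follows the destination's `Write`, never non-nil):

```go
package example

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
)

// The //errs: directives are hypothetical; no such mechanism exists today.

//errs:must (callers should not ignore the returned error)
func flushAll(w *bufio.Writer) error {
	return w.Flush()
}

//errs:follows(w) (the error's ignorability is that of w's Write method)
func logLine(w io.Writer, msg string) error {
	_, err := fmt.Fprintln(w, msg)
	return err
}

//errs:never (documented to always return a nil error)
func appendLine(b *bytes.Buffer, msg string) error {
	_, err := fmt.Fprintln(b, msg)
	return err
}
```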
Needs more thought. |
In Go many functions return an error result. It is easy for the caller to ignore these results. This has led to various proposals to check that all errors are handled (e.g., #20803, #27845).
One problem with these proposals is that it is always OK to ignore some errors. For example, `bytes.(*Buffer).Write` never returns a non-nil error result. The only error returned by `fmt.Fprint` is from a call to `Write`. Therefore, using `fmt.Fprint` with a `bytes.Buffer` can never return a non-nil error. A proposal that requires that code always check or explicitly ignore the error result of `fmt.Fprint` would be inappropriate when used with a `bytes.Buffer`, even if it might be appropriate when used with an `os.File`.
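A small illustration of that case (the function is invented for the example):

```go
package example

import (
	"bytes"
	"fmt"
)

// describe ignores the error from fmt.Fprint: the only possible source of
// that error is (*bytes.Buffer).Write, which always returns a nil error.
func describe(x int) string {
	var buf bytes.Buffer
	fmt.Fprint(&buf, "x = ", x)
	return buf.String()
}
```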
A slightly more subtle example is `bufio.Writer`. Its `Write` method can return a real error result. By design, though, that error result is not only returned, it is also cached. Any use of `bufio.Writer` must eventually call the `Flush` method. The `Flush` method will return any cached error. So it is safe to ignore all errors from `bufio.(*Writer).Write` as long as the program checks the error returned by `Flush`.
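A sketch of that pattern (the helper function is invented for the example):

```go
package example

import (
	"bufio"
	"io"
)

// writeLines ignores the errors from bw.Write: bufio.Writer caches the
// first error it encounters and returns it again from Flush, so checking
// Flush at the end is enough.
func writeLines(w io.Writer, lines []string) error {
	bw := bufio.NewWriter(w)
	for _, line := range lines {
		bw.Write([]byte(line)) // error ignored; Flush will report it
		bw.Write([]byte{'\n'}) // likewise
	}
	return bw.Flush()
}
```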
To put it another way, in Go some errors can be safely ignored, but whether that is true at a particular call site is not a property that can be statically determined. Requiring that all errors be handled would not only break many existing Go programs, it would introduce additional code that would never be necessary and in some cases would never be executed.
Since static detection is difficult, I want to raise the possibility of a dynamic approach. I don't know whether this is really a good idea, but it seems that it might be an interesting design space to explore.
The ignored error detector is a rewriting approach, like the race detector. It could be implemented directly in the compiler, or it could be implemented as a source-to-source rewriting tool invoked by the go tool.
We introduce a new type in some internal package with an `Ignored` method; one possible shape for this type is sketched at the end of this section.

For any function or method that returns an error result, we check where the returned errors come from. If an error is returned from a call to `errors.New` or `fmt.Errorf`, and that error is not stored anywhere other than a local variable before being returned, then we change the return to return `MakeIgnored(err)`. If an error is returned unmodified from some other function call, we don't change the code; we just return the unmodified error. If the function returns an explicit `nil` for the error result, we change it to return `MagicNilError`.

For any comparison of a value of type `error` with `nil`, we rewrite `err == nil` to `MakeNotIgnored(err) == nil` (and similarly for `!=`, of course).

For any call to a function or method that returns an error, if that error is not stored anywhere--not stored in a local variable, not assigned to `_`, but simply ignored entirely--we change the code so that the discarded error is checked at run time.

I'm sure I'm missing some cases, but I hope that the general idea is clear.
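A rough sketch of what that internal package might look like. Only the names `Ignored`, `MakeIgnored`, `MakeNotIgnored`, and `MagicNilError` come from the text above; everything else (the `ignorederr` package name, the `wrappedError` type, and all of the details) is illustrative guesswork rather than the proposal's actual code:

```go
// Package ignorederr is a hypothetical internal package for the detector.
package ignorederr

// wrappedError is one possible shape for the new type: it carries the
// original error and exposes the marker method Ignored.
type wrappedError struct {
	err error
}

func (e *wrappedError) Error() string {
	if e.err == nil {
		return "<nil>"
	}
	return e.err.Error()
}

// Ignored is the marker method mentioned in the proposal text.
func (e *wrappedError) Ignored() {}

// MagicNilError is what rewritten code returns in place of an explicit
// nil error result.
var MagicNilError error = &wrappedError{}

// MakeIgnored wraps an error that was freshly created by errors.New or
// fmt.Errorf and returned without being stored anywhere else.
func MakeIgnored(err error) error {
	return &wrappedError{err: err}
}

// MakeNotIgnored undoes the wrapping, so that a rewritten comparison such
// as MakeNotIgnored(err) == nil sees the original error value (or nil, in
// the MagicNilError case).
func MakeNotIgnored(err error) error {
	if w, ok := err.(*wrappedError); ok {
		return w.err
	}
	return err
}
```

With this shape, a rewritten comparison such as `MakeNotIgnored(err) == nil` behaves exactly like the original `err == nil`, including for an explicit `nil` that the rewriter replaced with `MagicNilError`.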
The effect of this will be that for any call to a function or method that can in some cases return a real error, if that error is ignored entirely, the program will panic. In conjunction with a testsuite with good coverage, the ignored error detector should find most of the cases where a program is ignoring an error that might actually be serious.