Unwind constant error string pattern #4514
This commit demonstrates the pattern proposed in crossplane#4514. Rather than testing for a specific error string (and type), we test only that the returned error satisfies `errors.Is` with the `errBoom` sentinel error. By doing so we continue to test that we're returning the specific error we want - not just any non-nil error. This works because we inject a dependency that returns the `errBoom` sentinel error. Any other error would be unlikely to satisfy `errors.Is(err, errBoom)`, and would thus result in a failed test.

Signed-off-by: Nic Cope <nicc@rk0n.org>
I've opened #4515 to demonstrate option 3. If we do proceed with this, we'll need to figure out what to do with https://pkg.go.dev/github.com/crossplane/crossplane-runtime@v1.13.0/pkg/test#EquateErrors. We recently promoted crossplane-runtime to v1.x, so this function is part of our stable API. Probably the safest thing to do would be to add a …
Thanks @negz! As you know, I totally agree with this! I think using …
See crossplane/crossplane#4514 for context. We can't remove this (it would be a breaking API change) but I want to discourage and redirect folks. Signed-off-by: Nic Cope <nicc@rk0n.org>
@phisco I thought so too, but I'm less confident after starting to work on updating the unit tests in crossplane/crossplane-runtime#529. So far I've updated 2-3 small packages and found myself reaching for it anyway. I'm seeing two situations where I'm using it:
We could address the first situation by agreeing to define our own sentinel errors, e.g.:

```go
var ErrInvalidCACertificate = errors.New("invalid CA certificate")
```

However this approach would suffer from some of the same problems we have today. I also don't think it's compatible with …

In the second situation I find myself wanting something like …

I think I could be convinced to just relax and learn to love …
It does seem like the "just check that you got any error" approach is considered fairly idiomatic in Go. e.g.: https://github.com/golang/go/wiki/TestComments#test-error-semantics
https://bitfieldconsulting.com/golang/comparing-errors
Just my $0.02 that checking for any error in a testing context feels possibly insufficient, because that wouldn't give us confidence that we actually hit and tested the code path we intended to. But that's just a high-level reaction without looking into the nuances here with any depth, so feel free to disregard 🤓
Another slight pain - we have some tests that use … Related: crossplane/crossplane-runtime#198
I ended up spending a bunch of time today updating crossplane/crossplane-runtime#529, which seemed to be a good way to test this proposed pattern. I have to be honest - I'm pretty ambivalent about this. It feels more like we're trading one set of shortcomings for another to me, and less like a clear improvement to our codebase.
Yes, indeed, it's pretty common to just check whether the returned error is nil or not. But I see your point, and I guess it's probably related to the style of our codebase. That approach obviously works with a coding style leaning more toward simpler functions with fewer possible error cases each. However, it's obviously not going to be enough if you have multiple meaningful error conditions you want to test, which is usually the case for our codebase.

I don't think we should switch the whole codebase to a single new approach, at least not in one go, but rather relax our constraints and let each of us pick the best approach for their specific case, trying to stick to existing patterns if there are no valid reasons not to, just to avoid complete chaos. At the end of the day, the testing style depends on the way each of us writes code and on the topic at hand, so there is no real value in forcing someone into a testing style that doesn't fit the way they write code or the topic they are dealing with; it could even be detrimental.

My thinking usually is that 100% coverage should not be the goal, and tests should not be a negative copy of the exact implementation at the time of writing, as that would mean having brittle tests and doubling the amount of code I have to change for each actual change I make. Instead, we should aim at having meaningful test cases that cover all the useful behaviours of our code. That's obviously a fuzzy definition that has to be reevaluated every time, but if that weren't the case we wouldn't have so many books and discussions about TDD and testing in general.

@negz regarding:
If you don't care about the exact error message you can ignore fields by using …
I agree we shouldn't be militantly prescriptive, but I do feel there's a lot of value in writing tests (or generally any code) in a consistent way. It helps with things like refactoring, and jumping into a part of the codebase you don't work on often. It's hard to completely avoid a particular engineer's "fingerprints", but you don't want to open the package manager tests and find that they're "Hasan style" while the Composition tests are "Nic style", etc. To help keep things consistent, my preference would be to make the change all at once - or at least try to do so as much as we reasonably can without wasting a lot of time. When, say, 80%+ of the codebase is written consistently it's more clear to newer contributors what patterns they're expected to follow.
Agreed. I think we're at 70% coverage on c/c at the moment - https://app.codecov.io/gh/crossplane/crossplane. That feels about right to me, though I'd like to see it trend up a little more rather than down.
That's true, good point. I'd argue that if the content of an error we return (for human consumption) is deemed unimportant by our tests, then it should stand that the content of that error in a status condition should be unimportant too. I'm not sure it really is unimportant, but to be fair all we were testing previously was that the right constant was being used (not that the constant had useful information). Perhaps we could update our existing …
While I still have reservations about this change I'm inclined to proceed with it since:
Fixes crossplane#4514. See that issue for discussion and context.

Signed-off-by: Nic Cope <nicc@rk0n.org>
I've opened #4515 to encode this in the contributing guide. If/when that's merged I'll take a pass at updating c/c. I have a hacky program that finds and inlines most error constants - https://gist.github.com/negz/1570d976ff3a408de5c4c57daef62606.
Another (less common) reason for using constant error strings is returning the same error message from multiple places. I could be convinced that we should differentiate those errors with more context, though :)
Yes, I'd count that more as a bug than a feature 😉
What problem are you facing?
Long ago we adopted a convention of defining all of our error strings as constants at the top of the source file where they're used. For example:
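The code sample appears to have been lost from this copy of the issue. The convention being described looks roughly like this; the constant names are illustrative, and the snippet uses the stdlib `errors` package rather than Crossplane's own:

```go
package main

import (
	"errors"
	"fmt"
)

// Error strings defined as constants at the top of the file, far from where
// they're used - the convention this issue proposes unwinding. Note that
// errParseCACert is never used below, and the compiler doesn't complain:
// Go only rejects unused variables and imports, not unused constants.
const (
	errGetSecret   = "cannot get Secret"
	errParseCACert = "cannot parse CA certificate"
)

func loadCA() error {
	// Imagine a Secret lookup failing here, many lines away from where
	// the error string was defined.
	return errors.New(errGetSecret)
}

func main() {
	fmt.Println(loadCA())
}
```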
This approach has a couple of issues:

- Our linter doesn't seem to notice unused error string constants, so we end up with unused ones. (Actually, it does.)
- It's harder to read and write `errors.Errorf` or `errors.Wrapf` calls, because the format string is defined far away from the format arguments.

I believe we adopted this pattern to make it easier to (unit) test that we return the expected error in exceptional cases. We do this using https://pkg.go.dev/github.com/crossplane/crossplane-runtime@v1.13.0/pkg/test#EquateErrors, which compares two errors by ensuring they are of the same type and produce the same error string.
How could Crossplane help solve your problem?
We could agree to unwind this constant error pattern, and just write error strings "inline" wherever we wrap or produce new errors. It may be possible to do a sweep through the codebase and inline all the error constants using find and replace. We should probably also record this decision in our contributing guide.
If we do this, I'd like to understand how we'll compare errors in our unit tests. I can think of a couple of options, including using `errors.Is`. My preference is option 3 - using `errors.Is`.
In my experience testing error strings has been less fragile than you might expect - we're typically only testing the "wrapper" string, which we control. I don't think testing the string is very important, though. What I do think is important is testing that we return the right (i.e. expected) error. I don't think testing only for a non-nil error is enough.
Often we'll be testing a pretty branchy bit of code that could return errors in several different places. We tend to test error paths by injecting dependencies that will return an error when called. When you're only testing for a non-nil error it's easy to write a unit test that appears to test the error case you intend it to, but is actually never reaching that error case because it's hitting another one.
I believe using `errors.Is` should allow us to ensure we're returning the error we want without needing to test its string. We'd do this by injecting a dependency that returned a particular sentinel error (defined only in the unit test function), then testing that the returned error wrapped that sentinel error.