add coveralls support to travis config #695
based on instructions at
@shurcooL could you make sure this doesn't look like it'll interfere with the recent changes you made to the travis config?
Could our test coverage really have dropped from 77% to 43%? (Unfortunately, I deleted the old coveralls project and created a new one, so we lost the history and can't easily inspect that.)
I have some philosophical thoughts/suggestions on this topic (today's the day I get these out, it seems) that I'd like to share @willnorris. Afterwards, I'll leave a review on the current PR.
First, a question. Are you familiar with https://gocover.io/? It's a service that produces coverage reports for Go packages. It does so by pulling the package and running its tests with the -cover flag in a container, so it's easier to use (no need to install a command in .travis.yml). E.g., just visit:
However, as I understand it, one downside is that it's not possible to have a badge with the coverage % in the README; people would have to open the coverage report to find out what it is. What do you think of it?
Second, you said:
I wanted to ask why you want to exclude generated code from the coverage report. What value does that provide?
Is it because a coverage report is more valuable/accurate if it reports the percentage of manually-written (as opposed to generated) code? I'm not very familiar with coverage and the goals involved, so I wanted to ask why that's the case.
An alternative would be to generate test cases for the generated code. Would that be better or worse?
You've introduced a build tag in order to take the generated file out of consideration in the coverage report. In general, build tags add complexity to a Go package; a package is simplest when it uses no build constraints. I think it's best to restrict their usage to situations where they are unavoidable. At this point, I'm not sure that artificially increasing the coverage percentage justifies adding a build tag.
It works well now, because the generated file is purely additive, but it might cause problems in the future if that file can no longer be excluded and have the package still build successfully. If that situation comes up, would we take the build tag out? Or would we compromise the solution to work around the existence of the build tag?
It's probably fine, but I wanted to point this out.
Third, an observation.
If the goal of a high-quality coverage report is to exclude generated code, then I think the better long-term solution is to have the coverage report tool take on that responsibility. Now that we have a standard for detecting whether a .go file is generated (see https://golang.org/s/generatedcode and https://godoc.org/github.com/shurcooL/go/generated), the coverage report tool could detect that and report numbers without taking generated-file coverage into account. It'd be trivial to report both numbers, too.
This would be a better long-term solution because it only has to be done once, in the coverage tool, rather than in every Go project that includes generated Go code and wants to exclude it from the coverage report.
Of course, it's also more work in the short term, because this doesn't appear to be implemented yet.
I'm not. That's pretty nice, though as you note, it doesn't have a badge, which is a little unfortunate. An even bigger issue is that it doesn't let us see changes in test coverage, for example as the result of a pull request. The comments from coveralls might end up being too annoying to be useful (at which point we can turn them off), but hopefully they'll make it easier to see when some test coverage is missing.
Given the kind of generated code we have, adding test coverage for it doesn't provide much value. We certainly could, and I'm not opposed to it, but as you note, the tests would need to be generated as well. At that point you're again relying on generated code, so how do you know that's working? Tests for the tests? :)
100% test coverage should never be a blind goal. I treat coverage much like lint warnings. The goal shouldn't be to always eliminate all lint warnings, but rather it's one more tool in the toolbox.
So the point of eliminating generated code from the coverage numbers isn't to artificially inflate them, but to give a more accurate picture of what we are (and are not) aiming for.
Sure, that would be fine. I guess another option would be to simply delete the generated file before running the coverage report. That would achieve the same purpose as what I have in this PR without needing a build constraint. What do you think of that?
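A sketch of how the delete-before-coverage approach could look in .travis.yml. The generated file name here is hypothetical, and these exact commands are an assumption rather than what this PR does; `goveralls` is the github.com/mattn/goveralls utility:

```yaml
install:
  - go get github.com/mattn/goveralls
script:
  # Delete the generated file first so its statements stay out of the
  # coverage denominator; this relies on the package still building
  # without it (the file is purely additive).
  - rm github/accessors_gen.go  # hypothetical generated file name
  - go test -covermode=count -coverprofile=coverage.out ./github
  - goveralls -coverprofile=coverage.out -service=travis-ci
```

Since the deletion only happens in the CI coverage run, it never affects what users of the package build.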
That's a good point, I forgot about that aspect. I agree, that makes it worthwhile to go through the trouble of making the changes to .travis.yml to go get the coveralls utility and all that.
Agreed. That's a healthy goal, I support that. Thanks for providing the rationale.
I like that a lot, actually. Primarily because it achieves the desired outcome for the coverage report in a way equivalent to the build tag, but without the mental complexity and the potential maintenance and backwards-incompatibility costs of introducing a build tag (something that's much easier to add than to remove later).
By using that approach, we don't commit to supporting anything we don't need to, and it will be easy to make changes if a better solution becomes available in the future.
It suffers from the same limitation as the build tag: it works only as long as the generated file is purely additive and the package builds successfully without it. But I'm more okay with that, given that we can easily make changes that don't affect users.
I've done more work to finish off the
I use coveralls for the same reason, to monitor test coverage. I find it very useful to disable the comments and just use the status API to review the coverage changes, such as bradleyfalzon/gopherci#116
Yeah, and I set my threshold quite low, since some changes will naturally lower coverage, and I don't consider that worth marking a build as broken; I'll just look over the status API manually.