
add coveralls support to travis config #695

Merged
merged 1 commit into master from coveralls on Aug 22, 2017

Conversation

@willnorris
Member

commented Aug 15, 2017

based on instructions at
https://coveralls.zendesk.com/hc/en-us/articles/201342809-Go

Fixes #694
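For reference, a minimal sketch of what the coveralls-related additions to .travis.yml typically look like when following those instructions, assuming the goveralls tool from github.com/mattn/goveralls (the exact lines in this PR may differ):

```yaml
# sketch only: not necessarily the exact lines merged in this PR
install:
  - go get github.com/mattn/goveralls
script:
  # produce a coverage profile for the main package
  - go test -v -covermode=count -coverprofile=coverage.out ./github
  # upload it to coveralls.io; the repo token is read from the
  # COVERALLS_TOKEN environment variable set in the Travis settings
  - goveralls -coverprofile=coverage.out -service=travis-ci
```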

@shurcooL could you make sure this doesn't look like it'll interfere with the recent changes you made to the travis config?

@willnorris willnorris requested a review from dmitshur Aug 15, 2017

@willnorris willnorris force-pushed the coveralls branch 3 times, most recently from 0edffa0 to 90d4157 Aug 15, 2017

@coveralls

commented Aug 15, 2017

Coverage Status

Changes unknown when pulling 90d4157 on coveralls into master.

@willnorris

Member Author

commented Aug 15, 2017

Could our test coverage really have dropped from 77% to 43%? (Unfortunately I deleted the old coveralls project and created a new one, so we lost the history and can't easily inspect that 😢.) I guess it's all the autogenerated code that's pulling us down.

@willnorris willnorris force-pushed the coveralls branch from 90d4157 to 3b47f8f Aug 15, 2017

@willnorris

Member Author

commented Aug 15, 2017

Yep, it was github-accessors.go that was pulling our coverage numbers down. I've now excluded it from the coverage report with a build constraint.
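For illustration, the mechanism looks roughly like this (the tag name matches the diff reviewed below; the test invocation is an assumed sketch): the generated file carries a constraint that drops it from any build where a coverage tag is set, and the coverage run sets that tag.

```go
// Top of the generated file: built normally, but skipped entirely
// whenever the "coverage" build tag is set.

// Code generated by gen-accessors; DO NOT EDIT.

// +build !coverage

package github
```

The coverage run then sets the tag, e.g. go test -tags coverage -covermode=count -coverprofile=coverage.out ./github, so the generated file never enters the coverage denominator.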

@google google deleted a comment from coveralls Aug 15, 2017

@coveralls

commented Aug 15, 2017

Coverage Status

Changes unknown when pulling 3b47f8f on coveralls into master.

@dmitshur

Member

commented Aug 18, 2017

I have some philosophical thoughts/suggestions on this topic (today's the day I get these out, it seems) that I'd like to share, @willnorris. Afterwards, I'll leave a review on the current PR.

First, a question. Are you familiar with https://gocover.io/? It's a service that can produce coverage reports for Go packages. It does so by pulling the package and running its tests with the -cover flag in a container, so it's easier to use (there's no need to install a command in .travis.yml). E.g., just visit:

https://gocover.io/github.com/google/go-github/github
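Roughly the local equivalent of what the service runs (an approximation, not gocover.io's exact commands):

```sh
go get -d github.com/google/go-github/github
go test -cover github.com/google/go-github/github
```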

However, as I understand it, one downside is that it's not possible to have a badge with the coverage percentage in the README; people would have to open the coverage report to find out what it is. What do you think of it?

Second, you said:

> Yep, it was github-accessors.go that was pulling our coverage numbers down. I've now excluded it from the coverage report with a build constraint.

I wanted to ask why you wanted to exclude generated code from the coverage report. What value does that provide?

Is it because a coverage report is more valuable/accurate if it reports the percentage of manually-written (as opposed to generated) code? I'm not very familiar with coverage and the goals involved, so I wanted to ask why that's the case.

An alternative would be to generate test cases for the generated code. Would that be better or worse?

You've introduced a build tag in order to take the generated file out of consideration in the coverage report. In general, build tags add complexity to a Go package; a package is simplest when it uses no build constraints, so I think it's best to restrict their use to situations where they are unavoidable. At this point, I'm not sure that artificially increasing the coverage percentage justifies adding a build tag.

It works well now, because the generated file is purely additive, but it might cause problems in the future if the package can no longer build successfully with that file excluded. If that situation comes up, would we take the build tag out? Or would we compromise the solution to work around the existence of the build tag?

It's probably fine, but I wanted to point this out.

Third, an observation.

If the goal of a high-quality coverage report is to exclude generated code, then I think the better long-term solution is to have the coverage report tool take on that responsibility. Now that we have a standard for detecting whether a .go file is generated (see https://golang.org/s/generatedcode and https://godoc.org/github.com/shurcooL/go/generated), the coverage tool can detect that and report numbers without taking generated-file coverage into account. It'd be trivial to report both numbers too.
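To illustrate the idea, here's a rough sketch (not the shurcooL/go/generated package's actual API) of how a coverage tool could detect a generated file, based on the convention described at https://golang.org/s/generatedcode:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Per https://golang.org/s/generatedcode, a generated file contains a line
// matching this regexp before the first non-comment, non-blank text.
var generatedRx = regexp.MustCompile(`^// Code generated .* DO NOT EDIT\.$`)

// isGenerated reports whether the Go file at path is marked as generated.
// Simplified sketch: it only handles line comments, not /* */ blocks.
func isGenerated(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		line := s.Text()
		if generatedRx.MatchString(line) {
			return true, nil
		}
		if line != "" && !strings.HasPrefix(line, "//") {
			break // reached the first non-comment, non-blank text
		}
	}
	return false, s.Err()
}

func main() {
	gen, err := isGenerated("github-accessors.go")
	fmt.Println(gen, err)
}
```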

This would be a better long-term solution because it only has to be done once, in the coverage tool, rather than inside every Go project that includes generated Go code and wants to exclude it from the coverage report.

Of course, it's also more work in the short term, because this doesn't appear to be implemented yet.

@dmitshur
Member

left a comment

I'm not spotting any issues in .travis.yml after looking over it.

LGTM.

That said, I wish we weren't adding a build tag just for coverage needs, but it's survivable.

@@ -5,6 +5,8 @@

// Code generated by gen-accessors; DO NOT EDIT.

// +build !coverage

@dmitshur
Member

commented Aug 18, 2017

I hope it's going to be clear to people that they shouldn't rely on this build tag, that it's for internal coverage use only and we may change it in the future. It'd be a shame if we can't get rid of it later without breaking user programs.

@willnorris

Member Author

commented Aug 22, 2017

> First, a question. Are you familiar with https://gocover.io/?

I'm not. That's pretty nice, though as you note, it doesn't have a badge, which is a little unfortunate. An even bigger issue is that it doesn't let us see changes in test coverage, for example as a result of a pull request. The comments from coveralls might end up being too annoying to be useful (at which point we can turn them off), but hopefully they make it easier to see when some test coverage is missing.

> why did you want to exclude generated code from the coverage report?

Given the kind of generated code we have, adding tests for it doesn't provide as much value. We certainly could, and I'm not opposed to it, but as you note the tests would need to be generated as well. At that point you're again relying on generated code, so how do you know that's working? Tests for the tests? :)

100% test coverage should never be a blind goal. I treat coverage much like lint warnings. The goal shouldn't be to always eliminate all lint warnings, but rather it's one more tool in the toolbox.

So the point of eliminating generated code from the coverage numbers isn't to artificially inflate them, but to give a more accurate picture of what we are (and are not) aiming for.

> If the goal of a high-quality coverage report is to exclude generated code, then I think the better long-term solution is to have the coverage report tool take on that responsibility.

Sure, that would be fine. I guess another option could be to simply delete the generated file before running the coverage report. That would achieve the same purpose as what I have in this PR, without needing a build constraint. What do you think of that?
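That alternative might look something like the following in .travis.yml (a hypothetical sketch: the file path and commands are assumed, and it relies on the package and its tests still building once the generated file is removed):

```yaml
script:
  # drop the generated accessors before measuring coverage, instead of
  # hiding them behind a build tag
  - rm github/github-accessors.go
  - go test -covermode=count -coverprofile=coverage.out ./github
  - goveralls -coverprofile=coverage.out -service=travis-ci
```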

@dmitshur

Member

commented Aug 22, 2017

> I'm not. That's pretty nice, though as you note, it doesn't have a badge, which is a little unfortunate.

> An even bigger issue is that it doesn't let us see changes in test coverage, for example as a result of a pull request. The comments from coveralls might end up being too annoying to be useful (at which point we can turn them off), but hopefully they make it easier to see when some test coverage is missing.

That's a good point; I forgot about that aspect. I agree that makes it worthwhile to go through the trouble of changing .travis.yml to go get the coveralls utility and all that.

> 100% test coverage should never be a blind goal. I treat coverage much like lint warnings. The goal shouldn't be to always eliminate all lint warnings, but rather it's one more tool in the toolbox.

> So the point of eliminating generated code from the coverage numbers isn't to artificially inflate them, but to give a more accurate picture of what we are (and are not) aiming for.

Agreed. That's a healthy goal, I support that. Thanks for providing the rationale. 👍

> Sure, that would be fine. I guess another option could be to simply delete the generated file before running the coverage report. That would achieve the same purpose as what I have in this PR, without needing a build constraint. What do you think of that?

I like that a lot, actually. Primarily because it gives us the same coverage-report outcome as the build tag, but without the mental complexity and the potential maintenance and backwards-compatibility burden of introducing a build tag (something that's a lot easier to add than to remove later).

By using that approach, we don't commit to supporting anything we don't need to, and it will be easy to make changes if a better solution becomes available in the future. 🎉

It suffers from the same limitation as the build tag, namely that it works as long as the generated file is purely additive and the package builds successfully without it. But I'm more okay with that, given that we could easily make changes that don't affect users.

I've done more work recently to finish off the // Code generated ... DO NOT EDIT. comment parser, and I'll try to pitch the idea of reporting coverage numbers both including and excluding generated code to the coverage tools.

@willnorris willnorris force-pushed the coveralls branch from 3b47f8f to b781c15 Aug 22, 2017

@coveralls

commented Aug 22, 2017

Coverage Status

Changes unknown when pulling b781c15 on coveralls into master.

@willnorris willnorris force-pushed the coveralls branch 2 times, most recently from e199413 to 78e7f81 Aug 22, 2017

@coveralls

commented Aug 22, 2017

Coverage Status

Changes unknown when pulling 78e7f81 on coveralls into master.

@google google deleted a comment from coveralls Aug 22, 2017

add coveralls support to travis config
based on instructions at
https://coveralls.zendesk.com/hc/en-us/articles/201342809-Go

also remove drone.io badge from README.md, since it looks like the
service is no more.

Fixes #694

@willnorris willnorris force-pushed the coveralls branch from 78e7f81 to 89e28f7 Aug 22, 2017

@willnorris willnorris merged commit 89e28f7 into master Aug 22, 2017

1 of 3 checks passed

continuous-integration/travis-ci/pr The Travis CI build is in progress
continuous-integration/travis-ci/push The Travis CI build is in progress
cla/google All necessary CLAs are signed
@coveralls

commented Aug 22, 2017

Coverage Status

Changes unknown when pulling 89e28f7 on coveralls into master.

@willnorris willnorris deleted the coveralls branch Aug 22, 2017

@bradleyfalzon

Contributor

commented Aug 22, 2017

> The comments from coveralls might end up being too annoying to be useful (at which point we can turn them off), but hopefully they make it easier to see when some test coverage is missing.

I use coveralls for the same reason, to monitor test coverage. I find it very useful to disable the comments and just use the status API to review the coverage changes, such as in bradleyfalzon/gopherci#116.

@willnorris

Member Author

commented Aug 22, 2017

@bradleyfalzon do you know what constitutes a "failed" commit status from coveralls? Is it any time the coverage goes down at all? Only if it goes below a certain threshold? Or does it just always set a "success" status with a note about what changed?

@willnorris

Member Author

commented Aug 22, 2017

oh, never mind. I just went and looked at the coveralls settings page, and it looks like you can set the failure thresholds yourself 😄

@bradleyfalzon

Contributor

commented Aug 22, 2017

> oh, never mind. I just went and looked at the coveralls settings page, and it looks like you can set the failure thresholds yourself :)

Yeah, and I set mine quite low, since some changes will naturally lower coverage and I don't consider that worth marking a build as broken; I'll just look over the status API manually.
