coveralls support #1082

Merged
merged 16 commits into from Sep 13, 2013


4 participants

@yarikoptic
Contributor

Since the question of coverage was raised, I decided to quickly add a feature... it ended up being a sprint to convince Travis and coverage to work as desired...

In the end I think it should work: you can see the result at e.g. https://github.com/fail2ban/fail2ban, where a status icon similar to the Travis one is embedded in the README, and coverage reports are attached to PRs whose tests didn't fail, showing whether they increase or decrease coverage.

Unfortunately there is some glitch with coveralls and my clone at https://coveralls.io/r/yarikoptic/pystatsmodels -- it reports coverage, but trying to browse files just says Loading... for me (see lemurheavy/coveralls-public#136). Otherwise it should work.

This PR is a full protocol of the struggle (+ I also reverted tests to run without sudo -- that shouldn't be needed, right? + some trailing spaces were removed -- I just needed something to push ;) )
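For readers unfamiliar with the setup: the integration boils down to a few `.travis.yml` additions. Below is a minimal sketch of the general pattern, not the actual diff from this PR -- the test runner invocation and paths are assumptions:

```yaml
# Hypothetical sketch of the kind of .travis.yml changes involved
# (not the actual diff): install the coverage tooling, run the
# test suite under coverage, then upload results to coveralls.io.
install:
  # the Travis "language" here is not python, so coverage is not preinstalled
  - pip install coverage
  - pip install coveralls
script:
  # run tests under coverage (test runner is an assumption)
  - coverage run -m nose.core statsmodels
after_success:
  # must run from the original source checkout so coveralls
  # can find the git repository metadata
  - cd $TRAVIS_BUILD_DIR && coveralls
```

The `cd` back to the build directory matters because coveralls locates the repository via the local git checkout, which is the issue several commits below wrestle with.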

yarikoptic added some commits Sep 12, 2013
@yarikoptic ENH: Initial changes to enable coveralls.io support a69f827
@yarikoptic DOC: very minor (trailing spaces) to trigger travis build a8ffc53
@yarikoptic BF: coveralls -- fix up the order of commands and comparison to current version (travis language is not python here) c6deb56
@yarikoptic BF: install coverage through pip as well (since we are not lang python -- coverage is not there) 2ddfed6
@yarikoptic yarik wants to see why sudo is needed to run tests (and trigger a new build) 55d8e71
@yarikoptic BF: forgotten fix to use PYSUF for comparison 9c39504
@yarikoptic uff -- pip install failed, so removing -q and splitting into separate calls for coverage and coveralls c6988cb
@yarikoptic coverage must be installed with sudo 98b1f91
@yarikoptic BF: no sudo for easy_install and pip! 3610882
@yarikoptic BF: grr -- sudo is needed in this setup although it should not be needed -- adding back for easy_install and pip faba47e
@yarikoptic BF: remember the original directory with sources -- and cd back there for tests and coveralls (otherwise coveralls doesn't find the repository: https://travis-ci.org/yarikoptic/pystatsmodels/jobs/11293375) a46d744
@yarikoptic Fixing up yaml 65b9d6e
@yarikoptic Output PWD a few times to figure out where/what is happening and run tests under travis-test directory f0ae5ed
@yarikoptic BF: since we are testing the installed version, do not omit /usr but rather include all statsmodels packages in coverage d11bf99
@yarikoptic RF: I think coveralls works now so disabling all those pwd debug printouts 42fd5b0
@josef-pkt
Member

I don't know what's needed and what's not. When I worked on it, I mainly proceeded by trial and error (like in this PR :)

I got the TravisCI failure messages and was looking at this (and the pymvpa coveralls as example)
I also signed in already with my github account to enable statsmodels and to see if there is anything to do.

Does this get automatically started after merging this PR?

https://coveralls.io/builds/208262 shows the full files for me;
the change report might not be available if there are no previous coverage reports.
Also, the coveralls.io website still has some quirks; I was looking at it for a bit earlier today.

What I have seen from pymvpa coverage reports is that the changes after a commit or PR are the most interesting features of coveralls.

For the report of total file coverage we might need some tuning if we want to get a more realistic count of the core parts, like excluding some of the sandbox and test folders (e.g. matplotlib tests are not run).
But maybe we don't want to whitewash our numbers. (In any case, that's for later.)

Thanks a lot Yarik, I might have some questions when it's up and running for a while.

continuous testing, a vbench and now continuous coverage, maybe we get serious after all :)

@josef-pkt
Member

BTW: sorting all files by filename is more useful for spotting gaps in core directories

@yarikoptic
Contributor

On Thu, 12 Sep 2013, Josef Perktold wrote:

> I don't know what's needed and what's not. When I work on it, I mainly worked with trial and error (like in this PR :)

yeah -- quite painful ;) especially since it is usually just yet another background task

> Does this get automatically started after merging this PR?

... I think so, since there is a call to coveralls ... just merge and we will see ;)

> https://coveralls.io/builds/208262 shows for me the full files
> the change report might not be available if there are no previous coverage reports.

ha -- it works again! if I click on the "All 385" tab I can get through all the files now... or what did you mean by "full"?

> Also, the coveralls.io website has still some quirky things, looking at it for a bit earlier today.

yeah... but it works most of the time... and thus free cheese ;)

> What I have seen from pymvpa coverage reports is that the changes after a commit or PR are the most interesting features of coveralls.

yeap... although usually boringly steady ;)

> For the report of total file coverage we might need some tuning if we want to get a more realistic count of the core parts, like excluding some of the sandbox and test folders (e.g. matplotlib tests are not run).

yes -- look at fail2ban -- grep for 'pragma: no cover' ;)
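The `pragma: no cover` marker Yarik mentions is a comment that coverage.py recognizes by default: any line or branch carrying it is excluded from the coverage count. A minimal illustration (the function names here are made up, not from statsmodels):

```python
def double(x):
    """Normal code: counted by coverage as usual."""
    return x * 2

def debug_dump(state):  # pragma: no cover
    # Excluded from coverage: a debugging helper that tests never call,
    # so it should not drag the percentage down.
    print("state:", state)

def process(x, debug=False):
    result = double(x)
    if debug:  # pragma: no cover
        # This branch is also excluded from the branch-coverage count.
        debug_dump(result)
    return result

print(process(3))  # → 6
```

Running the tests under `coverage run` and then `coverage report` would show 100% for this module even though `debug_dump` and the `if debug:` branch are never exercised.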

> But maybe we don't want to whitewash our numbers. (In any case, that's for later.)

some places might indeed be worth excluding, but not too many

> Thanks a lot Yarik, I might have some questions when it's up and running for a while.

enjoy! glad to help (if only it wasn't this painful ;) )

> continuous testing, a vbench and now continuous coverage, maybe we get serious after all :)

;)

Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Senior Research Associate, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik

@josef-pkt
Member

TravisCI did run the tests on this without failure, but it doesn't show here.

@jseabold any comments? This looks ready for the green button.

@yarikoptic
Contributor

it could have been because I hadn't enabled Travis for this clone...
It's enabled now for the future (it probably won't run for the current build).

On Fri, 13 Sep 2013, Josef Perktold wrote:

> TravisCI did run the test on this without failure, but it doesn't show here.
>
> @jseabold any comment?, this looks ready for the green button



@jseabold
Member

Looks good to me. Thanks for the very useful addition.

@yarikoptic
Contributor

BTW -- pushed this little copyright tune-up to see if the travis report appears back here... give it a few minutes; it is running now: https://travis-ci.org/neurodebian/statsmodels

@coveralls

Coverage Status

Changes Unknown when pulling 46d2f91 on neurodebian:master into * on statsmodels:master.

@jseabold
Member

Sounds like we should also start marking what should be tested for coverage more systematically.

http://nedbatchelder.com/code/coverage/excluding.html
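Beyond the inline `# pragma: no cover` comments, coverage.py can centralize such rules in a `.coveragerc` file. A sketch of what that tuning could look like -- the omit paths are illustrative assumptions, not a recommendation from this thread:

```ini
# .coveragerc -- illustrative sketch only; the omit paths are assumptions
[run]
# directories to leave out of the measured set entirely
omit =
    */sandbox/*
    */tests/*

[report]
# line patterns excluded from the coverage count
exclude_lines =
    pragma: no cover
    raise NotImplementedError
    if __name__ == .__main__.:
```

The `[report] exclude_lines` patterns are regular expressions matched against each source line, which is how the `pragma: no cover` convention is wired up under the hood.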

@josef-pkt
Member

There is still something strange:
why does travis-ci run in neurodebian and not in statsmodels?
We should see a "good to merge" answer from Travis in this pull request.

Coverage gets uploaded correctly: https://coveralls.io/r/statsmodels/statsmodels

@jseabold
Member

Because that's where the PR is coming from?

@jseabold
Member

I don't know the travis details well, but if testing is enabled for the user branch I think it gets tested there not in statsmodels. That's how pandas is set up.

@josef-pkt
Member

@jseabold

> Sounds like we should also start marking what should be tested for coverage more systematically.

see my comments above; we can delay cleaning up coverage.
I was browsing some of the files with medium-low coverage again. There is some cleanup work to do, especially with old, incomplete, or unused functions (of mine). The coverage report is a good guide to where they are.

@josef-pkt
Member

The travis-ci results are also in our organization: https://travis-ci.org/statsmodels/statsmodels (I just didn't look there first)

@josef-pkt
Member

"Good to merge" is also here. I'm hitting the green button

Thanks again Yarik

@josef-pkt josef-pkt merged commit b548ad3 into statsmodels:master Sep 13, 2013

1 check passed

default The Travis CI build passed
@josef-pkt
Member

> I don't know the travis details well, but if testing is enabled for the user branch I think it gets tested there not in statsmodels. That's how pandas is set up.

Possible -- I never looked there.
On the github PR page there is always a link to the test run in the travis-ci statsmodels organization, and that's the only one I usually check.
