This is all the work necessary to support this. See the commit logs for more details, but it boils down to:
WARNING: the above requires a bit of change in the way tests are written; in particular, don't call `.end()` and `done()` yourself, since coverage.js calls them as the last step of the test.
NOTE: I do use a sort of ugly and hackish `eval()` trick; see the notes below.
You can see an example report against my fork at:
```
@@            Coverage Diff            @@
##             master     #136   +/-   ##
==========================================
  Coverage          ?    35.08%
==========================================
  Files             ?        69
  Lines             ?      2018
  Branches          ?       279
==========================================
  Hits              ?       708
  Misses            ?      1105
  Partials          ?       205
```
After the application code has been instrumented, we use `nightmare.evaluate()` to fetch the coverage data from browser scope back to Node scope and save it in a JSON file under `/tmp`. This file is later used for the reports. Each nightmare sequence must not call `.end()` and `done()`, because this is done in coverage.js as the last step of the test! NOTE: we're using `eval()` on the contents of utils/coverage.js because all my attempts to execute this piece of code via a helper module failed. `eval()` in this scope is equivalent to inserting the contents of utils/coverage.js directly into the test.
Also mount `.nyc_output/` under `/tmp` inside the running container. This ensures coverage reports are saved directly onto the host, so we can pick them up from .travis.yml. Also add `--verbose` to the test runner so we can see which tests were executed in the CI logs.
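A minimal sketch of what that invocation could look like (the image name and test command are made up for illustration; only the volume mount and the `--verbose` flag reflect the change described above):

```shell
# Bind-mount the host's .nyc_output/ onto /tmp in the container, so the
# coverage JSON files the tests write under /tmp land on the host where
# .travis.yml can pick them up.
docker run --rm \
  -v "$(pwd)/.nyc_output:/tmp" \
  my-app-tests \
  npm test -- --verbose
```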
We're switching away from Coveralls because they don't support merging coverage results submitted by different build jobs. Codecov supports this, and it also seems to have a better interface.