This repository has been archived by the owner on Aug 7, 2020. It is now read-only.
Is it possible that some files were causing silent errors (non-existent script references or something similar) that could have affected the total coverage? So, basically, with 1.0.8 I fixed some such errors, and now you're actually getting more total statements in your scripts (meaning it covers more scripts than it did before)? Could you compare the total statement counts between the reports from 1.0.7 and 1.0.8, as well as the set of files Saga actually reports versus the set you expected it to report?
It's just that I'm using it myself on a pretty big project and the coverage numbers didn't change at all between those versions, and I didn't make any changes to the instrumenter or the actual data-collection routines apart from trying to make sure that even if a test fails dramatically, it doesn't affect other test runs... Could you double-check the numbers for me?
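For what it's worth, one quick way to make that comparison, assuming Saga was configured to emit LCOV output (the report paths below are hypothetical examples, not paths the project is known to produce), is a short script that diffs the per-file line totals between the two runs:

```python
#!/usr/bin/env python3
"""Diff two LCOV coverage reports: total instrumented lines and the set of files.

A minimal sketch, assuming Saga writes standard LCOV tracefiles; the paths
below are hypothetical.
"""

def parse_lcov(path):
    """Return {source_file: lines_found} from an LCOV tracefile."""
    totals, current = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("SF:"):    # start of a per-file record
                current = line[3:]
            elif line.startswith("LF:"):  # total instrumented ("found") lines
                totals[current] = int(line[3:])
    return totals

old = parse_lcov("coverage-1.0.7/saga.lcov")  # hypothetical path
new = parse_lcov("coverage-1.0.8/saga.lcov")  # hypothetical path

print("total statements: %d -> %d" % (sum(old.values()), sum(new.values())))
print("only in 1.0.7:", sorted(set(old) - set(new)))
print("only in 1.0.8:", sorted(set(new) - set(old)))
```

Any file that shows up in only one of the two sets would point at the scripts whose silent load errors were fixed in 1.0.8, and the change in the total denominator would explain a drop in the overall percentage.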
Hi Timur:
It must have been something that was checked into our SVN repository. I re-ran the CI job and the coverage numbers did not change, so I will close this issue.
Thanks
Mike
Our coverage numbers fell from around 48% to 26% when upgrading from 1.0.7 to 1.0.8 on the same set of specs/JavaScript files. Any ideas?