MAINT: report more accurate coverage results #14653
Conversation
(force-pushed from 48f59df to 3ce600d)
.coveragerc (Outdated)

```diff
@@ -5,3 +5,4 @@ omit =
     scipy/setup.py
     scipy/*/setup.py
     scipy/signal/_max_len_seq_inner.py
+    **/tests/**
```
FWIW, my view here is that we should explicitly include the coverage of the unit tests. This can help to discover cases, for example, where two unit tests have the same name and one of them doesn't get executed, etc.
I think the overall coverage results are always going to be a bit of a mess with current tools, but explicitly ignoring some third-party libraries may be useful I suppose. That said, I still like having access to information related to what code actually gets flushed by control flow.
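The duplicate-test-name failure mode mentioned above is easy to reproduce in a few lines (a minimal sketch, not scipy code): in Python, a second `def` with the same name silently rebinds it, so the first test body is never collected or run — and a coverage report that includes test files would show its body as uncovered.

```python
import sys

def test_addition():
    # This version is silently shadowed and never executed;
    # its body would show up as uncovered in a coverage report.
    assert 1 + 1 == 3  # would fail if it ever ran

def test_addition():  # same name: rebinds the module attribute
    assert 2 + 2 == 4

# Only one function object remains bound to the name, so a test
# runner collecting this module sees a single test.
names = [n for n in dir(sys.modules[__name__]) if n.startswith("test_")]
print(names)  # → ['test_addition']
```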
> FWIW, my view here is that we should explicitly include the coverage of the unit tests. This can help to discover cases, for example, where two unit tests have the same name and one of them doesn't get executed, etc.
Ah, yes. Agreed. But it then overreports because... tests are always run (100% coverage) :) Mainly, I want to make it easier to discover uncovered code and to check whether a pull request decreases the overall coverage. I don't think that checking tests for coverage would hurt that much, so I can revert this change.
> I think the overall coverage results are always going to be a bit of a mess with current tools, but explicitly ignoring some third-party libraries may be useful I suppose
Sounds good. I will try to add exemptions for third-party code and code generators. I think the coverage reports should be accurate enough after doing that. Then, we can decide if we want to exclude 0% coverage files or not.
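To make the exemptions concrete, a `.coveragerc` along these lines could work (a sketch only; the generator and build-helper paths are illustrative, not the final list):

```ini
[run]
branch = True
omit =
    scipy/setup.py
    scipy/*/setup.py
    # code generators follow a common naming convention
    scipy/**/_generate_pyx.py
    # build helpers that never run under the test suite
    scipy/_build_utils/*

[report]
# keep the report focused on files with uncovered lines
skip_covered = True
skip_empty = True
```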
For this, we could simply skip fully covered and empty files in the report with the following configuration:

```toml
[tool.coverage.report]
skip_covered = true
skip_empty = true
```
👍 This would be useful; we don't see many reviews where we point to uncovered code. This could also benefit newcomers, who could add some tests as their first PRs.
Hmm, I do this pretty often already
Yes, I've seen this and it's great! But so far I don't remember seeing anybody else doing it. So maybe it's the tooling or the reporting that prevents it? I don't know, but there is definitely something to do here.
It comes in handy. I remember using it to point out uncovered code. Sometimes it also helps to find redundant or impossible cases.
@tirthasheshpatel are you still working on this one? The release of
EDIT: I confirm this is what is happening; we had the warning before but it was not crashing anything. I will make a PR to quickly fix this and you can improve it later here.
Yes. Sorry for the delay! I was busy with UNU.RAN stuff but I will complete this one this week.
Thanks for fixing it! I will take a look at it and improve it here.
No worries, thanks for the update 😃
(force-pushed from 3ce600d to e9304c1)
The coverage results are not uploaded at all now. The CI just says:
Same on master
Actually, if you look at the commit/CI history, we stopped having coverage uploads on the 17th of September. @rgommers, did we change anything with the tokens on GitHub or Codecov? I could not see any changes in the code that would indicate anything here (versions are the same as well for codecov etc.). Here is a working log: https://dev.azure.com/scipy-org/SciPy/_build/results?buildId=14021&view=logs&jobId=71a75959-4ac6-5325-7763-a4017311fdf7&j=71a75959-4ac6-5325-7763-a4017311fdf7&t=4f5b8028-cf04-57f1-c70d-d7e4b72e1502
Hmm, that's odd. Not sure why this is happening. Let me try to find out what's going on there.
It's even stranger because the coverage reports were still uploaded in the PRs of the 17th of September 🤷‍♂️. Sounds like an update to how Azure or Codecov passes the secrets happened right around then.
BTW, it seems the upload script we are using is deprecated and will be removed in 2022: https://about.codecov.io/blog/introducing-codecovs-new-uploader/
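For reference, switching to the new uploader in the Azure template would look roughly like this (a sketch; the download step and `displayName` are illustrative, and the `-X gcov` flag mirrors the existing script's "skip gcov" behavior):

```yaml
- script: |
    # Download the new Codecov uploader (replaces the deprecated bash script)
    curl -Os https://uploader.codecov.io/latest/linux/codecov
    chmod +x codecov
    # Skip running gcov, since coverage was already collected earlier
    ./codecov -X gcov
  displayName: 'Upload coverage information'
  env:
    # Map the secret pipeline variable into the step's environment
    CODECOV_TOKEN: $(CODECOV_TOKEN)
```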
I think this might be this actually:
It's a "funny" mechanism to say the least... |
Also, what about using the job
Why "as well"? We need a single job for code coverage, right (running the full test suite), not two or more? Or do you mean moving it there? Then I'd say let's do whatever is easiest.
Sorry, yes, I meant moving it there.
The new uploader is also giving the same error. Looks like something else is causing it.
A number of other projects have also started having this issue with Codecov recently. For my package (https://github.com/ConorMacBride/mcalf) I tried the new uploader as well and it did not fix the issue. I also tried providing the token (which shouldn't be necessary, as it's a public project) but that had no effect with either the new or the old uploader. I even created a totally new Azure project and the issue still persisted.
Actually, a correction to the above: I wasn't adding the token properly, and it now seems to work for my package when I include the token! Technically the token should not be needed; however, I think there's no harm in providing it if it means Codecov works more reliably, provided you can still keep the token secret.
Thanks for letting us know, @ConorMacBride! @rgommers can you add the codecov token as a secret? I can force push to check if it works.
This one, right, the one that's not needed? Looks like that can optionally be added as a
Hi @rgommers, yes, that's the correct token. However, I don't think you need to use the Key Vault. As the token is only used by one Pipeline, you should be able to add a secret variable to the Pipeline directly using these instructions: https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#secret-variables
That was helpful, thanks @ConorMacBride. Done, should be accessible in |
ci/azure-travis-template.yaml (Outdated)

```diff
           # Skip running gcov since we have already done it
-          ./codecov-upload.sh -X gcov
+          ./codecov -X gcov
           displayName: 'Upload coverage information'
```
Suggested change:

```diff
           displayName: 'Upload coverage information'
+          env:
+            CODECOV_TOKEN: $(CODECOV_TOKEN)
```

That should be enough to make the token accessible to the codecov script.
Committed. Hope this fixes it!
Looks like there is an issue with the token:

    [2021-10-05T16:28:14.801Z] ['error'] There was an error running the uploader: Token found by environment variables with length 16 did not pass validation
That sounds like `CODECOV_TOKEN` was not found and the literal string `$(CODECOV_TOKEN)` was passed as the token. I just had a look at the code and I can't see why it's not found.
Hi @rgommers, could you double check that the new variable was saved? (I often forget to click the save button as it's hidden at the very bottom right of the screen when in full-screen.) Thank you.
Okay, sorry. Must be something else. Thanks for checking.
I am not sure what's happening either. I will try to hunt through the Azure Pipelines docs to find something relevant.
Looks like
Excellent! It's also working for my package again without
(force-pushed from 186b098 to aecce67)
We can also ping @thomasrockhu on the Codecov team for tricky problems like this. I guess we're OK now?
@tylerjereddy yikes, the newest version has a fix for
@tirthasheshpatel why revert to the old uploader? It will stop working soon, no?
CI fails with this error:
So, I am just checking whether the old uploader works. If it does, we can use it to track the coverage in this PR and switch back to the new uploader before merging, or do that in some other PR.
In case it helps, I still think this would be valuable :)
Co-authored-by: Conor MacBride <conor@macbride.me>
(force-pushed from 4f75779 to 333e456)
Closing this. I will open a follow-up PR once #15958 merges.
Reference issue
N/A
What does this implement/fix?
The coverage results are significantly off because tests and a lot of third-party C/C++/Fortran libraries are checked for coverage. It would be nice to have a better idea of how much actual code is covered by tests. This is helpful because it:

(`_unuran_utils.py` and `_boost_utils.py` are misplaced under the `_lib` module, while they should ideally be under `_build_utils`.)

One of the concerns might be the maintenance cost incurred by this: I think it should be fairly low, because special files that need no testing (like code generators) either follow a common naming convention (e.g. `_generate_pyx`) or are rarely added, so no new exemptions should be necessary in the future. Let me know if you think this is worth doing.

Apart from tests, we should also ignore:

- code generators (e.g. `_generate_pyx.py`)
- build helpers (`_build_utils`)

I will work more on this if others agree this is useful.
Additional information
Full coverage log