[MNT] remove coverage reporting and pytest-cov from PR CI and setup.cfg #6363
Conversation
@fkiraly I want to be cautious about this. Can we test these:
My caution is mainly because it is highly counter-intuitive to me that coverage would affect timing by this much. The slowdown is more than 3-4x in your screenshots, and if that had been the general effect of pytest-cov, users would have detected it long ago. It is very popular and standard, so I am really wondering whether we are missing something else (though I don't have any alternative ideas yet).
According to the profiler, these parts do indeed create the overhead. How would you turn these off separately? Can you help? Would it be removing the
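For reference, a minimal command-line sketch of switching coverage off for a plain test run, assuming the overhead does come from the pytest-cov plugin; --no-cov is pytest-cov's documented disable flag, and the test path is only illustrative:

    # run a test module with coverage collection disabled (pytest-cov's --no-cov flag)
    pytest --no-cov sktime/utils/tests/

    # default run for comparison; this picks up any --cov options configured in setup.cfg addopts
    pytest sktime/utils/tests/

Comparing the wall-clock time of the two runs should show whether the coverage instrumentation accounts for the slowdown seen in CI.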
This PR also removes the badge from the readme, because it is misleading anyway, with or without this PR. We should find a way to display genuine coverage in the readme - I would consider that a separate issue (namely, #5090), and it would then include adding the correct coverage display to the readme.
So am I. Or perhaps cause and effect are hard to detect in general?
Yes, that only. Let's see what happens.
I think if the README shows 0% or similar, it may give potential users and contributors the negative impression that this framework is untested (e.g., I know I would feel the same about a new tool).
ok - I've added it back in the
Only 5 jobs got triggered, not a single testing job! How did it run everything earlier?
I see - I think I understand where the difference comes from. Previously,
I triggered a manual test-all workflow on this branch for debugging: https://github.com/sktime/sktime/actions/runs/8940323391
Thanks. Something is taking hours again - how do we find out which estimator it gets stuck on?
I am not aware of a better solution than going into verbose mode. By the way, have you seen the failures? It seems every single "other" run failed with this:
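On the question of locating a hanging estimator, a rough sketch of the verbose approach from the command line (stock pytest flags; the module path is only an example):

    # -v prints each test as it completes, so a hang shows up right after the last reported test
    pytest -v sktime/forecasting/tests/

    # additionally report the 25 slowest tests once the run finishes
    pytest -v --durations=25 sktime/forecasting/tests/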
Thanks for pointing this out - this is a bug with a test I added to make sure we test the
The bug surfaces only in the
I checked the other jobs, and so far no timeout failure. Only one module job failed, and it is for forecasting:

    FAILED sktime/forecasting/model_evaluation/tests/test_evaluate.py::test_evaluate_common_configs[backend8-scoring1-refit-1-10-fh5-ExpandingWindowSplitter] - OverflowError: Python int too large to convert to C long

Ref. https://github.com/sktime/sktime/actions/runs/8940323391/job/24558260007#step:3:6594
Any idea if it's sporadic? We'll probably know from the random seed diagnostic. (FYI @benHeid)
This is definitely a new one - I have not seen this before. However, there have been failures in
Probably not, as that one does not add random seeds except in
@fkiraly @yarnabrina, mentioning this here as it might be related.
@fkiraly I think you use VS Code and the integrated debugging? Did you ever face the issue @Abhay-Lejith mentioned? I definitely face a lot of issues with our current setup.cfg, so I have a local patch to ignore that file altogether, essentially doing what @Abhay-Lejith has done with VS Code settings, so I never faced this issue myself. If this is indeed an issue, as the documentation seems to suggest, I expect it to be a very common thing, and I am wondering why no one ever reported it before? 😕
Yes, the GUI integrated breakpoint debugging never worked, so I ended up adopting a more manual workflow and also ignoring the file in practice. I applaud @Abhay-Lejith for having finally identified the reason.
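For context, the usual workaround when coverage flags in addopts interfere with breakpoint debugging is to disable pytest-cov for the debugged run; a rough command-line sketch (--no-cov is pytest-cov's documented flag for exactly this situation, the test path is only illustrative):

    # disable coverage instrumentation so the debugger's tracing is not disturbed
    pytest --no-cov -x sktime/forecasting/tests/test_naive.py

An equivalent effect can presumably be had by pointing the IDE's pytest arguments at the same flag instead of patching setup.cfg locally.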
Perhaps groupthink bias, i.e., everyone thinks it works for everyone else and that they would be considered stupid if they raised it in public? Nothing could be further from the truth, but sometimes that is how the mind works. Shall we remove the flags then, since they seem to cause problems systematically?
If we are disabling coverage everywhere, should we drop them from test dependencies too?
We should probably have some replacement plan in mind, e.g., where and when we run coverage. On the full test run?
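A hedged sketch of what such a replacement could look like, e.g. collecting coverage only in a scheduled full test run rather than on every PR (the flags are standard pytest-cov options; which workflow file this would live in is left open):

    # scheduled full run: collect coverage and emit an XML report for later upload
    pytest --cov=sktime --cov-report=xml

    # regular PR runs: plain pytest, no coverage instrumentation
    pytest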
This PR removes generation of coverage reports and the installation and use of pytest-cov from standard CI. It also removes the (unreliable) coverage badge from the README.

Reasons: