[BUG] logic problem with new CI - coverage of TestAllEstimators and TestAllObjects incorrect #6352

Comments
The "opposite case" is where there are changes in "other" modules, as well as in a forecaster.
I just commented in Discord as well: this job is not coming from the "other" case in the new CI. It's coming from the old CI relying on the Makefile; specifically, this line is being called to run the tests: `sktime/.github/workflows/test.yml`, line 298 at commit 3156135.
The new CI jobs would have been triggered by these lines instead: `sktime/.github/workflows/test_other.yml`, lines 66 to 82 at commit 3156135.
The solution could be to run … We cannot turn them off in "other", as estimators may live in "other" modules. But that will trigger only if an "other" and a "module" estimator are changed in the same PR, so perhaps low incidence.
I've created PR #6353, please take a look. But based on sample runs on my laptop, this is going to make the new CI significantly slower, even the test collection part. For example, see the times before and after the change for forecasting with just test collection (…).
Only test collection? Before we merge, then, can we diagnose where this is coming from? Collection should certainly not take so long.
I just checked construction times with the code below. It looks like construction is not the culprit, most likely.

```python
from sktime.utils.validation._dependencies import _check_estimator_deps
from sktime.registry import all_estimators

ests = all_estimators(return_names=False)


def _measure_init_time(cls, params=None):
    """Time construction of cls with the given parameter dict, in seconds."""
    from time import time

    params = params or {}
    start = time()
    try:
        cls(**params)
    except Exception:
        # failed constructions still yield a (near-zero) timing entry
        pass
    end = time()
    return end - start


times = []
for est in ests:
    # skip estimators whose soft dependencies are not installed
    if _check_estimator_deps(est, severity="none"):
        params = est.get_test_params()
        if not isinstance(params, list):
            params = [params]
        for param in params:
            times.append((est, _measure_init_time(est, param)))
```
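To see whether any single constructor dominates, a small helper like the following could rank the collected `(estimator, seconds)` pairs. This is a hypothetical follow-up, not code from the thread; the dummy data stands in for the real `times` list.

```python
# Hypothetical helper: summarize (estimator, seconds) pairs as collected
# by the timing loop above; the `times` naming follows that snippet.
def summarize_times(times, top=5):
    """Return (total_seconds, slowest `top` entries)."""
    ranked = sorted(times, key=lambda pair: pair[1], reverse=True)
    total = sum(sec for _, sec in times)
    return total, ranked[:top]


# dummy data for illustration only
total, slowest = summarize_times([("A", 0.1), ("B", 0.5), ("C", 0.02)], top=2)
print(slowest)  # [('B', 0.5), ('A', 0.1)]
```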
Just in case it's device or OS specific, what are the times for you for the above reported cases? I used these commands:

```shell
# pre-PR
python -m pytest sktime/forecasting --co
# post-PR
python -m pytest sktime/forecasting sktime/tests --co
```
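To compare such runs reproducibly, the collection call can be wrapped in a small timer. A sketch, with a made-up helper name; meaningful numbers require pytest and the target test paths to be installed:

```python
# Sketch: time `pytest --collect-only` for a list of test paths.
# Helper name and structure are illustrative, not from the thread.
import subprocess
import sys
import time


def time_collection(paths):
    """Return wall-clock seconds spent collecting tests under `paths`."""
    start = time.perf_counter()
    subprocess.run(
        [sys.executable, "-m", "pytest", *paths, "--co", "-q"],
        capture_output=True,
    )
    return time.perf_counter() - start
```

For example, `time_collection(["sktime/forecasting"])` versus `time_collection(["sktime/forecasting", "sktime/tests"])` would reproduce the pre- and post-PR comparison above.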
I think construction or soft dependencies do not affect the above two commands I shared. I am not 100% confident, though.
I ran the first command on main, python 3.11, windows. I cancelled it after it was running for 10 minutes. That is very odd. Last time we measured test collection time, it was 15 sec (minimal soft deps), or 1 min (many extras). Update: my …
What are your timings? I also note how strange this is, because isn't 600s the standard timeout for pytest collection?
I think we are getting closer - I updated issue #6344 with the problem description. I will now run a profiler on the test collection, which takes too long.
These are my times. But I wonder why mine are so much faster than yours! Idea: I don't have pytest-xdist or other plugins installed, so try disabling the ones configured in setup.cfg. Can you try that once?
Yes, it's configured in setup.cfg. My guess is it applies to test execution, not to collection. I don't know how to check that.
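One way to inspect this without running pytest is to read the `[tool:pytest]` section of `setup.cfg` directly. A minimal sketch, assuming the standard config layout; the helper name is hypothetical:

```python
# Sketch: read pytest's addopts (where e.g. pytest-xdist's -n flag would
# be configured) from a setup.cfg; helper name is hypothetical.
import configparser


def get_pytest_addopts(cfg_path="setup.cfg"):
    """Return the addopts string from [tool:pytest], or None if unset."""
    cfg = configparser.ConfigParser()
    cfg.read(cfg_path)
    if cfg.has_section("tool:pytest"):
        return cfg.get("tool:pytest", "addopts", fallback=None)
    return None
```

Alternatively, plugins can be disabled for a single run without editing the config, via pytest's `-p no:xdist` option.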
That is very odd. 20 sec is more in line with the times from this older issue: #4900
Should we move the test times discussion to here? #6344
Fixes #6352. This PR adds coverage of `sktime/tests` for the per-module CI workflows.
I think there is a bug in the new CI. The coverage is not affected, since the "old CI" is still running.
The issue occurs when, say, a forecaster changes.
The new CI will then trigger tests in `sktime.forecasting`, but not in `sktime.tests`, which contains `TestAllEstimators`.
So, we run the forecaster specific tests for the changed forecaster, but not the object and estimator specific tests.
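The gap can be sketched as a toy selection function; this is illustrative only, with hypothetical names, and is not the actual workflow code:

```python
# Toy model of the per-module test selection described in this issue.
# Names are hypothetical; the real logic lives in the GitHub workflows.
def select_test_paths(changed_module, include_shared=False):
    paths = [f"sktime/{changed_module}"]
    if include_shared:
        # sktime/tests contains TestAllEstimators / TestAllObjects,
        # which apply to estimators from every module
        paths.append("sktime/tests")
    return paths


# buggy behavior: a forecaster change runs only module-specific tests
print(select_test_paths("forecasting"))
# → ['sktime/forecasting']
# desired behavior: also collect the shared estimator tests
print(select_test_paths("forecasting", include_shared=True))
# → ['sktime/forecasting', 'sktime/tests']
```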
FYI @yarnabrina, is there a way to fix this?
The "one liner" to "run all tests for estimator X" is `check_estimator` - however, this is intentionally avoided as an entry point in the original setup, since it distributes less well across workers via `pytest-xdist`.