
Tests getting skipped #1437

Closed
dsherry opened this issue Nov 13, 2020 · 3 comments
Labels: bug (Issues tracking problems with existing features.), testing (Issues related to testing.)

Comments

dsherry (Contributor) commented Nov 13, 2020

@christopherbunn and I noticed that a bunch of tests are getting skipped on both the core and non-core unit test runs.

Here's a job where we noticed that, and here are the core=false and core=true runs on linux 3.8.

We searched the output for one test which we noticed was failing:

evalml/tests/pipeline_tests/test_time_series_pipeline.py::test_fit_drop_nans_before_estimator[TimeSeriesRegressionPipeline-Random Forest Regressor-0-2-False-True]

It was marked as skipped for both unit test jobs! That doesn't make sense.

dsherry added the bug and testing labels on Nov 13, 2020

freddyaboulton (Contributor) commented Nov 13, 2020

@dsherry That test should be getting skipped regardless of whether it's a core-dependencies run or not. In that parametrization, False means delayed features are not computed and True means only the y target is passed in, so the combination doesn't make sense. The discussion of why we skip it is on the time series PR.

If we have a convention that tests should only be skipped for core-dependencies reasons, then I didn't know hehe. In my defense, there are a lot of parameter combinations we want to test, and sometimes it's easier to take the product of all parameters and skip the combinations that don't make sense than to individually specify only the valid ones.
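
For concreteness, the pattern looks roughly like this. This is just a minimal sketch, not the actual evalml test; the parameter names and skip reason are illustrative:

```python
import pytest

# Parametrize over the full product of parameter values, then skip the
# combinations that don't make sense inside the test body.
@pytest.mark.parametrize("include_delayed_features", [True, False])
@pytest.mark.parametrize("only_use_y", [True, False])
def test_fit_drop_nans_before_estimator(include_delayed_features, only_use_y):
    if only_use_y and not include_delayed_features:
        # With only y passed in and no delayed features computed, there is
        # nothing for the estimator to fit on, so this combination is skipped
        # on purpose rather than being a real failure.
        pytest.skip("Parameter combination is not meaningful")
    # ... the real test would build, fit, and check the pipeline here ...
```

Skips produced this way show up in both the core and non-core jobs, since the combination is invalid regardless of which dependencies are installed.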

If it's failing on @christopherbunn's branch, then I don't think that's any different from what we saw last week, where tests unrelated to his change were mysteriously failing?

dsherry (Contributor, Author) commented Nov 19, 2020

Thanks @freddyaboulton, got it, that makes sense re: that specific test. Nope, we never saw that particular test failing, on main or on Chris's branch, so there's no issue there.

Do you think that applies to the other ~500 tests? I suppose the action item for this issue is to get a list of which tests are being skipped, pick a handful of them to dig into, and understand why they're getting skipped. Let's see what comes back from that. If the answer is that they're skipped on purpose for a good reason, like the test we were discussing, that's ok.
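
One way to pull that list together (just a sketch, neither of these is wired into our CI today): run the suite with `pytest -rs` to get the skip summary with reasons, or log skips from a conftest.py hook:

```python
# conftest.py (hypothetical, not currently in the repo): print every skipped
# test and its skip reason so the CI logs can be grepped for a full list.
def pytest_runtest_logreport(report):
    if report.skipped:
        # For skipped tests, report.longrepr is typically (path, lineno, reason).
        print(f"SKIPPED: {report.nodeid} -> {report.longrepr}")
```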

dsherry (Contributor, Author) commented Jan 7, 2021

The explanation for why the specific unit test above was always skipped makes sense (it's skipped on purpose via pytest parametrize), so let's close this.

dsherry closed this as completed Jan 7, 2021