[ENH] idea: testing environments by estimator #5719
Can you please explain in a bit more detail? I want to understand it better first. When I first read it, I interpreted it as follows.
After a second read, I am not sure at all. Can you please share the steps you are planning (in Python and in CI YAML)? If possible, please tell me at which step my understanding above went wrong; that will make it easier for me to follow.
Yes, I think you got right what I meant, except for step 7. The dynamic output should be:

* Part 1: find all estimators that are affected by the change (affected, e.g., via inheritance etc.)
* Part 2: create environments and run tests

In most cases, only one estimator is affected, and it is then run for the product of Python version and OS, with the current primary satisfying environment, i.e., the installed package versions satisfying the estimator's requirements.
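A minimal sketch of Part 2 as a CI matrix generator, assuming sktime's `all_estimators` registry API and the `python_dependencies` tag; the list of affected estimator names is taken as given, standing in for the change-detection logic of Part 1, and the Python-version/OS lists are illustrative:

```python
# Sketch only: emits one CI job spec per (affected estimator, Python, OS)
# triple. Assumes sktime is installed; the version/OS lists are examples.
import itertools
import json

from sktime.registry import all_estimators

PYTHON_VERSIONS = ["3.9", "3.10", "3.11"]
OPERATING_SYSTEMS = ["ubuntu-latest", "macos-latest", "windows-latest"]


def build_test_matrix(affected_names):
    """One CI job spec per (affected estimator, Python version, OS) triple."""
    registry = dict(all_estimators())
    matrix = []
    for name, py, os_ in itertools.product(
        affected_names, PYTHON_VERSIONS, OPERATING_SYSTEMS
    ):
        # the "python_dependencies" tag lists the estimator's soft dependencies
        deps = registry[name].get_class_tag("python_dependencies") or []
        if isinstance(deps, str):
            deps = [deps]
        matrix.append(
            {"estimator": name, "python": py, "os": os_, "requirements": deps}
        )
    return matrix


if __name__ == "__main__":
    # a CI workflow could consume this JSON to spawn one job per entry
    print(json.dumps({"include": build_test_matrix(["ARIMA"])}))
```

A downstream CI step would then create one environment per entry, install the listed requirements, and run only that estimator's tests.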
Here's an idea to achieve this from a different discussion:
This should ideally work with a definite guarantee, given correct parsing of only the dedicated blocks; a variant with a slight chance of false positives is already addressed by @fkiraly in #5727:
…yproject.toml` (#5727)

This PR adds a condition to differential testing, so classes whose dependencies have been updated in `pyproject.toml` are always tested. The logic is based on a utility that determines which package dependencies are changed by a pull request, and adds that condition. The utility could further be useful in:

* hypothetical test environment setup per estimator, such as discussed in #5719
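A hedged sketch of the kind of utility described there (not the actual implementation from #5727): diff the dependency sets declared in two versions of `pyproject.toml`. It assumes Python 3.11+ for the stdlib `tomllib` module and the third-party `packaging` library; function names are illustrative.

```python
# Sketch only: report which package requirements differ between two
# versions of a pyproject.toml (e.g., base branch vs. PR branch).
import tomllib

from packaging.requirements import Requirement


def _dep_set(pyproject_text):
    """Collect all requirement strings, keyed by package name."""
    cfg = tomllib.loads(pyproject_text)
    project = cfg.get("project", {})
    deps = list(project.get("dependencies", []))
    # optional-dependencies covers soft-dependency extras
    for extra in project.get("optional-dependencies", {}).values():
        deps.extend(extra)
    return {Requirement(d).name: d for d in deps}


def changed_packages(old_toml, new_toml):
    """Names of packages whose requirement string differs between versions."""
    old, new = _dep_set(old_toml), _dep_set(new_toml)
    names = set(old) | set(new)
    return sorted(n for n in names if old.get(n) != new.get(n))
```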
An orthogonal idea for testing, FYI @yarnabrina:
If I write you Python code that retrieves:
Would it be easy to set up CI that runs tests specific to these estimators? Say, if this is controllable via a `pytest` flag? I think this is the only setup that truly scales as the number of estimators goes to infinity, because ultimately task-specific modules will have the same problem of interacting dependency trees.
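A minimal sketch of what such a flag could look like, using standard `pytest` hooks in a `conftest.py`; the `--estimators` option name is hypothetical, and selecting by substring match on the test node id is a simplification.

```python
# conftest.py — sketch only: keep, at collection time, only the tests whose
# node id mentions one of the estimator names passed via --estimators.
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--estimators",
        default="",
        help="comma-separated estimator names to restrict the test run to",
    )


def pytest_collection_modifyitems(config, items):
    wanted = {e for e in config.getoption("--estimators").split(",") if e}
    if not wanted:
        return  # no flag given: run everything
    selected = [
        item for item in items if any(name in item.nodeid for name in wanted)
    ]
    deselected = [item for item in items if item not in selected]
    config.hook.pytest_deselected(items=deselected)
    items[:] = selected
```

With this in place, the dynamically generated CI job could run something like `pytest --estimators=ARIMA,ThetaForecaster` (names illustrative) to restrict the run to the affected estimators.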