[MRG] Adds XFAIL/XPASS to common tests #16328
Conversation
I'm biased, but I think it's more natural to skip or mark a test as xfail in the place where the calculation is done, rather than keeping a global dict somewhere else. Estimator checks are already complex for new contributors, and this will make them more complex because the behavior of a common check is no longer self contained. Here we mark it based on estimator name and check name, but there could potentially be more logic determining when to call xfail or skip (say, something based on parameters of the estimator used, etc.). If we could use pytest in common tests, we would just call `pytest.xfail` inside the check itself.
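(A rough sketch of that idea, assuming the check can call pytest directly; the estimator name, check body, and reason string below are illustrative, not the actual PR contents:)

```python
import pytest

def check_methods_subset_invariance(name, estimator_orig):
    # Sketch: the check itself decides to xfail, right where the failing
    # computation lives. "BernoulliRBM" is an illustrative special case.
    if name == "BernoulliRBM":
        pytest.xfail("score_samples is not invariant under sample subsets")
    ...  # the actual invariance assertions would follow here
```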
I do not think it is too complicated for a seasoned pytest user. I am thinking of this from the perspective of a third-party user of `check_estimator`.
Absolutely, contrib users would have to do it that way, and if there is a way of making it smoother for them, all the better. We are not a contrib user though; we can change the code inside `check_estimator`, and just as we don't keep a global list of all the tests that are skipped, we shouldn't keep one for xfail, I think.
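For contrast, a minimal sketch of what such a centralized registry (the approach discussed for #16306) might look like; the entries and the `_maybe_xfail` helper are made up for illustration, not the contents of either PR:

```python
import pytest

# Illustrative module-level registry, mapping (estimator name, check name)
# to an xfail reason, consulted by the test runner instead of the check.
_XFAIL_CHECKS = {
    ("BernoulliRBM", "check_methods_subset_invariance"):
        "score_samples is not invariant under sample subsets",
}

def _maybe_xfail(estimator_name, check_name):
    # Hypothetical helper the test runner would call before each check.
    reason = _XFAIL_CHECKS.get((estimator_name, check_name))
    if reason is not None:
        pytest.xfail(reason)
```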
I think having special cases inside the checks keyed on estimator names can surprise third-party users. For example, let's say someone creates an estimator named "BernoulliRBM" that passes `check_methods_subset_invariance`: the name-based special case would still be applied to it. For the test skips, we are mostly using conditions that do not need a global list, for example:
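(A hedged sketch of such a condition-based skip; the `non_deterministic` tag, the `_get_tags` call, and the check body are assumptions used for illustration, not quotes from the codebase:)

```python
from unittest import SkipTest

def check_some_property(name, estimator_orig):
    # Sketch: skip based on a declared capability instead of a name match.
    # The "non_deterministic" tag is used here purely as an example condition.
    tags = estimator_orig._get_tags()
    if tags.get("non_deterministic", False):
        raise SkipTest("estimator is non-deterministic")
    ...  # assertions that only make sense for deterministic estimators
```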
The only skip depending on name is the `BernoulliRBM` one. In my mental testing framework, let's say we have a check that fails: I would mark that check as xfail. Anyway, I am happy with either solution; fundamentally, I agree it would be nice to move forward on the PRs mentioned in #16306 (comment).
I would want to use `check_estimator` the way a third-party user would. In the end, I want to see an ecosystem of estimators that pass `check_estimator`.
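For reference, a third-party test could look roughly like this with `parametrize_with_checks`; whether it accepts estimator classes or instances depends on the scikit-learn version, and the matched check name here is purely illustrative:

```python
import pytest
from sklearn.linear_model import LogisticRegression
from sklearn.utils.estimator_checks import parametrize_with_checks

@parametrize_with_checks([LogisticRegression()])
def test_sklearn_compatible_estimator(estimator, check):
    # Sketch: a third-party package xfails one known-failing check by name.
    # `check` is typically a functools.partial, hence the getattr dance to
    # reach the underlying function's name.
    if getattr(check, "func", check).__name__ == "check_fit2d_predict1d":
        pytest.xfail("known limitation in this estimator")
    check(estimator)
```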
For the identically named estimators, it is indeed a valid use case, but one also has to agree that naming estimators identically to scikit-learn's and then using scikit-learn tooling on them is not the best idea.
https://github.com/scikit-learn-contrib/scikit-learn-extra/ is looking for maintainers in case you are looking for additional experience as a contrib developer :) I share your vision that we should make life easier for contrib developers; I'm just saying that I would rather it did not involve making things more annoying for scikit-learn contributors. Personally, when I work on one of the above linked PRs, I have one tab for the common check and one tab for the estimator code. I don't think that having to open a third one for a centralized list of known failures would make that workflow noticeably worse.

The idea that common checks should not contain any scikit-learn specific logic is laudable, but I don't think it is very realistic in the near future. That's why we wanted to add this mechanism in the first place.
On the contrary, I would go to the check's code to see what failed exactly, and while I'm there I can just as well mark it as xfail. For contrib users, in my experience one would skip common tests about as often as mark them as xfail, and either would need proper documentation in any case. We also have a skip list for the docstring tests.

In any case, we need to convince some other reviewer to approve this or the other PR; it's not too important in the end, and both would solve the immediate problem :)
Reference Issues/PRs
Alternative to #16306
What does this implement/fix? Explain your changes.
Places the xfail markers into the tests themselves.
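One detail worth noting, sketched below with a hypothetical `_generate_checks` helper: an imperative `pytest.xfail()` call aborts the test immediately and can therefore only ever report XFAIL; to also surface XPASS when an expected failure unexpectedly passes, the expectation has to be attached as a marker:

```python
import pytest

def _generate_checks():
    # Hypothetical helper yielding (estimator, check, expected_to_fail)
    # triples; stands in for however the common tests enumerate checks.
    yield from []

@pytest.mark.parametrize(
    "estimator, check",
    [
        pytest.param(
            estimator,
            check,
            # Non-strict xfail: a passing check is reported as XPASS.
            marks=[pytest.mark.xfail(reason="known failure")]
            if expected_to_fail else [],
        )
        for estimator, check, expected_to_fail in _generate_checks()
    ],
)
def test_estimators(estimator, check):
    check(estimator)
```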
Any other comments?
`check_methods_subset_invariance` feels like it should be broken down into multiple tests, one for each method (see the sketch below). CC @rth
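(A rough sketch of that suggestion, with hypothetical helper names, following the partial-based style the common tests already use:)

```python
from functools import partial

def _check_method_subset_invariance(name, estimator_orig, method):
    # Sketch: the body of the current check, restricted to a single method.
    ...

def _yield_subset_invariance_checks(estimator):
    # Hypothetical generator: one check per available method, so each method
    # can pass, fail, or be marked xfail independently.
    for method in ("predict", "predict_proba", "decision_function", "transform"):
        if hasattr(estimator, method):
            yield partial(_check_method_subset_invariance, method=method)
```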