Add example to docs that shows lambda X, y: y.isna() #25
Comments
If you have a look at the implementation details, you'll notice that internally all the reasons are just callable objects. That means that you can already check for NaN via:

DoubtEnsemble(
    wrong_pred=lambda X, y: (model.predict(X) != y).astype(np.float16),
    nan_label=lambda X, y: y.isna(),
)

Since the lambda variant is very straightforward, I'd prefer not to over-populate the reasons.
Alright! Yes, that makes sense. Thanks! In that case I'll close this issue, and maybe you could consider whether adding this case to the examples would be useful ;)
An extra segment in the docs along the lines of "useful tricks" might be good, for sure. I'll re-open the issue and change the title so it's a todo.
merging PR now. |
Hey! First of all: this is a very cool project ;) I have been thinking about potential new "reasons" to doubt, and I personally often look into predictions generated by a model whenever the data instance had missing values (and part of the model pipeline imputes them)... So I wonder if it would be useful to have a FillNaNReason (or something similar) based, for example, on the MissingIndicator transformer.
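The MissingIndicator idea can also be expressed with the lambda trick discussed in this thread. Below is a hypothetical, self-contained sketch using plain pandas (not scikit-learn's MissingIndicator and not doubtlab's actual API) that assigns doubt to any row whose features contain a missing value; the name `missing_feature` is made up for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical reason: doubt any row with at least one missing feature value,
# i.e. a row that an imputation step in the pipeline would have filled in.
missing_feature = lambda X, y: X.isna().any(axis=1).astype(np.float16)

X = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [np.nan, 2.0, 3.0]})
y = pd.Series([0, 1, 0])

doubt = missing_feature(X, y)
print(doubt.tolist())  # → [1.0, 1.0, 0.0]
```

Such a lambda could be passed as a keyword argument in the same way as the `nan_label` example earlier in the thread, which is presumably why no dedicated FillNaNReason class is needed.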