
interpretation of refutation test results and decision making #1017

Closed

ianisadreamer opened this issue Aug 22, 2023 · 4 comments
Labels: question (Further information is requested), stale

Comments


ianisadreamer added the question label Aug 22, 2023

@ianisadreamer (Author)

Hi, I'm wondering: does an estimate have to pass all of the available refutation tests before we can be confident and say "this estimate is not problematic"? My situation is: the estimate passed the random common cause, data subset, and bootstrap refuters, but failed only the placebo treatment refuter. Should I trust this estimate?
In another thread, I saw the author suggest that the placebo test is more about the estimator. Should I change the estimator to see if it passes?
Thanks!
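
For context, a minimal sketch of how these four refuters can be invoked in DoWhy; the dataset here is synthetic and all variable names are illustrative, not taken from the original analysis:

```python
import dowhy.datasets
from dowhy import CausalModel

# Synthetic data stands in for the real dataset (which the issue does not show).
data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_samples=5000, treatment_is_binary=True
)
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)
estimand = model.identify_effect(proceed_when_unidentifiable=True)
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")

# The four refuters mentioned above. A passing refutation keeps the new effect
# close to the original estimate (for the placebo test: close to zero) and
# reports a large p-value.
for method, kwargs in [
    ("random_common_cause", {}),
    ("data_subset_refuter", {"subset_fraction": 0.9}),
    ("bootstrap_refuter", {}),
    ("placebo_treatment_refuter", {"placebo_type": "permute"}),
]:
    print(model.refute_estimate(estimand, estimate, method_name=method, **kwargs))
```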

@amit-sharma (Member) commented Aug 22, 2023

These refutations are necessary tests, so a good analysis should pass all of them. If your estimator fails the placebo test, something is wrong. The placebo test is not just about the estimator; it points to an error anywhere in your analysis, either in the modeling stage (the graph) or in the estimation stage.

In practice, changing the estimator is a good first step for debugging. If multiple estimators fail the test, you may need to look at your graph too and make sure that it is correct.

For more information, you can refer to the user guide: https://www.pywhy.org/dowhy/v0.10/user_guide/refuting_causal_estimates/index.html
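
Continuing the sketch from the question above (reusing `model` and `estimand`), the suggested debugging step might look like the following; propensity score matching is just one illustrative alternative estimator, not a specific recommendation from this thread:

```python
# Re-estimate with a different backdoor estimator and re-run the placebo test.
# If the placebo refutation keeps failing across estimators, revisit the graph.
estimate_psm = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_matching"
)
refutation = model.refute_estimate(
    estimand,
    estimate_psm,
    method_name="placebo_treatment_refuter",
    placebo_type="permute",
)
print(refutation)  # expect a new effect near 0 and a large p-value if it passes
```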

github-actions bot commented Sep 6, 2023

This issue is stale because it has been open for 14 days with no activity.

github-actions bot added the stale label Sep 6, 2023
github-actions bot commented Sep 14, 2023

This issue was closed because it has been inactive for 7 days since being marked as stale.

github-actions bot closed this as not planned Sep 14, 2023