Granger causality #74
Conversation
p_value_matrix_truth = np.array([[0, 0.5989, 0, 0.5397], [0.0006, 0, 0.0014, 0]])
adj_matrix_truth = np.array([[1, 0, 1, 0], [1, 1, 1, 1]])
Could you remind me how you got these truth values?
They were computed from the current version (commit 1ebf232).
Cool, please add comments on how the p-values are generated, same as in all other PRs. (It's good practice to check other PRs before writing one, so that you know the best practices we currently follow. :) ) This can reduce the review overhead.
Cool, thanks.
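For reference, here is a minimal sketch (not the actual code in this PR) of the kind of provenance comment being requested next to these benchmark arrays; the wording is illustrative:

import numpy as np

# Benchmark results below were generated by running the Granger causality test
# on the fixed test dataset with the causal-learn code as of commit 1ebf232.
# They are regression targets, not hand-verified ground truth: a test failure
# means the modified code disagrees with 1ebf232, not necessarily that it is wrong.
p_value_matrix_truth = np.array([[0, 0.5989, 0, 0.5397],
                                 [0.0006, 0, 0.0014, 0]])
adj_matrix_truth = np.array([[1, 0, 1, 0], [1, 1, 1, 1]])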
G = Granger()
coeff = G.granger_lasso(data=dataset)
print('Estimated matrix is \n {}'.format(coeff))
coeff_truth = np.array([[0.09, 0.1101, 0.1527, 0.1127, 0.0226, 0.1538],
And what about this one?
Same, computed from the current version 1ebf232.
Same, please add some comments. :)
A good mental practice: write code in such a way that, if you come back to it a year later, you still know what it means and how the values were generated.
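For the coefficient benchmark, a similar sketch with a tolerant comparison could look like the following; the stand-in "computed" row, the rounding assumption, and the tolerance are illustrative, and only the first row of coeff_truth from the diff is reused here:

import numpy as np

# coeff_truth holds the coefficients produced by G.granger_lasso on the fixed
# test dataset with the code as of commit 1ebf232. The stored values appear to
# be rounded to four decimals, so compare with a tolerance, not exact equality.
coeff_truth_row0 = np.array([0.09, 0.1101, 0.1527, 0.1127, 0.0226, 0.1538])
coeff_row0 = coeff_truth_row0 + 2e-5   # stand-in for a freshly computed row
np.testing.assert_allclose(coeff_row0, coeff_truth_row0, atol=1e-4)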
tofuwen left a comment:
Please add comments, and then I think this PR is good.
And by the way, did you check that the p-values / coefficients produced by the codebase make sense? I remember we once found a bug by noticing that p-values didn't make sense.

To add comments on how p-values are generated, here is an example: https://github.com/cmu-phil/causal-learn/pull/59/files

Something like:

# All the benchmark results of loaded files (e.g. "./TestData/benchmark_returned_results/")
# are obtained from the code of causal-learn as of commit 129dcdf (07-14-2022).
# We are not sure whether the results are completely "correct" (i.e. reflect the ground truth graph) or not.
# So if you find your tests fail, it means that your modified code is logically inconsistent
# with the code as of 129dcdf, but not necessarily that your code is "wrong".
# If you are sure that your modification is "correct" (e.g. it fixes some bugs in 129dcdf),
# please report it to us. We will then modify these benchmark results accordingly. Thanks :)

Basically the idea is: make sure others know how your p-values are generated (instead of leaving a magic number in the code). You just need to say that the p-values were generated with the code at commit xxxx.
Yes, it makes sense! Could you please report that bug? The current version works well for me.
Oh, I mean whether you checked that the resulting p-values make sense or not. Previously we found that the KCI p-values didn't make sense, and that's how we figured out a bug in KCI, right?

After you have checked the p-values and think they make sense, then I think we can push this PR.

@kunwuz could you help merge this PR?
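As an aside, one quick way to sanity-check that Granger p-values behave as expected is to simulate data with a known causal direction and confirm that the p-value is small only in that direction. The sketch below is not part of this PR; it uses statsmodels' grangercausalitytests as an independent reference (assuming statsmodels is installed), and the simulated coefficients and lag are arbitrary choices:

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    # x drives y with one lag, so x should Granger-cause y but not vice versa.
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

# grangercausalitytests checks whether the second column Granger-causes the first.
p_x_causes_y = grangercausalitytests(np.column_stack([y, x]), maxlag=1)[1][0]['ssr_ftest'][1]
p_y_causes_x = grangercausalitytests(np.column_stack([x, y]), maxlag=1)[1][0]['ssr_ftest'][1]
print(p_x_causes_y)  # expected: very small
print(p_y_causes_x)  # expected: not small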
Update

p_value_matrix_truth and adj_matrix_truth are computed from the current version b49980d.

Test plan

python -m unittest tests/TestGranger.py  # should pass

TODO
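For completeness, here is a minimal, self-contained sketch of a test file that the command above would pick up; the class and method names are assumptions, the real test also runs the Granger search itself (omitted here), and the 0.05 threshold is an assumption that happens to be consistent with the stored benchmark values:

import unittest
import numpy as np

class TestGranger(unittest.TestCase):
    def test_benchmark_consistency(self):
        # Benchmarks computed with the code at commit b49980d (see PR description).
        p_value_matrix_truth = np.array([[0, 0.5989, 0, 0.5397],
                                         [0.0006, 0, 0.0014, 0]])
        adj_matrix_truth = np.array([[1, 0, 1, 0], [1, 1, 1, 1]])
        # Thresholding the benchmark p-values at 0.05 reproduces the benchmark
        # adjacency matrix, i.e. the two stored arrays are mutually consistent.
        np.testing.assert_array_equal(
            (p_value_matrix_truth < 0.05).astype(int), adj_matrix_truth)

if __name__ == '__main__':
    unittest.main()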