Conversation

@aoqiz (Contributor) commented Sep 12, 2022

Update

  • Add an auto test for Granger causality. The ground-truth p_value_matrix_truth and adj_matrix_truth are computed from the current version b49980d; a minimal sketch of such a test appears after the TODO list below.

Test plan

python -m unittest tests/TestGranger.py # should pass

[screenshot: output of the test run, showing the test passing]

TODO

  • Add tests for more complicated data, such as non-Gaussian data and a mix of continuous and discrete data.
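For reference, here is a minimal sketch of what such a test looks like. It assumes causal-learn's Granger.granger_test_2d returns a p-value matrix and an adjacency matrix; the dataset path is a placeholder, and the benchmark values are the ones this PR checks in:

import unittest

import numpy as np
from causallearn.search.Granger.Granger import Granger

class TestGranger(unittest.TestCase):
    def test_granger_test_2d(self):
        # Placeholder path: the actual test ships its own fixed dataset.
        dataset = np.loadtxt('TestData/granger_test_data.txt')

        G = Granger()
        p_value_matrix, adj_matrix = G.granger_test_2d(data=dataset)

        # Benchmarks were generated by running this same code at commit
        # b49980d: regression references, not hand-derived ground truth.
        p_value_matrix_truth = np.array([[0, 0.5989, 0, 0.5397],
                                         [0.0006, 0, 0.0014, 0]])
        adj_matrix_truth = np.array([[1, 0, 1, 0], [1, 1, 1, 1]])

        np.testing.assert_allclose(p_value_matrix, p_value_matrix_truth, atol=1e-4)
        np.testing.assert_array_equal(adj_matrix, adj_matrix_truth)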

Comment on lines +55 to +56
p_value_matrix_truth = np.array([[0, 0.5989, 0, 0.5397], [0.0006, 0, 0.0014, 0]])
adj_matrix_truth = np.array([[1, 0, 1, 0], [1, 1, 1, 1]])
Contributor:

Could you remind me how you got these truth values?

Contributor Author (@aoqiz):

It was computed from the current version 1ebf232.
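For instance, regenerating them looks roughly like this (a sketch: the dataset path is a placeholder for the fixed test data, and granger_test_2d is assumed to return the p-value and adjacency matrices):

import numpy as np
from causallearn.search.Granger.Granger import Granger

# Load the fixed test data (placeholder path) and run the Granger causality
# test at commit 1ebf232; the printed matrices become the benchmark values.
dataset = np.loadtxt('TestData/granger_test_data.txt')
G = Granger()
p_value_matrix, adj_matrix = G.granger_test_2d(data=dataset)
print(p_value_matrix)  # copied into the test as p_value_matrix_truth
print(adj_matrix)      # copied into the test as adj_matrix_truth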

Contributor:

Cool, please add comments on how the p-values are generated, same as in all other PRs (it's good practice to check other PRs before writing one, so that you know the best practices we currently follow :) ). This can reduce the review overhead.

Contributor Author (@aoqiz):

Cool, thanks.

G = Granger()
coeff = G.granger_lasso(data=dataset)
print('Estimated matrix is \n {}'.format(coeff))
coeff_truth = np.array([[0.09, 0.1101, 0.1527, 0.1127, 0.0226, 0.1538],
Contributor:

And what about this?

Contributor Author (@aoqiz):

Same, computed from the current version 1ebf232.

Contributor:

Same, please add some comments :)

A good mental practice: write code in such a way that, if you come back to the data a year later, you still know what the code means and how the values were generated.
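For illustration, the requested style might look like this (a sketch, not this PR's actual diff):

import numpy as np

# The benchmark values below were produced by running causal-learn's Granger
# causality test as of commit 1ebf232 on the fixed test dataset; they are
# regression references, not hand-derived ground truth.
p_value_matrix_truth = np.array([[0, 0.5989, 0, 0.5397],
                                 [0.0006, 0, 0.0014, 0]])
adj_matrix_truth = np.array([[1, 0, 1, 0], [1, 1, 1, 1]])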

@tofuwen (Contributor) left a comment:

Please add comments; other than that, I think this PR is good.


@tofuwen (Contributor) commented Sep 27, 2022

And btw, did you check that the p-values / coefficients produced by the codebase make sense? I remember we once found a bug by noticing p-values that didn't make sense.

@tofuwen (Contributor) commented Oct 10, 2022

To add comments on how p-values are generated, here is an example: https://github.com/cmu-phil/causal-learn/pull/59/files

Something like: "# All the benchmark results of loaded files (e.g. "./TestData/benchmark_returned_results/") #

are obtained from the code of causal-learn as of commit

129dcdf (07-14-2022).

We are not sure if the results are completely "correct" (reflect ground truth graph) or not.

So if you find your tests failed, it means that your modified code is logically inconsistent

with the code as of 129dcdf, but not necessarily means that your code is "wrong".

If you are sure that your modification is "correct" (e.g. fixed some bugs in 129dcdf),

please report it to us. We will then modify these benchmark results accordingly. Thanks :) #"

Basically, the idea is to make sure others know how your p-values are generated (instead of leaving them as magic numbers in the code); you just need to say that you used the code at commit xxxx to generate the p-values.

@aoqiz (Contributor Author) commented Oct 14, 2022

> And btw, did you check that the p-values / coefficients produced by the codebase make sense? I remember we once found a bug by noticing p-values that didn't make sense.

Yes, they make sense! Could you please report that bug? The current version works well for me.

@tofuwen (Contributor) commented Oct 25, 2022

> > And btw, did you check that the p-values / coefficients produced by the codebase make sense? I remember we once found a bug by noticing p-values that didn't make sense.
>
> Yes, they make sense! Could you please report that bug? The current version works well for me.

Oh, I mean: did you check that the resulting p-values make sense? Previously we found that the KCI p-values didn't make sense, and that's how we discovered a bug in KCI, right?
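A quick spot check along these lines would be enough (a sketch; it assumes the adjacency matrix comes from thresholding the p-values at the 0.05 significance level, which is consistent with the values quoted above):

import numpy as np

p_value_matrix = np.array([[0, 0.5989, 0, 0.5397], [0.0006, 0, 0.0014, 0]])
adj_matrix = np.array([[1, 0, 1, 0], [1, 1, 1, 1]])

# p-values must be valid probabilities in [0, 1] ...
assert np.all((p_value_matrix >= 0) & (p_value_matrix <= 1))
# ... and the reported adjacency should agree with thresholding them at the
# assumed 0.05 significance level.
assert np.array_equal(adj_matrix, (p_value_matrix < 0.05).astype(int))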

@tofuwen (Contributor) commented Oct 26, 2022

Once you have checked the p-values and think they make sense, I think we can push this PR.

@aoqiz (Contributor Author) commented Oct 26, 2022

> Once you have checked the p-values and think they make sense, I think we can push this PR.

Yes, they make sense. I think we can push this PR. @tofuwen @kunwuz could you please help push this PR? Thanks!

@tofuwen (Contributor) commented Oct 26, 2022

@kunwuz could you help merge this PR?

kunwuz merged commit b0a6b6f into py-why:main on Oct 26, 2022.