Added some unit tests for PC #44
Conversation
Thanks @MarkDana for the great work! This is great progress toward making causal-learn more reliable! :) And it looks so much better than the original one. Now I am much more confident about refactoring other files. :) Some general questions:

Some nits:

Some general process nits that may make the review process more efficient:

Regarding huge code changes, one industry rule of thumb is that a PR with more than 200 significant lines is auto-rejected. Of course we don't need to follow this strictly, but in general, keeping each PR small and limited to one change at a time makes reviewing and tracking easier. :)

Thanks again for the great work! I actually didn't expect you to be so productive: I thought the PR would only contain simple simulation tests (which would already be great), but you also finished the benchmark tests. Great!!!
Hi @tofuwen, thanks so much for your comments. I learned a lot :) Regarding your questions:
Me too. It is therefore very likely that the benchmark result files will require frequent changes in the near future (as stated in the comment block here). I am still thinking of a convenient way to update those benchmark files and the corresponding MD5 verifications at every usage (e.g. here).
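For illustration, a minimal sketch of the kind of MD5 check under discussion; the file name and digest table below are hypothetical placeholders, not the actual values used in the tests:

```python
import hashlib

# Hypothetical table mapping benchmark files to their expected MD5 digests;
# it would need to be regenerated whenever a benchmark file changes.
BENCHMARK_MD5 = {
    "TestData/benchmark_returned_results/example_result.txt":
        "d41d8cd98f00b204e9800998ecf8427e",  # placeholder digest
}

def verify_benchmark(path: str) -> bool:
    """Return True if the file at `path` matches its recorded MD5 digest."""
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return digest == BENCHMARK_MD5[path]
```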
Great suggestion! I'll do it soon. Currently there already seem to be some results returned by the Tetrad Java code (e.g. tetrad_graph_discrete_10.txt) in …
Cool! I'll do so. And we may also need more complex test cases; the algorithm may just happen to return the correct graph on simple ones.
I found them at e.g. graph.10.txt. But, similar to the above, I don't know the respective code used to generate the datasets. @jdramsey @kunwuz how about we also include code to generate e.g. data_linear_10.txt from graph.10.txt? A rough sketch of what that might look like is below.
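A sketch of such a generation script, purely as an assumption about what the original configuration might have been; the format of graph.10.txt, the edge-weight range, and the noise model are guesses, not the settings actually used to produce data_linear_10.txt:

```python
import numpy as np

def simulate_linear_gaussian(adj, n_samples=10000, seed=42):
    """Simulate data from a linear-Gaussian SEM given a DAG adjacency
    matrix `adj`, where adj[i, j] != 0 denotes an edge i -> j.
    Assumes the node ordering of `adj` is already topological."""
    rng = np.random.default_rng(seed)
    d = adj.shape[0]
    # Hypothetical choice: random edge weights in [0.5, 2.0], unit Gaussian noise.
    weights = (adj != 0) * rng.uniform(0.5, 2.0, size=adj.shape)
    data = np.zeros((n_samples, d))
    for j in range(d):
        parents = np.flatnonzero(weights[:, j])
        data[:, j] = data[:, parents] @ weights[parents, j] \
                     + rng.standard_normal(n_samples)
    return data

# e.g. np.savetxt("data_linear_10.txt", simulate_linear_gaussian(adj))
```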
Except for …
It's a nice suggestion for general unit tests, but for the tests included in this PR, if I understand correctly, we only need every test to pass? There is no need for a human to verify or compare the results, since that is done by assertions. Thanks so much for your great work 👍!
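To illustrate the point about assertions replacing manual comparison, a minimal hypothetical test case; the file paths, the test name, and the cg.G.graph attribute layout are assumptions to be checked against the actual tests/TestPC.py and the current causal-learn API:

```python
import unittest
import numpy as np
from causallearn.search.ConstraintBased.PC import pc

class TestPCExample(unittest.TestCase):
    def test_pc_matches_benchmark(self):
        # Hypothetical file paths under the tests/ folder.
        data = np.loadtxt("TestData/data_linear_10.txt")
        truth = np.loadtxt("TestData/example_benchmark_result.txt")
        cg = pc(data)
        # The assertion does the verification; no human inspection needed.
        np.testing.assert_array_equal(cg.G.graph, truth)
```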
Thanks so much for your hard and elegant work, and thanks Yewen for the review and the brilliant comments! Regarding the datasets/results you mentioned, if I remember correctly, @chenweiDelight may be familiar with their detailed configuration.
Updated files:
tests/TestPC.py, and some .txt files (data, graph, benchmark results, etc.) to load in tests/TestData.

Test plan:
To run all unit tests:

```
cd tests/
python -m unittest TestPC
```

Or to run a specific unit test:
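(The exact test method names live in tests/TestPC.py; the one below is a hypothetical example of the invocation pattern.)

```
python -m unittest TestPC.TestPC.test_pc_simulated_linear_gaussian
```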
The expected result is that all tests pass.
What these unit tests can do:
TODO:

- … TestMVPC*.py.
- … (stable, uc_rule, and uc_priority) as in this test file (see the sketch below).
- … tests/ folder.
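Regarding covering more parameter combinations, a sketch of what such a sweep might look like; it assumes the pc entry point accepts these keyword arguments and that fisherz is importable from causallearn.utils.cit, so the exact signature and the benchmark comparison should be checked against the actual code:

```python
import itertools
import numpy as np
from causallearn.search.ConstraintBased.PC import pc
from causallearn.utils.cit import fisherz

data = np.loadtxt("TestData/data_linear_10.txt")  # hypothetical path

# Sweep the parameter combinations named in the TODO; the value ranges
# below are assumptions about what each parameter accepts.
for stable, uc_rule, uc_priority in itertools.product(
        [True, False], [0, 1, 2], [-1, 0, 1, 2, 3, 4]):
    cg = pc(data, alpha=0.05, indep_test=fisherz,
            stable=stable, uc_rule=uc_rule, uc_priority=uc_priority)
    # Each returned graph would then be asserted against a benchmark result.
```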