diff --git a/docs/source/independence_tests_index/chisq.rst b/docs/source/independence_tests_index/chisq.rst
index 29d0f10c..beb41b33 100644
--- a/docs/source/independence_tests_index/chisq.rst
+++ b/docs/source/independence_tests_index/chisq.rst
@@ -5,14 +5,29 @@ Chi-Square test
 
 Perform an independence test on discrete variables using Chi-Square test.
 
-(We have updated the independence test class and the usage example hasn't been updated yet. For new class, please refer to `TestCIT.py `_ or `TestCIT_KCI.py `_.)
-
 Usage
 --------
 .. code-block:: python
 
+    from causallearn.utils.cit import CIT
+    chisq_obj = CIT(data, "chisq")  # construct a CIT instance with data and method name
+    pValue = chisq_obj(X, Y, S)
+
+Please note that since release `v0.1.2.8 `_, we have refactored the independence tests from functions to classes, which enables speed gains and more flexible parameter specification.
+
+You may need to adjust your code accordingly. Specifically, if you are
+
++ running a constraint-based algorithm from end to end: you don't need to change anything; old code is still compatible. For example,
+.. code-block:: python
+
+    from causallearn.search.ConstraintBased.PC import pc
     from causallearn.utils.cit import chisq
-    p = chisq(data, X, Y, conditioning_set)
+    cg = pc(data, 0.05, chisq)
+
++ explicitly calculating the p-value of a test: you need to declare the :code:`chisq_obj` and then call it as above, instead of using :code:`chisq(data, X, Y, condition_set)` as before. Note that :code:`causallearn.utils.cit.chisq` is now the string :code:`"chisq"`, instead of a function.
+
+Please see `CIT.py `_
+for more details on the implementation of the (conditional) independence tests.
 
 
 Parameters
@@ -20,10 +35,9 @@ Parameters
 **data**: numpy.ndarray, shape (n_samples, n_features). Data, where n_samples is the number of samples
 and n_features is the number of features.
 
-**X, Y and condition_set**: column indices of data.
+**method**: string, "chisq".
 
-**G_sq**: True means using G-Square test;
- False means using Chi-Square test.
+**kwargs**: e.g., :code:`cache_path`. See :ref:`Advanced Usages `.
 
 Returns
 -------------
diff --git a/docs/source/independence_tests_index/fisherz.rst b/docs/source/independence_tests_index/fisherz.rst
index 9a99144a..cb8e0072 100644
--- a/docs/source/independence_tests_index/fisherz.rst
+++ b/docs/source/independence_tests_index/fisherz.rst
@@ -5,24 +5,40 @@ Fisher-z test
 
 Perform an independence test using Fisher-z's test [1]_. This test is optimal for linear-Gaussian data.
 
-(We have updated the independence test class and the usage example hasn't been updated yet. For new class, please refer to `TestCIT.py `_ or `TestCIT_KCI.py `_.)
-
 Usage
 --------
 .. code-block:: python
 
+    from causallearn.utils.cit import CIT
+    fisherz_obj = CIT(data, "fisherz")  # construct a CIT instance with data and method name
+    pValue = fisherz_obj(X, Y, S)
+
+Please note that since release `v0.1.2.8 `_, we have refactored the independence tests from functions to classes, which enables speed gains and more flexible parameter specification.
+
+You may need to adjust your code accordingly. Specifically,
+
++ If you are running a constraint-based algorithm from end to end: you don't need to change anything; old code is still compatible. For example,
+.. code-block:: python
+
+    from causallearn.search.ConstraintBased.PC import pc
     from causallearn.utils.cit import fisherz
-    p = fisherz(data, X, Y, condition_set, correlation_matrix)
+    cg = pc(data, 0.05, fisherz)
+
++ If you are explicitly calculating the p-value of a test: you need to declare the :code:`fisherz_obj` and then call it as above, instead of using :code:`fisherz(data, X, Y, condition_set)` as before. Note that :code:`causallearn.utils.cit.fisherz` is now the string :code:`"fisherz"`, instead of a function.
+
+
+Please see `CIT.py `_
+for more details on the implementation of the (conditional) independence tests.
 
 
 Parameters
 ------------
 **data**: numpy.ndarray, shape (n_samples, n_features). Data, where n_samples is the number of samples
 and n_features is the number of features.
 
-**X, Y and condition_set**: column indices of data.
+**method**: string, "fisherz".
 
-**correlation_matrix**: correlation matrix; None means without the parameter of correlation matrix.
+**kwargs**: e.g., :code:`cache_path`. See :ref:`Advanced Usages `.
 
 Returns
 -------------
diff --git a/docs/source/independence_tests_index/gsq.rst b/docs/source/independence_tests_index/gsq.rst
index 0124d121..9a3bd402 100644
--- a/docs/source/independence_tests_index/gsq.rst
+++ b/docs/source/independence_tests_index/gsq.rst
@@ -5,24 +5,38 @@ G-Square test
 
 Perform an independence test using G-Square test [1]_. This test is based on the log likelihood ratio test.
 
-(We have updated the independence test class and the usage example hasn't been updated yet. For new class, please refer to `TestCIT.py `_ or `TestCIT_KCI.py `_.)
-
-
 Usage
 --------
 .. code-block:: python
 
+    from causallearn.utils.cit import CIT
+    gsq_obj = CIT(data, "gsq")  # construct a CIT instance with data and method name
+    pValue = gsq_obj(X, Y, S)
+
+Please note that since release `v0.1.2.8 `_, we have refactored the independence tests from functions to classes, which enables speed gains and more flexible parameter specification.
+
+You may need to adjust your code accordingly. Specifically, if you are
+
++ running a constraint-based algorithm from end to end: you don't need to change anything; old code is still compatible. For example,
+.. code-block:: python
+
+    from causallearn.search.ConstraintBased.PC import pc
     from causallearn.utils.cit import gsq
-    p = gsq(data, X, Y, conditioning_set)
+    cg = pc(data, 0.05, gsq)
+
++ explicitly calculating the p-value of a test: you need to declare the :code:`gsq_obj` and then call it as above, instead of using :code:`gsq(data, X, Y, condition_set)` as before. Note that :code:`causallearn.utils.cit.gsq` is now the string :code:`"gsq"`, instead of a function.
+
+Please see `CIT.py `_
+for more details on the implementation of the (conditional) independence tests.
 
 
 Parameters
 -------------
 **data**: numpy.ndarray, shape (n_samples, n_features). Data, where n_samples is the number of samples
 and n_features is the number of features.
 
-**X, Y and condition_set**: column indices of data.
+**method**: string, "gsq".
 
-**G_sq**: True means using G-Square test; False means using Chi-Square test.
+**kwargs**: e.g., :code:`cache_path`. See :ref:`Advanced Usages `.
 
 Returns
 ---------------
diff --git a/docs/source/independence_tests_index/kci.rst b/docs/source/independence_tests_index/kci.rst
index 66fddc80..f3f73513 100644
--- a/docs/source/independence_tests_index/kci.rst
+++ b/docs/source/independence_tests_index/kci.rst
@@ -7,37 +7,65 @@ Kernel-based conditional independence (KCI) test
 and independence test [1]_. To test if x and y are conditionally or unconditionally independent on Z. For unconditional independence tests, Z is set to the empty set.
 
-(We have updated the independence test class and the usage example hasn't been updated yet. For new class, please refer to `TestCIT.py `_ or `TestCIT_KCI.py `_.)
-
-
 Usage
 --------
 .. code-block:: python
 
+    from causallearn.utils.cit import CIT
+    kci_obj = CIT(data, "kci")  # construct a CIT instance with data and method name
+    pValue = kci_obj(X, Y, S)
+
+The above code runs KCI with the default parameters. If you would like to specify some parameters of KCI, you may do so, e.g.,
+
+.. code-block:: python
+
+    kci_obj = CIT(data, "kci", kernelZ='Polynomial', approx=False, est_width='median', ...)
+
+See `KCI.py `_
+for more details on the parameter options of the KCI test.
+
+
+Please note that since release `v0.1.2.8 `_, we have refactored the independence tests from functions to classes, which enables speed gains and more flexible parameter specification.
+
+You may need to adjust your code accordingly. Specifically, if you are
+
++ running a constraint-based algorithm from end to end: you don't need to change anything; old code is still compatible. For example,
+.. code-block:: python
+
+    from causallearn.search.ConstraintBased.PC import pc
     from causallearn.utils.cit import kci
-    p = kci(data, X, Y, condition_set, kernelX, kernelY, kernelZ, est_width, polyd, kwidthx, kwidthy, kwidthz)
+    cg = pc(data, 0.05, kci)
+
++ explicitly calculating the p-value of a test: you need to declare the :code:`kci_obj` and then call it as above, instead of using :code:`kci(data, X, Y, condition_set)` as before. Note that :code:`causallearn.utils.cit.kci` is now the string :code:`"kci"`, instead of a function.
+
+Please see `CIT.py `_
+for more details on the implementation of the (conditional) independence tests.
 
 
 Parameters
--------------
+------------
 **data**: numpy.ndarray, shape (n_samples, n_features). Data, where n_samples is the number of samples
 and n_features is the number of features.
 
-**X, Y, and condition_set**: column indices of data. condition_set could be None.
+**method**: string, "kci".
+
+**kwargs**:
 
-**KernelX/Y/Z (condition_set)**: ['GaussianKernel', 'LinearKernel', 'PolynomialKernel'].
-(For 'PolynomialKernel', the default degree is 2. Currently, users can change it by setting the 'degree' of 'class PolynomialKernel()'.
++ Either for specifying parameters of KCI, including:
 
-**est_width**: set kernel width for Gaussian kernels.
+  **KernelX/Y/Z (condition_set)**: ['GaussianKernel', 'LinearKernel', 'PolynomialKernel']. (For 'PolynomialKernel', the default degree is 2; users can change it by setting the 'degree' of 'class PolynomialKernel()'.)
+
+  **est_width**: set kernel width for Gaussian kernels.
 - 'empirical': set kernel width using empirical rules (default).
 - 'median': set kernel width using the median trick.
-**polyd**: polynomial kernel degrees (default=2).
+  **polyd**: polynomial kernel degrees (default=2).
+
+  **kwidthx/y/z**: kernel width for data x/y/z (standard deviation sigma).
-**kwidthx**: kernel width for data x (standard deviation sigma).
+  **and more**: see `KCI.py `_ for details.
-**kwidthy**: kernel width for data y (standard deviation sigma).
++ Or for advanced usages of CIT, e.g., :code:`cache_path`. See :ref:`Advanced Usages `.
-**kwidthz**: kernel width for data z (standard deviation sigma).
 
 Returns
 -----------
diff --git a/docs/source/independence_tests_index/mvfisherz.rst b/docs/source/independence_tests_index/mvfisherz.rst
index e0773afe..5dd64ca6 100644
--- a/docs/source/independence_tests_index/mvfisherz.rst
+++ b/docs/source/independence_tests_index/mvfisherz.rst
@@ -6,23 +6,39 @@ Missing-value Fisher-z test
 
 Perform a testwise-deletion Fisher-z independence test to data sets with missing values. With testwise-deletion, the test makes use of all data points that do not have missing values for the variables involved in the test.
 
-(We have updated the independence test class and the usage example hasn't been updated yet. For new class, please refer to `TestCIT.py `_ or `TestCIT_KCI.py `_.)
-
-
 Usage
 --------
 .. code-block:: python
 
+    from causallearn.utils.cit import CIT
+    mv_fisherz_obj = CIT(data_with_missingness, "mv_fisherz")  # construct a CIT instance with data and method name
+    pValue = mv_fisherz_obj(X, Y, S)
+
+Please note that since release `v0.1.2.8 `_, we have refactored the independence tests from functions to classes, which enables speed gains and more flexible parameter specification.
+
+You may need to adjust your code accordingly. Specifically, if you are
+
++ running a constraint-based algorithm from end to end: you don't need to change anything; old code is still compatible. For example,
+.. code-block:: python
+
+    from causallearn.search.ConstraintBased.PC import pc
     from causallearn.utils.cit import mv_fisherz
-    p = mv_fisherz(mvdata, X, Y, condition_set)
+    cg = pc(data_with_missingness, 0.05, mv_fisherz)
+
++ explicitly calculating the p-value of a test: you need to declare the :code:`mv_fisherz_obj` and then call it as above, instead of using :code:`mv_fisherz(data, X, Y, condition_set)` as before. Note that :code:`causallearn.utils.cit.mv_fisherz` is now the string :code:`"mv_fisherz"`, instead of a function.
+
+Please see `CIT.py `_
+for more details on the implementation of the (conditional) independence tests.
 
 
 Parameters
----------------
-**mvdata**: numpy.ndarray, shape (n_samples, n_features). Data with missing value, where n_samples is the number of samples
+------------
+**data**: numpy.ndarray, shape (n_samples, n_features). Data, where n_samples is the number of samples
 and n_features is the number of features.
 
-**X, Y and condition_set**: column indices of data.
+**method**: string, "mv_fisherz".
+
+**kwargs**: e.g., :code:`cache_path`. See :ref:`Advanced Usages `.
 
 Returns
 ----------------
diff --git a/docs/source/search_methods_index/Constraint-based causal discovery methods/PC.rst b/docs/source/search_methods_index/Constraint-based causal discovery methods/PC.rst
index 23eee23a..80b5bdcf 100644
--- a/docs/source/search_methods_index/Constraint-based causal discovery methods/PC.rst
+++ b/docs/source/search_methods_index/Constraint-based causal discovery methods/PC.rst
@@ -35,6 +35,33 @@ Usage
 
 Visualization using pydot is recommended.
 If specific label names are needed, please refer to this `usage example `_ (e.g., 'cg.draw_pydot_graph(labels=["A", "B", "C"])' or 'GraphUtils.to_pydot(cg.G, labels=["A", "B", "C"])').
 
++++++++++++++++
+Advanced Usages
++++++++++++++++
++ If you would like to specify parameters for the (conditional) independence test (if available), you may directly pass the parameters to the :code:`pc` call. E.g.,
+
+  .. code-block:: python
+
+     from causallearn.search.ConstraintBased.PC import pc
+     from causallearn.utils.cit import kci
+     cg = pc(data, 0.05, kci, kernelZ='Polynomial', approx=False, est_width='median', ...)
+
++ If your graph is large and/or your independence test is slow (e.g., KCI), you may want to cache the p-value results to a local checkpoint. By reading values from this checkpoint, no repeated computation is wasted when you resume from the checkpoint or just fine-tune some PC parameters. This can be achieved by specifying :code:`cache_path`. E.g.,
+
+  .. code-block:: python
+
+     citest_cache_file = "/my/path/to/citest_cache_dataname_kci.json"  # .json file
+     cg1 = pc(data, 0.05, kci, cache_path=citest_cache_file)  # after the long run
+
+     # just fine-tune uc_rule. p-values are reused, and thus cg2 is done in almost no time.
+     cg2 = pc(data, 0.05, kci, cache_path=citest_cache_file, uc_rule=1)
+  ..
+
+  If :code:`cache_path` does not exist in your local file system, a new one will be created. Otherwise, the cache will first be loaded from the json file into the CIT class and used during the run. Note that 1) the data hash and parameters hash are checked at loading time to ensure consistency, and 2) during the run, the cache is saved to the local file every 30 seconds.
+
++ The above advanced usages also apply to other constraint-based methods, e.g., FCI and CDNOD.
+
+
 Parameters
 -------------------
 **data**: numpy.ndarray, shape (n_samples, n_features).
 Data, where n_samples is the number of samples
@@ -42,7 +69,7 @@ and n_features is the number of features.
 
 **alpha**: desired significance level (float) in (0, 1). Default: 0.05.
 
-**indep_test**: Independence test method function. Default: 'fisherz'.
+**indep_test**: string, name of the independence test method. Default: 'fisherz'.
 - ":ref:`fisherz `": Fisher's Z conditional independence test.
 - ":ref:`chisq `": Chi-squared conditional independence test.
 - ":ref:`gsq `": G-squared conditional independence test.
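As a side note for reviewers: the refactored :code:`fisherz` test documented above computes a partial-correlation-based statistic. The following is a minimal from-scratch sketch of that computation using only numpy and scipy; the helper name :code:`fisherz_pvalue` and the synthetic data are hypothetical illustrations, not part of causal-learn (see CIT.py in the repository for the actual implementation).

```python
# Minimal sketch (NOT causal-learn's implementation) of a Fisher-z CI test:
# take the partial correlation of X and Y given S, apply Fisher's z-transform,
# and compare the scaled statistic against a standard normal.
import numpy as np
from scipy.stats import norm

def fisherz_pvalue(data, X, Y, S=()):
    """Two-sided p-value for X independent of Y given S, under linear-Gaussian assumptions."""
    idx = [X, Y] + list(S)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.inv(corr)                          # precision (inverse correlation) matrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])  # partial correlation of X, Y given S
    z = 0.5 * np.log((1 + r) / (1 - r))                 # Fisher's z-transform
    stat = np.sqrt(data.shape[0] - len(S) - 3) * abs(z)
    return 2 * (1 - norm.cdf(stat))

# Synthetic example: x and y are both driven by z, so they are
# marginally dependent but conditionally independent given z.
rng = np.random.default_rng(0)
z = rng.normal(size=2000)
x = z + 0.1 * rng.normal(size=2000)
y = z + 0.1 * rng.normal(size=2000)
data = np.column_stack([x, y, z])

p_marginal = fisherz_pvalue(data, 0, 1)         # very small: x and y look dependent
p_given_z = fisherz_pvalue(data, 0, 1, S=(2,))  # not small: independent given z
```

With the refactored API in this diff, the equivalent causal-learn calls would be :code:`CIT(data, "fisherz")(0, 1, ())` and :code:`CIT(data, "fisherz")(0, 1, (2,))`.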