.. py:module:: dit.multivariate.caekl_mutual_information
The Chan-AlBashabsheh-Ebrahimi-Kaced-Liu mutual information :cite:`chan2015multivariate` is one possible generalization of the :ref:`mutual_information`.
:math:`\J{X_{0:n}}` is the smallest :math:`\gamma` such that:

.. math::

   \H{X_{0:n}} - \gamma = \sum_{C \in \mathcal{P}} \left[ \H{X_C} - \gamma \right]

for some non-trivial partition :math:`\mathcal{P}` of :math:`\left\{0:n\right\}`. For example, the CAEKL mutual information for the ``xor`` distribution is :math:`\frac{1}{2}`, because the joint entropy is 2 bits, each of the three marginal entropies is 1 bit, and :math:`2 - \frac{1}{2} = 3 \left( 1 - \frac{1}{2} \right)`.
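Solving the defining equation for the partition of the ``xor`` variables into singletons makes the arithmetic explicit:

.. math::

   2 - \gamma = 3 \left( 1 - \gamma \right)
   \quad \Longrightarrow \quad
   2\gamma = 1
   \quad \Longrightarrow \quad
   \gamma = \frac{1}{2}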
.. ipython::

   In [1]: from dit.multivariate import caekl_mutual_information as J

   In [2]: d = dit.example_dists.Xor()

   @doctest float
   In [3]: J(d)
   Out[3]: 0.5
A more concrete way of defining the CAEKL mutual information is:
.. math::

   \J{X_{0:n}} = \min_{\mathcal{P} \in \Pi} ~ \operatorname{I}_{\mathcal{P}}\left[ X_{0:n} \right]

where :math:`\operatorname{I}_{\mathcal{P}}` is the :ref:`total_correlation` of the partition, normalized by one less than the number of blocks:

.. math::

   \operatorname{I}_{\mathcal{P}}\left[ X_{0:n} \right] = \frac{1}{|\mathcal{P}| - 1} \left[ \sum_{C \in \mathcal{P}} \H{X_C} - \H{X_{0:n}} \right]

and :math:`\Pi` is the set of all non-trivial partitions of :math:`\left\{0:n\right\}`. The normalization follows from solving the defining equation above for :math:`\gamma`: moving the :math:`|\mathcal{P}|` copies of :math:`\gamma` to one side gives :math:`\gamma \left( |\mathcal{P}| - 1 \right) = \sum_{C \in \mathcal{P}} \H{X_C} - \H{X_{0:n}}`.
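The minimization can also be sketched from scratch, without ``dit``, directly from the defining equation. This is an illustrative implementation, not ``dit``'s; the helper names (``entropy``, ``marginal``, ``partitions``, ``caekl``) are assumptions for the sketch:

.. code-block:: python

   from math import log2

   # Joint pmf of the xor example: (x, y, x ^ y), each outcome probability 1/4.
   pmf = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}

   def entropy(pmf):
       """Shannon entropy in bits."""
       return -sum(p * log2(p) for p in pmf.values() if p > 0)

   def marginal(pmf, idxs):
       """Marginal pmf over the variables at positions `idxs`."""
       out = {}
       for outcome, p in pmf.items():
           key = tuple(outcome[i] for i in idxs)
           out[key] = out.get(key, 0.0) + p
       return out

   def partitions(items):
       """All set partitions of `items` (fine for small n)."""
       if len(items) == 1:
           yield [items]
           return
       first, rest = items[0], items[1:]
       for part in partitions(rest):
           for i in range(len(part)):
               yield part[:i] + [[first] + part[i]] + part[i + 1:]
           yield [[first]] + part

   def caekl(pmf):
       """Minimize gamma over non-trivial partitions, where gamma solves
       H(X_{0:n}) - gamma = sum_C [H(X_C) - gamma], i.e.
       gamma = (sum_C H(X_C) - H(X_{0:n})) / (|P| - 1)."""
       n = len(next(iter(pmf)))
       H = entropy(pmf)
       best = float("inf")
       for P in partitions(list(range(n))):
           if len(P) < 2:  # skip the trivial one-block partition
               continue
           excess = sum(entropy(marginal(pmf, C)) for C in P) - H
           best = min(best, excess / (len(P) - 1))
       return best

   print(caekl(pmf))  # 0.5, matching J(d) above

For the ``xor`` distribution the minimum is attained by the partition into singletons, whose excess entropy of 1 bit is spread over two degrees of freedom.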
.. todo:: Include a nice i-diagram of this quantity, if possible.
.. autofunction:: caekl_mutual_information