v0.14.1 release (#672)
jeongyoonlee committed Aug 28, 2023
1 parent a58e838 commit b7dbce5
Showing 5 changed files with 40 additions and 13 deletions.
31 changes: 31 additions & 0 deletions docs/changelog.rst
@@ -3,6 +3,37 @@
Changelog
=========

0.14.1 (Aug 2023)
-----------------
* This release mainly addressed installation issues and updated documentation accordingly.
* We have 4 new contributors: @bsaunders27, @xhulianoThe1, @zpppy, and @bsaunders23. Thanks for your contributions!

Updates
~~~~~~~
* Update the python-publish workflow file to fix the package publish Gi… by @jeongyoonlee in https://github.com/uber/causalml/pull/633
* Update Cython dependency by @alexander-pv in https://github.com/uber/causalml/pull/640
* Fix for builds on Mac M1 infrastructure by @bsaunders27 in https://github.com/uber/causalml/pull/641
* code cleanups by @xhulianoThe1 in https://github.com/uber/causalml/pull/634
* support valid error early stopping by @zpppy in https://github.com/uber/causalml/pull/614
* fix: update to ``envs/`` conda build for precompiled M1 installs by @bsaunders27 in https://github.com/uber/causalml/pull/646
* Installation updates to README and .github/workflows by @ras44 in https://github.com/uber/causalml/pull/637
* fix: simulate_randomized_trial by @bsaunders23 in https://github.com/uber/causalml/pull/656
* issue 252 by @vincewu51 in https://github.com/uber/causalml/pull/660
* ras44/651 graph viz, resolves #651 by @ras44 in https://github.com/uber/causalml/pull/661
* linted with black by @ras44 in https://github.com/uber/causalml/pull/663
* Fix issue 650 by @vincewu51 in https://github.com/uber/causalml/pull/659
* Install graphviz in the workflow builds by @jeongyoonlee in https://github.com/uber/causalml/pull/668
* Update docs/installation.rst by @jeongyoonlee in https://github.com/uber/causalml/pull/667
* Schedule monthly PyPI install tests by @jeongyoonlee in https://github.com/uber/causalml/pull/670

New contributors
~~~~~~~~~~~~~~~~
* @bsaunders27 made their first contribution in https://github.com/uber/causalml/pull/641
* @xhulianoThe1 made their first contribution in https://github.com/uber/causalml/pull/634
* @zpppy made their first contribution in https://github.com/uber/causalml/pull/614
* @bsaunders23 made their first contribution in https://github.com/uber/causalml/pull/656


0.14.0 (July 2023)
------------------
- CausalML surpassed `2MM downloads <https://pepy.tech/project/causalml>`_ on PyPI and `4,100 stars <https://github.com/uber/causalml/stargazers>`_ on GitHub. Thanks for choosing CausalML and supporting us on GitHub.
14 changes: 5 additions & 9 deletions docs/conf.py
@@ -12,9 +12,9 @@
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys
import os
import matplotlib
import sphinx_rtd_theme
import importlib.metadata

matplotlib.use("agg")

@@ -33,9 +33,6 @@
# version is used.
# sys.path.insert(0, project_root)

import causalml
import sphinx_rtd_theme

# -- General configuration ---------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
@@ -78,7 +75,6 @@
# the built documents.
#
# The short X.Y version.
import importlib.metadata

version = importlib.metadata.version("causalml")
# The full version, including alpha/beta/rc tags.
@@ -212,11 +208,11 @@

latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# 'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
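
A minimal sketch of the Sphinx version convention that the conf.py change above relies on: both version strings now come from the installed package metadata rather than from importing causalml. The ``release`` assignment below is an assumption for illustration, since the corresponding line is elided in this diff.

.. code-block:: python

    import importlib.metadata

    # The short X.Y version (taken directly from the installed package
    # metadata, as in the diff above).
    version = importlib.metadata.version("causalml")
    # The full version, including alpha/beta/rc tags; the actual assignment is
    # elided above, so setting release = version is an assumption here.
    release = version
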
2 changes: 1 addition & 1 deletion docs/interpretation.rst
@@ -86,7 +86,7 @@ Please see below for how to read the plot, and `uplift_tree_visualization.ipynb
- validation uplift score: all the information above is static once the tree is trained (based on the trained trees), while the validation uplift score represents the treatment effect on the testing data when the fill() method is used. This score can be compared with the training uplift score to evaluate whether the tree has an overfitting issue.

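To make the validation uplift score above concrete, here is a minimal sketch of filling a trained uplift tree with held-out data before plotting it. It assumes the ``UpliftTreeClassifier.fill()`` and ``uplift_tree_plot()`` APIs referenced in ``uplift_tree_visualization.ipynb``; the synthetic dataset and hyperparameters are illustrative only.

.. code-block:: python

    from sklearn.model_selection import train_test_split

    from causalml.dataset import make_uplift_classification
    from causalml.inference.tree import UpliftTreeClassifier, uplift_tree_plot

    # Illustrative synthetic data with a 'control' group and
    # 'treatment_group_key' / 'conversion' columns.
    df, x_names = make_uplift_classification()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=42)

    uplift_model = UpliftTreeClassifier(max_depth=4, control_name="control")
    uplift_model.fit(df_train[x_names].values,
                     treatment=df_train["treatment_group_key"].values,
                     y=df_train["conversion"].values)

    # fill() re-populates the trained tree's node statistics with the test
    # data, producing the validation uplift score that can be compared with
    # the training uplift score to check for overfitting.
    uplift_model.fill(X=df_test[x_names].values,
                      treatment=df_test["treatment_group_key"].values,
                      y=df_test["conversion"].values)

    # Rendering the tree requires graphviz (see PR #668 above).
    graph = uplift_tree_plot(uplift_model.fitted_uplift_tree, x_names)
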
Uplift Tree Feature Importances
-------------------------
-------------------------------

.. code-block:: python
4 changes: 2 additions & 2 deletions docs/validation.rst
@@ -90,7 +90,7 @@ Mechanism 4
|
Validation with Uplift Curve (AUUC)
----------------------------------
-----------------------------------

We can validate the estimation by evaluating and comparing the uplift gains with AUUC (Area Under the Uplift Curve), which is computed from cumulative gains. Please find more details in the `meta_learners_with_synthetic_data.ipynb example notebook <https://github.com/uber/causalml/blob/master/examples/meta_learners_with_synthetic_data.ipynb>`_.

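As a rough illustration of the AUUC computation just described, the sketch below ranks units by an uplift score and summarizes the cumulative gains with ``plot_gain`` and ``auuc_score`` from ``causalml.metrics``. The noisy ground-truth score stands in for a fitted meta-learner's predictions and is an assumption for illustration only.

.. code-block:: python

    import numpy as np
    import pandas as pd

    from causalml.dataset import synthetic_data
    from causalml.metrics import auuc_score, plot_gain

    # Synthetic data: y is the outcome, treatment the assignment, tau the true CATE.
    y, X, treatment, tau, b, e = synthetic_data(mode=1, n=10000, p=8, sigma=1.0)

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "y": y,           # observed outcome
        "w": treatment,   # treatment indicator
        # Stand-in uplift score; in practice this column would hold a
        # meta-learner's predicted CATE (e.g. from fit_predict()).
        "noisy_cate": tau + rng.normal(0.0, 1.0, size=tau.shape),
    })

    plot_gain(df, outcome_col="y", treatment_col="w")          # cumulative gain curves
    print(auuc_score(df, outcome_col="y", treatment_col="w"))  # AUUC per score column
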
@@ -115,7 +115,7 @@ We can validate the estimation by evaluating and comparing the uplift gains with
For data with skewed treatment, it is sometimes advantageous to use :ref:`Targeted maximum likelihood estimation (TMLE) for ATE` to generate the AUUC curve for validation, as TMLE provides a more accurate estimation of the ATE. Please see the `validation_with_tmle.ipynb example notebook <https://github.com/uber/causalml/blob/master/examples/validation_with_tmle.ipynb>`_ for details.

Validation with Sensitivity Analysis
----------------------------------
------------------------------------
Sensitivity analysis aims to check the robustness of the unconfoundedness assumption. If there is hidden bias (unobserved confounders), it determines how severe the bias would have to be to change the conclusion by examining the average treatment effect estimation.

We implemented the following methods to conduct sensitivity analysis:
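
As a rough illustration of the underlying idea only (a hand-rolled placebo-treatment check, not the library's built-in sensitivity methods), the sketch below permutes the treatment assignment and re-estimates the ATE with the S-learner API; a sound estimate should shrink toward zero under the placebo.

.. code-block:: python

    import numpy as np

    from causalml.dataset import synthetic_data
    from causalml.inference.meta import LRSRegressor

    y, X, treatment, tau, b, e = synthetic_data(mode=1, n=5000, p=5, sigma=1.0)

    learner = LRSRegressor()
    ate, lb, ub = learner.estimate_ate(X=X, treatment=treatment, y=y)
    print("ATE with the real treatment assignment:", ate)

    # Placebo check: randomly permuting the treatment breaks any true effect,
    # so the re-estimated ATE should be close to zero if the analysis is sound.
    rng = np.random.default_rng(42)
    placebo_treatment = rng.permutation(treatment)
    ate_p, lb_p, ub_p = learner.estimate_ate(X=X, treatment=placebo_treatment, y=y)
    print("ATE with a placebo (permuted) treatment:", ate_p)
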
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "causalml"
version = "0.14.0"
version = "0.14.1"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
readme = { file = "README.md", content-type = "text/markdown" }

