Commit

updating readme
ericdunipace committed Jan 8, 2024
1 parent b83dc30 commit a9578a2
Showing 6 changed files with 12 additions and 2 deletions.
6 changes: 5 additions & 1 deletion README.Rmd
@@ -27,7 +27,11 @@ $$W_p(\mu,\nu) = \left(\inf_{\pi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \|x - y\|^p \, d\pi(x,y)\right)^{1/p},$$
where $\Pi(\mu,\nu)$ is the set of all joint distributions with marginals $\mu$ and $\nu$.

In our package, if $\mu$ is the original prediction from the original model, such as from a Bayesian linear regression or a neural network, then we seek to find a new prediction $\nu$ that minimizes the Wasserstein distance between the two:
-$$ \text{argmin}_\nu W_p(\mu,\nu)$$
+$$\text{argmin}_\nu W_p(\mu,\nu)^p,$$
+subject to the constraint that $\nu$ is a linear model. To reduce the number of parameters in the linear model, we add an L1 penalty to its coefficients, which shrinks the model space:
+$$\text{argmin}_\nu W_p(\mu,\nu)^p + P_\lambda(\nu),$$
+where $P_\lambda(\nu)$ is the L1 penalty on the coefficients of the linear model.


## Installation

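The diff above does not show how the minimization is carried out, but a simple device helps make the objective concrete (an illustrative assumption, not a description of the package's implementation): represent $\mu$ and $\nu$ by the same number $S$ of draws, for example predictions evaluated under shared posterior parameter draws, and pair draw $s$ of $\mu$ with draw $s$ of $\nu$. That pairing is one feasible coupling, so for the empirical distributions $\hat\mu$ and $\hat\nu$ it upper-bounds the infimum:

$$W_p(\hat\mu,\hat\nu)^p \le \frac{1}{S}\sum_{s=1}^{S} \lVert m^{(s)} - n^{(s)} \rVert^p,$$

where $m^{(s)}$ and $n^{(s)}$ are the paired draws. Minimizing a bound of this form plus $P_\lambda(\nu)$ over sparse linear models is the flavor of computation sketched in the R example after the README.md diff below.
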
8 changes: 7 additions & 1 deletion README.md
@@ -23,7 +23,13 @@ In our package, if $\mu$ is the original prediction from the
original model, such as from a Bayesian linear regression or a neural
network, then we seek to find a new prediction $\nu$ that minimizes the
Wasserstein distance between the two:
-$$ \text{argmin}_\nu W_p(\mu,\nu)$$
+$$\text{argmin}_\nu W_p(\mu,\nu)^p,$$ subject to the constraint that
+$\nu$ is a linear model. To reduce the number of parameters in the
+linear model, we add an L1 penalty to its coefficients, which shrinks
+the model space:
+$$\text{argmin}_\nu W_p(\mu,\nu)^p + P_\lambda(\nu),$$ where
+$P_\lambda(\nu)$ is the L1 penalty on the coefficients of the linear
+model.

## Installation

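To make the penalized projection concrete, here is a minimal R sketch. It is an illustration only, not the package's interface or algorithm: it assumes $p = 2$, collapses $\mu$ to its posterior mean rather than using the draw-by-draw bound above, and uses the lasso from `glmnet` as the L1-penalized linear fit; the data, sizes, and noise levels are invented for the example.

```r
# Illustrative sketch only -- not the package's actual method or API.
library(glmnet)

set.seed(1)
n <- 200; d <- 10; S <- 500
X <- matrix(rnorm(n * d), n, d)
beta_true <- c(2, -1, rep(0, d - 2))
y <- drop(X %*% beta_true) + rnorm(n)

# Stand-in for the original model's posterior predictive draws (n x S):
# noisy copies of a least-squares fit, purely for illustration.
fit0 <- lm(y ~ X - 1)
mu_draws <- replicate(S, fitted(fit0) + rnorm(n, sd = 0.1))

# Crude p = 2 surrogate: fit the posterior mean of mu with an
# L1-penalized linear model (lasso), giving sparse linear summaries nu.
mu_bar <- rowMeans(mu_draws)
proj <- glmnet(X, mu_bar, alpha = 1, intercept = FALSE)

proj$df                         # nonzero coefficients along the lambda path
coef(proj, s = proj$lambda[10]) # one sparse candidate for nu
```

Here the `lambda` path plays the role of $P_\lambda$: larger values give sparser linear summaries of the original predictions.
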
Binary file modified man/figures/README-example_continued_plot_noecho-1.png
Binary file modified man/figures/README-example_continued_plot_noecho-2.png
Binary file modified man/figures/README-r2_plots_noecho-1.png
Binary file modified man/figures/README-ridgeplots_noecho-1.png
