Commit 119d10d: last fix
ericdunipace committed Jan 10, 2024 (1 parent: f6bee20)

Showing 2 changed files with 6 additions and 6 deletions.
README.Rmd: 3 additions & 3 deletions
@@ -23,16 +23,16 @@ knitr::opts_chunk$set(
The goal of `WpProj` is to perform Wasserstein projections from the predictive distributions of any model into the space of predictive distributions of linear models. We use L1 penalties to reduce the complexity of the model space. This package employs the methods described in [Eric Dunipace and Lorenzo Trippa (2020)](https://arxiv.org/abs/2012.09999) <arXiv:2012.09999>.

The Wasserstein distance is a measure of distance between two probability distributions. It is defined as:
- $$ W_p(\mu,\nu) = \left(\inf_{\pi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \|x-y\|^p d\pi(x,y)\right)^{1/p} $$
+ $$W_p(\mu,\nu) = \left(\inf_{\pi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \|x-y\|^p d\pi(x,y)\right)^{1/p}$$
where $\Pi(\mu,\nu)$ is the set of all joint distributions with marginals $\mu$ and $\nu$.
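
For intuition, the one-dimensional empirical case has a closed form: with equally many samples from each distribution, the optimal coupling pairs order statistics, so $W_p$ is the $L^p$ distance between sorted samples. A minimal base-R sketch (the data here are illustrative and not produced by the package):

```r
# empirical p-Wasserstein distance in one dimension:
# with equal sample sizes, the optimal coupling matches sorted samples
wasserstein_1d <- function(x, y, p = 2) {
  stopifnot(length(x) == length(y))
  mean(abs(sort(x) - sort(y))^p)^(1 / p)
}

set.seed(1)
mu_draws <- rnorm(100)            # draws from mu
nu_draws <- rnorm(100, mean = 1)  # draws from nu
wasserstein_1d(mu_draws, nu_draws, p = 2)
```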

In our package, if $\mu$ is the prediction from the original model, such as from a Bayesian linear regression or a neural network, then we seek a new prediction $\nu$ that minimizes the Wasserstein distance between the two:
- $$ \text{argmin}_{\nu} W_{p}(\mu,\nu)^{p}, $$
+ $$\mathop{\text{argmin}} _ {\nu} W _ {p}(\mu,\nu) ^ {p},$$
subject to the constraint that $\nu$ is a linear model.
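
Concretely, if the original model yields $S$ posterior draws of its predictions, $\mu$ is approximated by the empirical measure over those draws, and the constraint restricts $\nu$ to empirical measures of linear predictions (our notation, loosely following the paper):

$$\mu \approx \frac{1}{S}\sum_{i=1}^{S}\delta_{\hat{y}_i}, \qquad \nu = \frac{1}{S}\sum_{i=1}^{S}\delta_{X\beta_i}, \quad \beta_i \in \mathbb{R}^p,$$

where $\hat{y}_i$ is the original model's prediction under draw $i$ and $X$ is the design matrix.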

To reduce the number of parameters, we add an L1 penalty to the coefficients of the
linear model, which shrinks the model space:
- $$ \text{argmin}_{\nu} W_{p}(\mu,\nu)^{p} + P_{\lambda}(\nu), $$
+ $$\mathop{\text{argmin}} _ {\nu} W _ {p}(\mu,\nu) ^ {p} + P_{\lambda}(\nu),$$
where $P_\lambda(\nu)$ is the L1 penalty on the coefficients of the linear model.
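
Putting the pieces together, here is a sketch of fitting the penalized projection with the package's `WpProj()` function; the simulated "posterior" is purely illustrative, and the arguments shown (`X`, `eta`, `power`, `method`, `solver`) reflect our reading of the package interface rather than a definitive recipe:

```r
library(WpProj)
set.seed(87897)

# simulate covariates and a fake posterior over coefficients
n <- 128; p <- 10; s <- 100
x <- matrix(rnorm(n * p), nrow = n, ncol = p)
post_beta <- matrix((1:p) / p, nrow = p, ncol = s) + rnorm(p * s, sd = 0.1)
post_mu <- x %*% post_beta  # predictive draws from mu, one column per draw

# project the predictions onto sparse linear models along an L1 path
fit <- WpProj(X = x, eta = post_mu, power = 2.0,
              method = "L1",    # L1-penalized projection
              solver = "lasso") # solver for the L1 method
```

Each point on the resulting penalty path trades sparsity of the linear model against Wasserstein distance to the original predictions.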


README.md: 3 additions & 3 deletions
@@ -15,21 +15,21 @@ described in [Eric Dunipace and Lorenzo Trippa

The Wasserstein distance is a measure of distance between two
probability distributions. It is defined as:
- $$ W_p(\mu,\nu) = \left(\inf_{\pi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \|x-y\|^p d\pi(x,y)\right)^{1/p} $$
+ $$W_p(\mu,\nu) = \left(\inf_{\pi \in \Pi(\mu,\nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \|x-y\|^p d\pi(x,y)\right)^{1/p}$$
where $\Pi(\mu,\nu)$ is the set of all joint distributions with
marginals $\mu$ and $\nu$.

In our package, if $\mu$ is the prediction from the original model,
such as from a Bayesian linear regression or a neural network, then we
seek a new prediction $\nu$ that minimizes the Wasserstein distance
between the two:
- $$ \text{argmin}_{\nu} W_{p}(\mu,\nu)^{p}, $$
+ $$\mathop{\text{argmin}} _ {\nu} W _ {p}(\mu,\nu) ^ {p},$$
subject to the constraint that $\nu$ is a linear model.

To reduce the number of parameters, we add an L1 penalty to the
coefficients of the linear model, which shrinks the model space:
- $$ \text{argmin}_{\nu} W_{p}(\mu,\nu)^{p} + P_{\lambda}(\nu), $$
+ $$\mathop{\text{argmin}} _ {\nu} W _ {p}(\mu,\nu) ^ {p} + P_{\lambda}(\nu),$$
where $P_\lambda(\nu)$ is the L1 penalty on the coefficients of the
linear model.

