diff --git a/D/cfm.md b/D/cfm.md
new file mode 100644
index 00000000..f1780cac
--- /dev/null
+++ b/D/cfm.md
@@ -0,0 +1,43 @@
+---
+layout: definition
+mathjax: true
+
+author: "Joram Soch"
+affiliation: "BCCN Berlin"
+e_mail: "joram.soch@bccn-berlin.de"
+date: 2021-10-21 17:01
+
+title: "General linear model"
+chapter: "Statistical Models"
+section: "Multivariate normal data"
+topic: "Inverse general linear model"
+definition: "Corresponding forward model"
+
+sources:
+ - authors: "Haufe S, Meinecke F, Görgen K, Dähne S, Haynes JD, Blankertz B, Bießmann F"
+ year: 2014
+ title: "On the interpretation of weight vectors of linear models in multivariate neuroimaging"
+ in: "NeuroImage"
+ pages: "vol. 87, pp. 96–110, eq. 3"
+ url: "https://www.sciencedirect.com/science/article/pii/S1053811913010914"
+ doi: "10.1016/j.neuroimage.2013.10.067"
+
+def_id: "D162"
+shortcut: "cfm"
+username: "JoramSoch"
+---
+
+
+**Definition:** Let there be observations $Y \in \mathbb{R}^{n \times v}$ and $X \in \mathbb{R}^{n \times p}$ and consider a weight matrix $W = f(Y,X) \in \mathbb{R}^{v \times p}$ estimated from $Y$ and $X$, such that right-multiplying $Y$ with the weight matrix gives an estimate or prediction of $X$:
+
+$$ \label{eq:bda}
+\hat{X} = Y W \; .
+$$
+
+If the columns of $\hat{X}$ are linearly independent, then
+
+$$ \label{eq:cfm}
+Y = \hat{X} A^\mathrm{T} + E \quad \text{with} \quad \hat{X}^\mathrm{T} E = 0
+$$
+
+is called the corresponding forward model relative to the weight matrix $W$.
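+
+Note that, given the constraint $\hat{X}^\mathrm{T} E = 0$ and the linear independence of the columns of $\hat{X}$, the forward model parameters are simply the least squares estimates obtained when regressing $Y$ on $\hat{X}$:
+
+$$
+A^\mathrm{T} = \left( \hat{X}^\mathrm{T} \hat{X} \right)^{-1} \hat{X}^\mathrm{T} Y \; .
+$$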
\ No newline at end of file
diff --git a/D/iglm.md b/D/iglm.md
new file mode 100644
index 00000000..a225d27a
--- /dev/null
+++ b/D/iglm.md
@@ -0,0 +1,37 @@
+---
+layout: definition
+mathjax: true
+
+author: "Joram Soch"
+affiliation: "BCCN Berlin"
+e_mail: "joram.soch@bccn-berlin.de"
+date: 2021-10-21 15:31:00
+
+title: "General linear model"
+chapter: "Statistical Models"
+section: "Multivariate normal data"
+topic: "Inverse general linear model"
+definition: "Definition"
+
+sources:
+ - authors: "Soch J, Allefeld C, Haynes JD"
+ year: 2020
+ title: "Inverse transformed encoding models – a solution to the problem of correlated trial-by-trial parameter estimates in fMRI decoding"
+ in: "NeuroImage"
+ pages: "vol. 209, art. 116449, Appendix C"
+ url: "https://www.sciencedirect.com/science/article/pii/S1053811919310407"
+ doi: "10.1016/j.neuroimage.2019.116449"
+
+def_id: "D161"
+shortcut: "iglm"
+username: "JoramSoch"
+---
+
+
+**Definition:** Let there be a [general linear model](/D/glm) of measured data $Y \in \mathbb{R}^{n \times v}$ in terms of the [design matrix](/D/glm) $X \in \mathbb{R}^{n \times p}$:
+
+$$ \label{eq:glm}
+Y = X B + E, \; E \sim \mathcal{MN}(0, V, \Sigma) \; .
+$$
+
+Then, a [linear model](/D/glm) of $X$ in terms of $Y$, under the assumption of \eqref{eq:glm}, is called an inverse general linear model.
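+
+For example, such an inverse model may take the form (cf. the [derivation of its distribution](/P/iglm-dist))
+
+$$
+X = Y W + N \; ,
+$$
+
+where $W \in \mathbb{R}^{v \times p}$ is a weight matrix and $N \in \mathbb{R}^{n \times p}$ is a noise matrix.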
\ No newline at end of file
diff --git a/D/tglm.md b/D/tglm.md
new file mode 100644
index 00000000..46bb74df
--- /dev/null
+++ b/D/tglm.md
@@ -0,0 +1,49 @@
+---
+layout: definition
+mathjax: true
+
+author: "Joram Soch"
+affiliation: "BCCN Berlin"
+e_mail: "joram.soch@bccn-berlin.de"
+date: 2021-10-21 14:43:00
+
+title: "General linear model"
+chapter: "Statistical Models"
+section: "Multivariate normal data"
+topic: "Transformed general linear model"
+definition: "Definition"
+
+sources:
+ - authors: "Soch J, Allefeld C, Haynes JD"
+ year: 2020
+ title: "Inverse transformed encoding models – a solution to the problem of correlated trial-by-trial parameter estimates in fMRI decoding"
+ in: "NeuroImage"
+ pages: "vol. 209, art. 116449, Appendix A"
+ url: "https://www.sciencedirect.com/science/article/pii/S1053811919310407"
+ doi: "10.1016/j.neuroimage.2019.116449"
+
+def_id: "D160"
+shortcut: "tglm"
+username: "JoramSoch"
+---
+
+
+**Definition:** Let there be two [general linear models](/D/glm) of measured data $Y \in \mathbb{R}^{n \times v}$ using [design matrices](/D/glm) $X \in \mathbb{R}^{n \times p}$ and $X_t \in \mathbb{R}^{n \times t}$:
+
+$$ \label{eq:glm1}
+Y = X B + E, \; E \sim \mathcal{MN}(0, V, \Sigma)
+$$
+
+$$ \label{eq:glm2}
+Y = X_t \Gamma + E_t, \; E_t \sim \mathcal{MN}(0, V, \Sigma_t)
+$$
+
+and assume that $X_t$ can be transformed into $X$ using a transformation matrix $T \in \mathbb{R}^{t \times p}$
+
+$$ \label{eq:X-Xt-T}
+X = X_t \, T
+$$
+
+where $p < t$ and $X$, $X_t$ and $T$ have full rank, i.e. $\mathrm{rk}(X) = p$, $\mathrm{rk}(X_t) = t$ and $\mathrm{rk}(T) = p$.
+
+Then, a [linear model](/D/glm) of the parameter estimates from \eqref{eq:glm2}, under the assumption of \eqref{eq:glm1}, is called a transformed general linear model.
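+
+For example (an illustrative case, not part of the definition), with $t = 4$ trial-wise regressors in $X_t$ belonging to $p = 2$ conditions, the transformation matrix $T$ may sum the trial-wise columns within each condition:
+
+$$
+T = \begin{pmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{pmatrix} \; , \quad \text{such that} \quad X = X_t \, T \in \mathbb{R}^{n \times 2} \quad \text{and} \quad \mathrm{rk}(T) = 2 = p < t \; .
+$$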
\ No newline at end of file
diff --git a/I/Table_of_Contents.md b/I/Table_of_Contents.md
index af29b209..840a6769 100644
--- a/I/Table_of_Contents.md
+++ b/I/Table_of_Contents.md
@@ -512,10 +512,23 @@ title: "Table of Contents"
2.1.3. **[Weighted least squares](/P/glm-wls)**
2.1.4. **[Maximum likelihood estimation](/P/glm-mle)**
- 2.2. Multivariate Bayesian linear regression
- 2.2.1. **[Conjugate prior distribution](/P/mblr-prior)**
- 2.2.2. **[Posterior distribution](/P/mblr-post)**
- 2.2.3. **[Log model evidence](/P/mblr-lme)**
+ 2.2. Transformed general linear model
+ 2.2.1. *[Definition](/D/tglm)*
+ 2.2.2. **[Derivation of the distribution](/P/tglm-dist)**
+ 2.2.3. **[Equivalence of parameter estimates](/P/tglm-para)**
+
+ 2.3. Inverse general linear model
+ 2.3.1. *[Definition](/D/iglm)*
+ 2.3.2. **[Derivation of the distribution](/P/iglm-dist)**
+ 2.3.3. **[Best linear unbiased estimator](/P/iglm-blue)**
+ 2.3.4. *[Corresponding forward model](/D/cfm)*
+ 2.3.5. **[Derivation of parameters](/P/cfm-para)**
+ 2.3.6. **[Proof of existence](/P/cfm-exist)**
+
+ 2.4. Multivariate Bayesian linear regression
+ 2.4.1. **[Conjugate prior distribution](/P/mblr-prior)**
+ 2.4.2. **[Posterior distribution](/P/mblr-post)**
+ 2.4.3. **[Log model evidence](/P/mblr-lme)**
3. Poisson data
diff --git a/P/cfm-exist.md b/P/cfm-exist.md
new file mode 100644
index 00000000..8026846e
--- /dev/null
+++ b/P/cfm-exist.md
@@ -0,0 +1,71 @@
+---
+layout: proof
+mathjax: true
+
+author: "Joram Soch"
+affiliation: "BCCN Berlin"
+e_mail: "joram.soch@bccn-berlin.de"
+date: 2021-10-21 17:43:00
+
+title: "Existence of the corresponding forward model"
+chapter: "Statistical Models"
+section: "Multivariate normal data"
+topic: "Inverse general linear model"
+theorem: "Proof of existence"
+
+sources:
+ - authors: "Haufe S, Meinecke F, Görgen K, Dähne S, Haynes JD, Blankertz B, Bießmann F"
+ year: 2014
+ title: "On the interpretation of weight vectors of linear models in multivariate neuroimaging"
+ in: "NeuroImage"
+ pages: "vol. 87, pp. 96–110, Appendix B"
+ url: "https://www.sciencedirect.com/science/article/pii/S1053811913010914"
+ doi: "10.1016/j.neuroimage.2013.10.067"
+
+proof_id: "P270"
+shortcut: "cfm-exist"
+username: "JoramSoch"
+---
+
+
+**Theorem:** Let there be observations $Y \in \mathbb{R}^{n \times v}$ and $X \in \mathbb{R}^{n \times p}$ and consider a weight matrix $W \in \mathbb{R}^{v \times p}$ predicting $X$ from $Y$:
+
+$$ \label{eq:bda}
+\hat{X} = Y W \; .
+$$
+
+Then, there exists a [corresponding forward model](/D/cfm).
+
+
+**Proof:** The [corresponding forward model](/D/cfm) is defined as
+
+$$ \label{eq:cfm}
+Y = \hat{X} A^\mathrm{T} + E \quad \text{with} \quad \hat{X}^\mathrm{T} E = 0
+$$
+
+and the [parameters of the corresponding forward model](/P/cfm-para) are equal to
+
+$$ \label{eq:cfm-para}
+A = \Sigma_y W \Sigma_x^{-1} \quad \text{where} \quad \Sigma_x = \hat{X}^\mathrm{T} \hat{X} \quad \text{and} \quad \Sigma_y = Y^\mathrm{T} Y \; .
+$$
+
+
+1) Because the columns of $\hat{X}$ are assumed to be linearly independent [by definition of the corresponding forward model](/D/cfm), the matrix $\Sigma_x = \hat{X}^\mathrm{T} \hat{X}$ is invertible, such that $A$ in \eqref{eq:cfm-para} is well-defined.
+
+
+2) Moreover, the solution for the matrix $A$ satisfies the [constraint of the corresponding forward model](/D/cfm) that the predicted $X$ and the errors $E$ are uncorrelated, which can be shown as follows:
+
+$$ \label{eq:X-E-0}
+\begin{split}
+\hat{X}^\mathrm{T} E &\overset{\eqref{eq:cfm}}{=} \hat{X}^\mathrm{T} \left( Y - \hat{X} A^\mathrm{T} \right) \\
+&\overset{\eqref{eq:cfm-para}}{=} \hat{X}^\mathrm{T} \left( Y - \hat{X} \, \Sigma_x^{-1} W^\mathrm{T} \Sigma_y \right) \\
+&= \hat{X}^\mathrm{T} Y - \hat{X}^\mathrm{T} \hat{X} \, \Sigma_x^{-1} W^\mathrm{T} \Sigma_y \\
+&\overset{\eqref{eq:cfm-para}}{=} \hat{X}^\mathrm{T} Y - \hat{X}^\mathrm{T} \hat{X} \left( \hat{X}^\mathrm{T} \hat{X} \right)^{-1} W^\mathrm{T} \left( Y^\mathrm{T} Y \right) \\
+&\overset{\eqref{eq:bda}}{=} (Y W)^\mathrm{T} Y - W^\mathrm{T} \left( Y^\mathrm{T} Y \right) \\
+&= W^\mathrm{T} Y^\mathrm{T} Y - W^\mathrm{T} Y^\mathrm{T} Y \\
+&= 0 \; .
+\end{split}
+$$
+
+This completes the proof.
\ No newline at end of file
diff --git a/P/cfm-para.md b/P/cfm-para.md
new file mode 100644
index 00000000..80d5a67a
--- /dev/null
+++ b/P/cfm-para.md
@@ -0,0 +1,79 @@
+---
+layout: proof
+mathjax: true
+
+author: "Joram Soch"
+affiliation: "BCCN Berlin"
+e_mail: "joram.soch@bccn-berlin.de"
+date: 2021-10-21 17:20:00
+
+title: "Parameters of the corresponding forward model"
+chapter: "Statistical Models"
+section: "Multivariate normal data"
+topic: "Inverse general linear model"
+theorem: "Derivation of parameters"
+
+sources:
+ - authors: "Haufe S, Meinecke F, Görgen K, Dähne S, Haynes JD, Blankertz B, Bießmann F"
+ year: 2014
+ title: "On the interpretation of weight vectors of linear models in multivariate neuroimaging"
+ in: "NeuroImage"
+ pages: "vol. 87, pp. 96–110, Theorem 1"
+ url: "https://www.sciencedirect.com/science/article/pii/S1053811913010914"
+ doi: "10.1016/j.neuroimage.2013.10.067"
+
+proof_id: "P269"
+shortcut: "cfm-para"
+username: "JoramSoch"
+---
+
+
+**Theorem:** Let there be observations $Y \in \mathbb{R}^{n \times v}$ and $X \in \mathbb{R}^{n \times p}$ and consider a weight matrix $W \in \mathbb{R}^{v \times p}$ predicting $X$ from $Y$:
+
+$$ \label{eq:bda}
+\hat{X} = Y W \; .
+$$
+
+Then, the parameter matrix of the [corresponding forward model](/D/cfm) is equal to
+
+$$ \label{eq:cfm-para}
+A = \Sigma_y W \Sigma_x^{-1}
+$$
+
+with the [sample covariance](/D/cov-samp)
+
+$$ \label{eq:Sx-Sy}
+\begin{split}
+\Sigma_x &= \hat{X}^\mathrm{T} \hat{X} \\
+\Sigma_y &= Y^\mathrm{T} Y \; .
+\end{split}
+$$
+
+
+**Proof:** The [corresponding forward model](/D/cfm) is given by
+
+$$ \label{eq:cfm}
+Y = \hat{X} A^\mathrm{T} + E \; ,
+$$
+
+subject to the constraint that predicted $X$ and errors $E$ are uncorrelated:
+
+$$ \label{eq:cfm-con}
+\hat{X}^\mathrm{T} E = 0 \; .
+$$
+
+With that, we can directly derive the parameter matrix $A$:
+
+$$ \label{eq:cfm-para-qed}
+\begin{split}
+Y &\overset{\eqref{eq:cfm}}{=} \hat{X} A^\mathrm{T} + E \\
+\hat{X} A^\mathrm{T} &= Y - E \\
+\hat{X}^\mathrm{T} \hat{X} A^\mathrm{T} &= \hat{X}^\mathrm{T} (Y - E) \\
+\hat{X}^\mathrm{T} \hat{X} A^\mathrm{T} &= \hat{X}^\mathrm{T} Y - \hat{X}^\mathrm{T} E \\
+\hat{X}^\mathrm{T} \hat{X} A^\mathrm{T} &\overset{\eqref{eq:cfm-con}}{=} \hat{X}^\mathrm{T} Y \\
+\hat{X}^\mathrm{T} \hat{X} A^\mathrm{T} &\overset{\eqref{eq:bda}}{=} W^\mathrm{T} Y^\mathrm{T} Y \\
+\Sigma_x A^\mathrm{T} &\overset{\eqref{eq:Sx-Sy}}{=} W^\mathrm{T} \Sigma_y \\
+A^\mathrm{T} &= \Sigma_x^{-1} W^\mathrm{T} \Sigma_y \\
+A &= \Sigma_y W \Sigma_x^{-1} \; .
+\end{split}
+$$
\ No newline at end of file
diff --git a/P/iglm-blue.md b/P/iglm-blue.md
new file mode 100644
index 00000000..8e0745a7
--- /dev/null
+++ b/P/iglm-blue.md
@@ -0,0 +1,114 @@
+---
+layout: proof
+mathjax: true
+
+author: "Joram Soch"
+affiliation: "BCCN Berlin"
+e_mail: "joram.soch@bccn-berlin.de"
+date: 2021-10-21 16:46:00
+
+title: "Best linear unbiased estimator for the inverse general linear model"
+chapter: "Statistical Models"
+section: "Multivariate normal data"
+topic: "Inverse general linear model"
+theorem: "Best linear unbiased estimator"
+
+sources:
+ - authors: "Soch J, Allefeld C, Haynes JD"
+ year: 2020
+ title: "Inverse transformed encoding models – a solution to the problem of correlated trial-by-trial parameter estimates in fMRI decoding"
+ in: "NeuroImage"
+ pages: "vol. 209, art. 116449, Appendix C, Theorem 5"
+ url: "https://www.sciencedirect.com/science/article/pii/S1053811919310407"
+ doi: "10.1016/j.neuroimage.2019.116449"
+
+proof_id: "P268"
+shortcut: "iglm-blue"
+username: "JoramSoch"
+---
+
+
+**Theorem:** Let there be a [general linear model](/D/glm) of $Y \in \mathbb{R}^{n \times v}$
+
+$$ \label{eq:glm}
+Y = X B + E, \; E \sim \mathcal{MN}(0, V, \Sigma)
+$$
+
+[implying the inverse general linear model](/P/iglm-dist) of $X \in \mathbb{R}^{n \times p}$
+
+$$ \label{eq:iglm}
+X = Y W + N, \; N \sim \mathcal{MN}(0, V, \Sigma_x) \; .
+$$
+
+where
+
+$$ \label{eq:BW-Sx}
+B \, W = I_p \quad \text{and} \quad \Sigma_x = W^\mathrm{T} \Sigma W \; .
+$$
+
+Then, the [weighted least squares solution](/P/glm-wls) for $W$ is the [best linear unbiased estimator](/D/blue) of $W$.
+
+
+**Proof:** The [linear transformation theorem for the matrix-normal distribution](/P/matn-ltt) states:
+
+$$ \label{eq:matn-ltt}
+X \sim \mathcal{MN}(M, U, V) \quad \Rightarrow \quad Y = AXB + C \sim \mathcal{MN}(AMB+C, AUA^\mathrm{T}, B^\mathrm{T}VB) \; .
+$$
+
+The [weighted least squares parameter estimates](/P/glm-wls) for \eqref{eq:iglm} are given by
+
+$$ \label{eq:iglm-wls}
+\hat{W} = (Y^\mathrm{T} V^{-1} Y)^{-1} Y^\mathrm{T} V^{-1} X \; .
+$$
+
+The [best linear unbiased estimator](/D/blue) $\hat{\theta}$ of a certain quantity $\theta$ estimated from [measured data](/D/data) $y$ is an estimator 1) resulting from a linear operation $f(y)$, 2) whose expected value is equal to $\theta$ and 3) which has, among all estimators satisfying 1) and 2), the minimum [variance](/D/var).
+
+
+1) First, $\hat{W}$ is a linear estimator, because it is of the form $\tilde{W} = M X$ where $M$ is an arbitrary $v \times n$ matrix.
+
+
+2) Second, $\hat{W}$ is an unbiased estimator, if $\left\langle \hat{W} \right\rangle = W$. By applying \eqref{eq:matn-ltt} to \eqref{eq:iglm}, the distribution of $\tilde{W}$ is
+
+$$ \label{eq:W-hat-dist}
+\tilde{W} = M X \sim \mathcal{MN}(M Y W, M V M^\mathrm{T}, \Sigma_x) \;
+$$
+
+which requires that $M Y = I_v$, so that $\left\langle \tilde{W} \right\rangle = M Y W = W$. This is fulfilled by any matrix $M = (Y^\mathrm{T} V^{-1} Y)^{-1} Y^\mathrm{T} V^{-1} + D$ where $D$ is a $v \times n$ matrix which satisfies $D Y = 0$.
+
+
+3) Third, the [best linear unbiased estimator](/D/blue) is the one with minimum [variance](/D/var), i.e. the one that minimizes the expected squared Frobenius norm of the estimation error
+
+$$ \label{eq:Var-W}
+\mathrm{Var}\left( \tilde{W} \right) = \left\langle \mathrm{tr}\left[ (\tilde{W} - W)^\mathrm{T} (\tilde{W} - W) \right] \right\rangle \; .
+$$
+
+Using the [matrix-normal distribution](/D/matn) of $\tilde{W}$ from \eqref{eq:W-hat-dist}
+
+$$ \label{eq:W-hat-W-dist}
+\left( \tilde{W} - W \right) \sim \mathcal{MN}(0, M V M^\mathrm{T}, \Sigma_x)
+$$
+
+and the property of the [Wishart distribution](/D/wish)
+
+$$ \label{eq:E-XX}
+X \sim \mathcal{MN}(0, U, V) \quad \Rightarrow \quad \left\langle X X^\mathrm{T} \right\rangle = \mathrm{tr}(V) \, U \; ,
+$$
+
+this [variance](/D/var) can be evaluated as a function of $M$:
+
+$$ \label{eq:Var-M}
+\mathrm{Var}\left[ \tilde{W}(M) \right] = \mathrm{tr}(\Sigma_x) \; \mathrm{tr}(M V M^\mathrm{T}) \; .
+$$
+
+As a function of $D$ and using $D Y = 0$, it becomes:
+
+$$ \label{eq:Var-D}
+\begin{split}
+\mathrm{Var}\left[ \tilde{W}(D) \right] &= \mathrm{tr}(\Sigma_x) \; \mathrm{tr}\!\left[ \left( (Y^\mathrm{T} V^{-1} Y)^{-1} Y^\mathrm{T} V^{-1} + D \right) V \left( (Y^\mathrm{T} V^{-1} Y)^{-1} Y^\mathrm{T} V^{-1} + D \right)^\mathrm{T} \right] \\
+&= \mathrm{tr}(\Sigma_x) \; \mathrm{tr}\!\left[ (Y^\mathrm{T} V^{-1} Y)^{-1} \, Y^\mathrm{T} V^{-1} V V^{-1} Y \; (Y^\mathrm{T} V^{-1} Y)^{-1} + \right. \\
+&\hphantom{=\mathrm{tr}(\Sigma_x) \; \mathrm{tr}\!\left[\right.} \left. \, (Y^\mathrm{T} V^{-1} Y)^{-1} Y^\mathrm{T} V^{-1} V D^\mathrm{T} + D V V^{-1} Y (Y^\mathrm{T} V^{-1} Y)^{-1} + D V D^\mathrm{T} \right] \\
+&= \mathrm{tr}(\Sigma_x) \left[ \mathrm{tr}\!\left( (Y^\mathrm{T} V^{-1} Y)^{-1} \right) + \mathrm{tr}\!\left( D V D^\mathrm{T} \right) \right] \; .
+\end{split}
+$$
+
+Since $D V D^\mathrm{T}$ is a positive-semidefinite matrix, all its eigenvalues are non-negative. Because the trace of a square matrix is the sum of its eigenvalues, the minimum variance is achieved by $D = 0$, thus producing $\hat{W}$ as in \eqref{eq:iglm-wls}.
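+
+For illustration, in the special case of uncorrelated observations $V = I_n$, the estimate \eqref{eq:iglm-wls} reduces to the ordinary least squares solution
+
+$$
+\hat{W} = (Y^\mathrm{T} Y)^{-1} Y^\mathrm{T} X \; .
+$$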
\ No newline at end of file
diff --git a/P/iglm-dist.md b/P/iglm-dist.md
new file mode 100644
index 00000000..fa22a8cf
--- /dev/null
+++ b/P/iglm-dist.md
@@ -0,0 +1,70 @@
+---
+layout: proof
+mathjax: true
+
+author: "Joram Soch"
+affiliation: "BCCN Berlin"
+e_mail: "joram.soch@bccn-berlin.de"
+date: 2021-10-21 16:03:00
+
+title: "Distribution of the inverse general linear model"
+chapter: "Statistical Models"
+section: "Multivariate normal data"
+topic: "Inverse general linear model"
+theorem: "Derivation of the distribution"
+
+sources:
+ - authors: "Soch J, Allefeld C, Haynes JD"
+ year: 2020
+ title: "Inverse transformed encoding models – a solution to the problem of correlated trial-by-trial parameter estimates in fMRI decoding"
+ in: "NeuroImage"
+ pages: "vol. 209, art. 116449, Appendix C, Theorem 4"
+ url: "https://www.sciencedirect.com/science/article/pii/S1053811919310407"
+ doi: "10.1016/j.neuroimage.2019.116449"
+
+proof_id: "P267"
+shortcut: "iglm-dist"
+username: "JoramSoch"
+---
+
+
+**Theorem:** Let there be a [general linear model](/D/glm) of $Y \in \mathbb{R}^{n \times v}$
+
+$$ \label{eq:glm}
+Y = X B + E, \; E \sim \mathcal{MN}(0, V, \Sigma) \; .
+$$
+
+Then, the [inverse general linear model](/D/iglm) of $X \in \mathbb{R}^{n \times p}$ is given by
+
+$$ \label{eq:iglm}
+X = Y W + N, \; N \sim \mathcal{MN}(0, V, \Sigma_x)
+$$
+
+where $W \in \mathbb{R}^{v \times p}$ is a matrix, such that $B \, W = I_p$, and the covariance across columns is $\Sigma_x = W^\mathrm{T} \Sigma W$.
+
+
+**Proof:** The [linear transformation theorem for the matrix-normal distribution](/P/matn-ltt) states:
+
+$$ \label{eq:matn-ltt}
+X \sim \mathcal{MN}(M, U, V) \quad \Rightarrow \quad Y = AXB + C \sim \mathcal{MN}(AMB+C, AUA^\mathrm{T}, B^\mathrm{T}VB) \; .
+$$
+
+Such a matrix $W$ exists if the rows of $B \in \mathbb{R}^{p \times v}$ are linearly independent, i.e. if $\mathrm{rk}(B) = p$. Then, right-multiplying the model \eqref{eq:glm} with $W$ and applying \eqref{eq:matn-ltt} yields
+
+$$ \label{eq:iglm-s1}
+Y W = X B W + E W, \; E W \sim \mathcal{MN}(0, V, W^\mathrm{T} \Sigma W) \; .
+$$
+
+Applying $B \, W = I_p$ and rearranging, we have
+
+$$ \label{eq:iglm-s2}
+X = Y W - E W, \; E W \sim \mathcal{MN}(0, V, W^\mathrm{T} \Sigma W) \; .
+$$
+
+Substituting $N = - E W$, we get
+
+$$ \label{eq:iglm-s3}
+X = Y W + N, \; N \sim \mathcal{MN}(0, V, W^\mathrm{T} \Sigma W)
+$$
+
+which is equivalent to \eqref{eq:iglm}.
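+
+For example, one matrix satisfying $B \, W = I_p$ (an illustrative choice, not the only one) is the Moore-Penrose pseudoinverse of $B$ which exists when $\mathrm{rk}(B) = p$:
+
+$$
+W = B^+ = B^\mathrm{T} \left( B B^\mathrm{T} \right)^{-1} \; , \quad \text{such that} \quad B W = B B^\mathrm{T} \left( B B^\mathrm{T} \right)^{-1} = I_p \; .
+$$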
\ No newline at end of file
diff --git a/P/tglm-dist.md b/P/tglm-dist.md
new file mode 100644
index 00000000..84cde394
--- /dev/null
+++ b/P/tglm-dist.md
@@ -0,0 +1,90 @@
+---
+layout: proof
+mathjax: true
+
+author: "Joram Soch"
+affiliation: "BCCN Berlin"
+e_mail: "joram.soch@bccn-berlin.de"
+date: 2021-10-21 15:03:00
+
+title: "Distribution of the transformed general linear model"
+chapter: "Statistical Models"
+section: "Multivariate normal data"
+topic: "Transformed general linear model"
+theorem: "Derivation of the distribution"
+
+sources:
+ - authors: "Soch J, Allefeld C, Haynes JD"
+ year: 2020
+ title: "Inverse transformed encoding models – a solution to the problem of correlated trial-by-trial parameter estimates in fMRI decoding"
+ in: "NeuroImage"
+ pages: "vol. 209, art. 116449, Appendix A, Theorem 1"
+ url: "https://www.sciencedirect.com/science/article/pii/S1053811919310407"
+ doi: "10.1016/j.neuroimage.2019.116449"
+
+proof_id: "P265"
+shortcut: "tglm-dist"
+username: "JoramSoch"
+---
+
+
+**Theorem:** Let there be two [general linear models](/D/glm) of measured data $Y$
+
+$$ \label{eq:glm1}
+Y = X B + E, \; E \sim \mathcal{MN}(0, V, \Sigma)
+$$
+
+$$ \label{eq:glm2}
+Y = X_t \Gamma + E_t, \; E_t \sim \mathcal{MN}(0, V, \Sigma_t)
+$$
+
+and a matrix $T$ transforming $X_t$ into $X$:
+
+$$ \label{eq:X-Xt-T}
+X = X_t \, T \; .
+$$
+
+Then, the [transformed general linear model](/D/tglm) is given by
+
+$$ \label{eq:tglm}
+\hat{\Gamma} = T B + H, \; H \sim \mathcal{MN}(0, U, \Sigma)
+$$
+
+where the covariance across rows is $U = ( X_t^\mathrm{T} V^{-1} X_t )^{-1}$.
+
+
+**Proof:** The [linear transformation theorem for the matrix-normal distribution](/P/matn-ltt) states:
+
+$$ \label{eq:matn-ltt}
+X \sim \mathcal{MN}(M, U, V) \quad \Rightarrow \quad Y = AXB + C \sim \mathcal{MN}(AMB+C, AUA^\mathrm{T}, B^\mathrm{T}VB) \; .
+$$
+
+The [weighted least squares parameter estimates](/P/glm-wls) for \eqref{eq:glm2} are given by
+
+$$ \label{eq:glm2-wls}
+\hat{\Gamma} = ( X_t^\mathrm{T} V^{-1} X_t )^{-1} X_t^\mathrm{T} V^{-1} Y \; .
+$$
+
+Using \eqref{eq:glm1} and \eqref{eq:matn-ltt}, the distribution of $Y$ is
+
+$$ \label{eq:Y-dist}
+Y \sim \mathcal{MN}(X B, V, \Sigma) \; .
+$$
+
+Combining \eqref{eq:glm2-wls} with \eqref{eq:Y-dist}, the distribution of $\hat{\Gamma}$ is
+
+$$ \label{eq:G-dist}
+\begin{split}
+\hat{\Gamma} &\sim \mathcal{MN}\left( \left[ ( X_t^\mathrm{T} V^{-1} X_t )^{-1} X_t^\mathrm{T} V^{-1} \right] X B, \left[ ( X_t^\mathrm{T} V^{-1} X_t )^{-1} X_t^\mathrm{T} V^{-1} \right] V \left[ V^{-1} X_t ( X_t^\mathrm{T} V^{-1} X_t )^{-1} \right], \Sigma \right) \\
+&\sim \mathcal{MN}\left( ( X_t^\mathrm{T} V^{-1} X_t )^{-1} X_t^\mathrm{T} V^{-1} X_t \, T B, ( X_t^\mathrm{T} V^{-1} X_t )^{-1} X_t^\mathrm{T} V^{-1} X_t ( X_t^\mathrm{T} V^{-1} X_t )^{-1}, \Sigma \right) \\
+&\sim \mathcal{MN}\left( T B, ( X_t^\mathrm{T} V^{-1} X_t )^{-1}, \Sigma \right) \; .
+\end{split}
+$$
+
+This can also be written as
+
+$$ \label{eq:tglm-qed}
+\hat{\Gamma} = T B + H, \; H \sim \mathcal{MN}\left( 0, ( X_t^\mathrm{T} V^{-1} X_t )^{-1}, \Sigma \right)
+$$
+
+which is equivalent to \eqref{eq:tglm}.
\ No newline at end of file
diff --git a/P/tglm-para.md b/P/tglm-para.md
new file mode 100644
index 00000000..6f16fdac
--- /dev/null
+++ b/P/tglm-para.md
@@ -0,0 +1,89 @@
+---
+layout: proof
+mathjax: true
+
+author: "Joram Soch"
+affiliation: "BCCN Berlin"
+e_mail: "joram.soch@bccn-berlin.de"
+date: 2021-10-21 15:25:00
+
+title: "Equivalence of parameter estimates from the transformed general linear model"
+chapter: "Statistical Models"
+section: "Multivariate normal data"
+topic: "Transformed general linear model"
+theorem: "Equivalence of parameter estimates"
+
+sources:
+ - authors: "Soch J, Allefeld C, Haynes JD"
+ year: 2020
+ title: "Inverse transformed encoding models – a solution to the problem of correlated trial-by-trial parameter estimates in fMRI decoding"
+ in: "NeuroImage"
+ pages: "vol. 209, art. 116449, Appendix A, Theorem 2"
+ url: "https://www.sciencedirect.com/science/article/pii/S1053811919310407"
+ doi: "10.1016/j.neuroimage.2019.116449"
+
+proof_id: "P266"
+shortcut: "tglm-para"
+username: "JoramSoch"
+---
+
+
+**Theorem:** Let there be a [general linear model](/D/glm)
+
+$$ \label{eq:glm1}
+Y = X B + E, \; E \sim \mathcal{MN}(0, V, \Sigma)
+$$
+
+and the [transformed general linear model](/D/tglm)
+
+$$ \label{eq:tglm}
+\hat{\Gamma} = T B + H, \; H \sim \mathcal{MN}(0, U, \Sigma)
+$$
+
+which are linked to each other via
+
+$$ \label{eq:glm2-wls}
+\hat{\Gamma} = ( X_t^\mathrm{T} V^{-1} X_t )^{-1} X_t^\mathrm{T} V^{-1} Y
+$$
+
+and
+
+$$ \label{eq:X-Xt-T}
+X = X_t \, T \; .
+$$
+
+Then, the parameter estimates from \eqref{eq:glm1} and \eqref{eq:tglm} are equivalent.
+
+
+**Proof:** The [weighted least squares parameter estimates](/P/glm-wls) for \eqref{eq:glm1} are given by
+
+$$ \label{eq:glm1-wls}
+\hat{B} = (X^\mathrm{T} V^{-1} X)^{-1} X^\mathrm{T} V^{-1} Y
+$$
+
+and the [weighted least squares parameter estimates](/P/glm-wls) for \eqref{eq:tglm} are given by
+
+$$ \label{eq:tglm-wls}
+\hat{B} = (T^\mathrm{T} U^{-1} T)^{-1} T^\mathrm{T} U^{-1} \hat{\Gamma} \; .
+$$
+
+The [covariance across rows for the transformed general linear model](/P/tglm-dist) is equal to
+
+$$ \label{eq:U}
+U = ( X_t^\mathrm{T} V^{-1} X_t )^{-1} \; .
+$$
+
+Applying \eqref{eq:U}, \eqref{eq:X-Xt-T} and \eqref{eq:glm2-wls}, the estimates in \eqref{eq:tglm-wls} can be developed into
+
+$$ \label{eq:tglm-wls-dev}
+\begin{split}
+\hat{B} \; &\overset{\eqref{eq:tglm-wls}}{=} ( T^\mathrm{T} \, U^{-1} \, T )^{-1} \, T^\mathrm{T} \, U^{-1} \, \hat{\Gamma} \\
+&\overset{\eqref{eq:U}}{=} ( T^\mathrm{T} \left[ X_t^\mathrm{T} V^{-1} X_t \right] T )^{-1} \, T^\mathrm{T} \left[ X_t^\mathrm{T} V^{-1} X_t \right] \hat{\Gamma} \\
+&\overset{\eqref{eq:X-Xt-T}}{=} ( X^\mathrm{T} V^{-1} X )^{-1} \, T^\mathrm{T} \, X_t^\mathrm{T} V^{-1} X_t \, \hat{\Gamma} \\
+&\overset{\eqref{eq:glm2-wls}}{=} ( X^\mathrm{T} V^{-1} X )^{-1} \, T^\mathrm{T} \, X_t^\mathrm{T} V^{-1} X_t \left[ ( X_t^\mathrm{T} V^{-1} X_t )^{-1} X_t^\mathrm{T} V^{-1} Y \right] \\
+&= ( X^\mathrm{T} V^{-1} X )^{-1} \, T^\mathrm{T} \, X_t^\mathrm{T} V^{-1} Y \\
+&\overset{\eqref{eq:X-Xt-T}}{=} ( X^\mathrm{T} V^{-1} X )^{-1} X^\mathrm{T} V^{-1} Y
+\end{split}
+$$
+
+which is equivalent to the estimates in \eqref{eq:glm1-wls}.
\ No newline at end of file