36 changes: 36 additions & 0 deletions D/ci.md
@@ -0,0 +1,36 @@
---
layout: definition
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2022-03-27 23:56:00

title: "Confidence interval"
chapter: "General Theorems"
section: "Estimation theory"
topic: "Interval estimates"
definition: "Confidence interval"

sources:
- authors: "Wikipedia"
year: 2022
title: "Confidence interval"
in: "Wikipedia, the free encyclopedia"
pages: "retrieved on 2022-03-27"
url: "https://en.wikipedia.org/wiki/Confidence_interval#Definition"

def_id: "D174"
shortcut: "ci"
username: "JoramSoch"
---


**Definition:** Let $y$ be a [random sample](/D/samp) from a [probability distribution](/D/dist) governed by a [parameter](/D/para) of interest $\theta$ and by quantities not of interest $\varphi$. A confidence interval for $\theta$ is defined as an interval $[u(y), v(y)]$, determined by the [random variables](/D/rvar) $u(y)$ and $v(y)$, with the property

$$ \label{eq:ci}
\mathrm{Pr}(u(y) < \theta < v(y) \, \vert \, \theta, \varphi) = \gamma \quad \text{for all} \quad (\theta, \varphi) \; ,
$$

where $\gamma = 1 - \alpha$ is called the confidence level.
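As a numerical sketch of this definition (not part of the original entry), the following hypothetical example checks the coverage property by simulation for the mean $\theta$ of a normal distribution with known standard deviation $\sigma$, using the standard interval $\bar{y} \pm z \cdot \sigma / \sqrt{n}$:

```python
import numpy as np

# Hypothetical illustration: empirical coverage of [u(y), v(y)] for the
# mean theta of a normal distribution with known standard deviation sigma,
# at confidence level gamma = 0.95.
rng = np.random.default_rng(1)
theta, sigma, n, gamma = 5.0, 2.0, 25, 0.95
z = 1.959964  # standard normal quantile Phi^{-1}((1 + gamma) / 2)

n_sim = 10_000
covered = 0
for _ in range(n_sim):
    y = rng.normal(theta, sigma, size=n)   # random sample from N(theta, sigma^2)
    u = y.mean() - z * sigma / np.sqrt(n)  # lower bound u(y)
    v = y.mean() + z * sigma / np.sqrt(n)  # upper bound v(y)
    covered += (u < theta < v)

print(covered / n_sim)  # empirical coverage, close to gamma = 0.95
```

The fraction of simulated intervals containing the true $\theta$ approximates the confidence level $\gamma$, as the definition requires.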
36 changes: 36 additions & 0 deletions D/mse.md
@@ -0,0 +1,36 @@
---
layout: definition
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2022-03-27 23:41:00

title: "Mean squared error"
chapter: "General Theorems"
section: "Estimation theory"
topic: "Point estimates"
definition: "Mean squared error"

sources:
- authors: "Wikipedia"
year: 2022
title: "Estimator"
in: "Wikipedia, the free encyclopedia"
pages: "retrieved on 2022-03-27"
url: "https://en.wikipedia.org/wiki/Estimator#Mean_squared_error"

def_id: "D173"
shortcut: "mse"
username: "JoramSoch"
---


**Definition:** Let $\hat{\theta}$ be an [estimator](/D/est) of an unknown [parameter](/D/para) $\theta$ based on measured [data](/D/data) $y$. Then, the mean squared error is defined as the [expected value](/D/mean) of the squared difference between the estimated value and the true value of the parameter:

$$ \label{eq:mse}
\mathrm{MSE} = \mathrm{E}_{\hat{\theta}}\left[ \left( \hat{\theta} - \theta \right)^2 \right] \; ,
$$

where $\mathrm{E}_{\hat{\theta}}\left[ \cdot \right]$ denotes the expectation calculated over all possible [samples](/D/samp) $y$ leading to values of $\hat{\theta}$.
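As a numerical sketch of this definition (not part of the original entry), the following hypothetical example approximates the MSE of the sample mean by Monte Carlo simulation; for this estimator, theory gives $\mathrm{MSE} = \sigma^2 / n$:

```python
import numpy as np

# Hypothetical illustration: Monte Carlo approximation of the MSE of the
# sample mean, used as an estimator theta_hat of the mean theta of a
# normal distribution. For this estimator, MSE = sigma^2 / n.
rng = np.random.default_rng(2)
theta, sigma, n = 3.0, 2.0, 10

# draw many samples y and record the estimate theta_hat for each
theta_hat = np.array([rng.normal(theta, sigma, size=n).mean()
                      for _ in range(100_000)])
mse = np.mean((theta_hat - theta) ** 2)  # E[(theta_hat - theta)^2]
print(mse)  # close to sigma^2 / n = 0.4
```

Averaging the squared estimation error over many simulated samples realizes the expectation $\mathrm{E}_{\hat{\theta}}[\cdot]$ in the definition.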
9 changes: 6 additions & 3 deletions I/ToC.md
@@ -51,7 +51,8 @@ title: "Table of Contents"
&emsp;&ensp; 1.4.5. **[Range of probability](/P/prob-range)** <br>
&emsp;&ensp; 1.4.6. **[Addition law of probability](/P/prob-add)** <br>
&emsp;&ensp; 1.4.7. **[Law of total probability](/P/prob-tot)** <br>
&emsp;&ensp; 1.4.8. **[Probability of exhaustive events](/P/prob-exh)** <br>
&emsp;&ensp; 1.4.8. **[Probability of exhaustive events](/P/prob-exh)** (1) <br>
&emsp;&ensp; 1.4.9. **[Probability of exhaustive events](/P/prob-exh2)** (2) <br>

1.5. Probability distributions <br>
&emsp;&ensp; 1.5.1. *[Probability distribution](/D/dist)* <br>
@@ -218,10 +219,12 @@ title: "Table of Contents"
3. Estimation theory

3.1. Point estimates <br>
&emsp;&ensp; 3.1.1. **[Partition of the mean squared error into bias and variance](/P/mse-bnv)** <br>
&emsp;&ensp; 3.1.1. *[Mean squared error](/D/mse)* <br>
&emsp;&ensp; 3.1.2. **[Partition of the mean squared error into bias and variance](/P/mse-bnv)** <br>

3.2. Interval estimates <br>
&emsp;&ensp; 3.2.1. **[Construction of confidence intervals using Wilks' theorem](/P/ci-wilks)** <br>
&emsp;&ensp; 3.2.1. *[Confidence interval](/D/ci)* <br>
&emsp;&ensp; 3.2.2. **[Construction of confidence intervals using Wilks' theorem](/P/ci-wilks)** <br>

4. Frequentist statistics

86 changes: 86 additions & 0 deletions P/prob-exh2.md
@@ -0,0 +1,86 @@
---
layout: proof
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2022-03-27 23:14:00

title: "Probability of exhaustive events"
chapter: "General Theorems"
section: "Probability theory"
topic: "Probability axioms"
theorem: "Probability of exhaustive events"

sources:
- authors: "Alan Stuart & J. Keith Ord"
year: 1994
title: "Probability and Statistical Inference"
in: "Kendall's Advanced Theory of Statistics, Vol. 1: Distribution Theory"
pages: "pp. 288-289"
url: "https://www.wiley.com/en-us/Kendall%27s+Advanced+Theory+of+Statistics%2C+3+Volumes%2C+Set%2C+6th+Edition-p-9780470669549"
- authors: "Wikipedia"
year: 2022
title: "Probability axioms"
in: "Wikipedia, the free encyclopedia"
pages: "retrieved on 2022-03-27"
url: "https://en.wikipedia.org/wiki/Probability_axioms#Consequences"

proof_id: "P319"
shortcut: "prob-exh2"
username: "JoramSoch"
---


**Theorem:** Let $B_1, \ldots, B_n$ be [mutually exclusive](/D/exc) and collectively exhaustive subsets of a [sample space](/D/samp-spc) $\Omega$. Then, their [total probability](/P/prob-tot) is one:

$$ \label{eq:prob-exh}
\sum_i P(B_i) = 1 \; .
$$


**Proof:** The [addition law of probability](/P/prob-add) states that for two [events](/D/reve) $A$ and $B$, the [probability](/D/prob) of at least one of them occurring is:

$$ \label{eq:prob-add}
P(A \cup B) = P(A) + P(B) - P(A \cap B) \; .
$$

Recursively applying this law to the events $B_1, \ldots, B_n$, we have:

$$ \label{eq:prob-all-s1}
\begin{split}
P(B_1 \cup \ldots \cup B_n) &= P(B_1) + P(B_2 \cup \ldots \cup B_n) - P(B_1 \cap [B_2 \cup \ldots \cup B_n]) \\
&= P(B_1) + P(B_2) + P(B_3 \cup \ldots \cup B_n) - P(B_2 \cap [B_3 \cup \ldots \cup B_n])- P(B_1 \cap [B_2 \cup \ldots \cup B_n]) \\
&\;\; \vdots \\
&= P(B_1) + \ldots + P(B_n) - P(B_1 \cap [B_2 \cup \ldots \cup B_n]) - \ldots - P(B_{n-1} \cap B_n) \\
P(\cup_i^n \, B_i) &= \sum_i^n P(B_i) - \sum_i^{n-1} P(B_i \cap [\cup_{j=i+1}^n B_j]) \\
&= \sum_i^n P(B_i) - \sum_i^{n-1} P(\cup_{j=i+1}^n [B_i \cap B_j]) \; .
\end{split}
$$

Because all $B_i$ are mutually exclusive, we have:

$$ \label{eq:B-exclusive}
B_i \cap B_j = \emptyset \quad \text{for all} \quad i \neq j \; .
$$

Since [the probability of the empty set is zero](/P/prob-emp), this means that the second sum on the right-hand side of \eqref{eq:prob-all-s1} disappears:

$$ \label{eq:prob-all-s2}
P(\cup_i^n \, B_i) = \sum_i^n P(B_i) \; .
$$

Because the $B_i$ are collectively exhaustive, we have:

$$ \label{eq:B-exhaustive}
\cup_i \, B_i = \Omega \; .
$$

Since [the probability of the sample space is one](/D/prob-ax), this means that the left-hand side of \eqref{eq:prob-all-s2} becomes equal to one:

$$ \label{eq:prob-all-s3}
1 = \sum_i^n P(B_i) \; .
$$

This proves the statement in \eqref{eq:prob-exh}.
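As a concrete sketch of the theorem (not part of the original proof), the following hypothetical example checks the statement for a fair six-sided die, where the events $B_1 = \left\lbrace 1, 2 \right\rbrace$, $B_2 = \left\lbrace 3, 4, 5 \right\rbrace$ and $B_3 = \left\lbrace 6 \right\rbrace$ form a partition of the sample space:

```python
from fractions import Fraction

# Hypothetical illustration: for a fair six-sided die, the events
# B1 = {1, 2}, B2 = {3, 4, 5} and B3 = {6} are mutually exclusive and
# collectively exhaustive, so their probabilities must sum to one.
omega = {1, 2, 3, 4, 5, 6}
partition = [{1, 2}, {3, 4, 5}, {6}]

def P(event):
    """Probability of an event under the uniform distribution on omega."""
    return Fraction(len(event), len(omega))

# mutually exclusive: all pairwise intersections are empty
assert all(not (B_i & B_j)
           for i, B_i in enumerate(partition)
           for B_j in partition[i + 1:])
# collectively exhaustive: the union equals the sample space
assert set().union(*partition) == omega

print(sum(P(B) for B in partition))  # sums to 1, as the theorem states
```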