12 changes: 6 additions & 6 deletions Chapter1_Introduction/Chapter1.ipynb
@@ -16,7 +16,7 @@
"and Bayesian Methods for Hackers \n",
"========\n",
"\n",
"#####Version 0.1\n",
"##### Version 0.1\n",
"Welcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!"
]
},
@@ -103,7 +103,7 @@
"This is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n",
"\n",
"\n",
"####Incorporating evidence\n",
"#### Incorporating evidence\n",
"\n",
"As we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n",
"\n",
@@ -393,7 +393,7 @@
"\n",
"- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories. \n",
"\n",
"###Discrete Case\n",
"### Discrete Case\n",
"If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n",
"\n",
"$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n",
@@ -466,7 +466,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"###Continuous Case\n",
"### Continuous Case\n",
"Instead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n",
"\n",
"$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n",
@@ -521,7 +521,7 @@
"metadata": {},
"source": [
"\n",
"###But what is $\\lambda \\;$?\n",
"### But what is $\\lambda \\;$?\n",
"\n",
"\n",
"**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n",
@@ -841,7 +841,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"###Why would I want samples from the posterior, anyways?\n",
"### Why would I want samples from the posterior, anyways?\n",
"\n",
"\n",
"We will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n",