16 | 16 | "and Bayesian Methods for Hackers \n",
17 | 17 | "========\n",
18 | 18 | "\n",
19 |    | - "#####Version 0.1\n",
   | 19 | + "##### Version 0.1\n",
20 | 20 | "Welcome to *Bayesian Methods for Hackers*. The full GitHub repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!"
21 | 21 | ]
22 | 22 | },
103 | 103 | "This is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n",
104 | 104 | "\n",
105 | 105 | "\n",
106 |     | - "####Incorporating evidence\n",
    | 106 | + "#### Incorporating evidence\n",
107 | 107 | "\n",
108 | 108 | "As we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n",
109 | 109 | "\n",
393 | 393 | "\n",
394 | 394 | "- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous values, i.e. they combine the two categories above. \n",
395 | 395 | "\n",
396 |     | - "###Discrete Case\n",
    | 396 | + "### Discrete Case\n",
397 | 397 | "If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n",
398 | 398 | "\n",
399 | 399 | "$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n",
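As a quick sanity check on this mass function, a sketch using `scipy.stats` (the library choice and the two λ values are illustrative assumptions, not from the text):

```python
import numpy as np
from scipy import stats

k = np.arange(8)
for lam in [1.5, 4.25]:
    pmf = stats.poisson.pmf(k, lam)        # P(Z = k) for k = 0..7
    print(f"lambda={lam}:", np.round(pmf, 3))
    # summing over all k (not just 0..7) would give exactly 1
```

Larger λ shifts probability toward larger values of $k$, which is why λ is often read as the intensity of the Poisson.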
466 | 466 | "cell_type": "markdown",
|
467 | 467 | "metadata": {},
|
468 | 468 | "source": [
|
469 |     | - "###Continuous Case\n",
    | 469 | + "### Continuous Case\n",
470 | 470 | "Instead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n",
471 | 471 | "\n",
472 | 472 | "$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n",
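A matching sketch for the density (again with `scipy.stats` as an assumed tool; note that scipy parameterizes the exponential by scale $= 1/\lambda$, so the conversion below is the easy thing to get wrong):

```python
import numpy as np
from scipy import stats

lam = 0.5
z = np.linspace(0, 4, 5)
# scipy's expon takes scale = 1 / lambda, not lambda itself
density = stats.expon.pdf(z, scale=1.0 / lam)
print(np.round(density, 3))   # equals lam * np.exp(-lam * z)
```

Unlike the Poisson output above, these numbers are densities, not probabilities: probabilities come from integrating $f_Z$ over an interval, which is exactly why the two objects are "different creatures".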
521 | 521 | "metadata": {},
522 | 522 | "source": [
523 | 523 | "\n",
524 |     | - "###But what is $\lambda \;$?\n",
    | 524 | + "### But what is $\lambda \;$?\n",
525 | 525 | "\n",
|
526 | 526 | "\n",
|
527 | 527 | "**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n",
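To make "working backwards" concrete, here is a hedged sketch of one classical answer (maximum likelihood, a standard method that this chapter has not introduced): for Poisson data the MLE of λ is simply the sample mean. It is a bare point estimate, with different samples giving different answers and no statement of uncertainty, which is the gap Bayesian inference fills with a distribution over λ.

```python
import numpy as np

rng = np.random.default_rng(1)
true_lam = 4.25   # hidden in the real world; known here only because we simulate

for trial in range(3):
    z = rng.poisson(true_lam, size=30)
    # the maximum-likelihood estimate of a Poisson rate is the sample mean
    print(f"trial {trial}: lambda_hat = {z.mean():.2f}")
```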
841 | 841 | "cell_type": "markdown",
|
842 | 842 | "metadata": {},
|
843 | 843 | "source": [
|
844 |     | - "###Why would I want samples from the posterior, anyways?\n",
    | 844 | + "### Why would I want samples from the posterior, anyways?\n",
845 | 845 | "\n",
846 | 846 | "\n",
847 | 847 | "We will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n",
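A small sketch of the payoff (the "posterior samples" below are a stand-in array drawn from a Gamma, since no MCMC has been run here; in the book such samples come out of the inference machinery): once you hold samples, any question about the unknown reduces to arithmetic on an array.

```python
import numpy as np

# stand-in for real MCMC output: pretend these are posterior draws of lambda
posterior_samples = np.random.default_rng(2).gamma(shape=5.0, scale=0.8, size=10_000)

print(f"posterior mean        : {posterior_samples.mean():.3f}")
print(f"P(lambda > 5)         : {(posterior_samples > 5).mean():.3f}")
lo, hi = np.percentile(posterior_samples, [2.5, 97.5])
print(f"95% credible interval : ({lo:.2f}, {hi:.2f})")
```

Expectations, tail probabilities, and credible intervals all fall out as one-liners, with no extra calculus per question.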