\documentclass[english, course]{Notes}
\usepackage{bbm}
\title{ORF 526}
\subject{Probability}
\author{Emily Walters}
\email{ewalters@princeton.edu}
\speaker{Miklos Racz}
\date{12}{09}{2018}
\dateend{31}{12}{2018}
\place{Friend Center 008}
\begin{document}
\section{9/13/2018}
\subsection{LLN and CLT}
For a sequence of independent fair coin flips, let $S_n$ be the number of heads among the first $n$ flips.
\begin{theorem}{Weak Law of Large Numbers}
$\forall \epsilon > 0$, $\lim_{n \to \infty} \mathbb{P}(|\frac{S_n}{n} - \frac{1}{2}| \geq \epsilon) = 0$\\
\end{theorem}
\begin{theorem}{Strong LLN}
$\mathbb{P}(\lim_{n \to \infty} \frac{S_n}{n} = \frac{1}{2}) = 1$\\
\end{theorem}
\begin{theorem}{General Strong LLN}
If $\{x_i\}^{\infty}_{i = 1}$ are I.I.D. random variables such that $\mathbb{E}[|x_i|] < \infty$, then with $S_n = x_1 + \dots + x_n$, $\lim_{n \to \infty} \frac{S_n}{n} = \mathbb{E}[x_1] = \mu$ almost surely.
\end{theorem}
\subsection{Exercise}
Come up with some $\{x_i\}^\infty_{i=1}$ such that (A) the law of large numbers does not hold and (B) the law of large numbers does hold but the central limit theorem does not.\\
Problem: Throw a (fair) die until you get a 6. What is the expected number of throws (including the one that gives us a 6) conditioned on the event that all throws give even numbers?
\section{9/18/2018}
\subsection{Problem from last week}
Problem: Throw a (fair) die until you get a 6. What is the expected number of throws (including the one that gives us a 6) conditioned on the event that all throws give even numbers?
P. Cuff: Let $T$ be the first time when the throw is not a $2$ or a $4$.\\
$T \sim Geo(\frac{2}{3})$, $\mathbb{E}[T] = \frac{3}{2}$. However, $T$ is independent of what the die roll actually is, so our solution is $\mathbb{E}[T \mid X_T = 6] = \mathbb{E}[T] = \frac{3}{2}$. This example demonstrates the importance of being careful when conditioning.\\
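As a sanity check (a direct computation, not given in lecture): let $N$ be the number of throws and $A$ the event that every throw is even. Then $\mathbb{P}(N = k, A) = (\frac{2}{6})^{k-1} \cdot \frac{1}{6} = \frac{1}{6}(\frac{1}{3})^{k-1}$, so $\mathbb{P}(A) = \sum_{k \geq 1} \frac{1}{6}(\frac{1}{3})^{k-1} = \frac{1}{4}$, and
\[\mathbb{E}[N \mid A] = \frac{1}{\mathbb{P}(A)}\sum_{k \geq 1} k \cdot \frac{1}{6}\Big(\frac{1}{3}\Big)^{k-1} = \frac{2}{3} \cdot \frac{1}{(1 - \frac{1}{3})^2} = \frac{3}{2}\]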
\subsection{Probability and Measure Theory}
In 1933, Kolmogorov laid down the mathematical foundations of probability theory using measure theory. Measure theory allows us to unify the theory dealing with discrete and continuous probability cases, deals with mixtures of both discrete and continuous cases, and deals with cases that are neither discrete nor continuous. Measure theory allows us to work with much more exotic probability problems. Ex: Stochastic processes in path space.\\
\subsection{Measure Theory}
\begin{definition}
A collection $C$ of subsets of $E$ (the universe) is called an algebra if:\\
1: $\emptyset \in C$\\
2: $A \in C \implies A^c \in C$\\
3: $A, B \in C \implies A \cup B \in C$.\\
$C$ is a $\sigma$-algebra if, in addition to (1) and (2),\\
3': $A_1, A_2, \dots \in C \implies \bigcup^\infty_{i = 1}A_i \in C$.
\end{definition}
Examples:\\
1: $\mathcal{E} = \{\emptyset, E\}$ trivial $\sigma$-algebra\\
2: $\mathcal{E} = 2^E$ discrete $\sigma$-algebra
\subsection{Intersections and Unions of $\sigma$-Algebras}
\begin{itemize}
\item Any (countable or uncountable) intersection of $\sigma$-algebras is a $\sigma$-algebra.
\item The union of two $\sigma$-algebras is not necessarily a $\sigma$-algebra.\\
\end{itemize}
\begin{definition}{Generated $\sigma$-algebra}
Let $\mathcal{C}$ be a collection of subsets of $E$. Take all $\sigma$-algebras that contain $\mathcal{C}$. Take their intersection. This $\sigma$-algebra is called the $\sigma$-algebra generated by $\mathcal{C}$, and is denoted by $\sigma(\mathcal{C})$.
\end{definition}
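For example, if $A \subset E$, then $\sigma(\{A\}) = \{\emptyset, A, A^c, E\}$: this collection is a $\sigma$-algebra containing $A$, and any $\sigma$-algebra containing $A$ must contain all four sets.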
\subsection{Topological Spaces, Borel $\sigma$-Algebras, Borel Sets}
\begin{definition}
If $E$ is a topological space, and $\mathcal{C}$ is the collection of all open sets of $E$, then $\sigma (\mathcal{C})$ is called the Borel $\sigma$-algebra. Its elements are called Borel sets. The Borel $\sigma$-algebra is denoted by $\mathcal{B}_E$ or $\mathcal{B}(E)$.
\end{definition}
\subsection{Measurable Spaces}
\begin{definition}
A pair $(E, \mathcal{E})$ is a measurable space if $\mathcal{E}$ is a $\sigma$-algebra on $E$. The sets in $\mathcal{E}$ are called measurable sets.
\end{definition}
\begin{definition}
Let $(E, \mathcal{E})$ and $(F, \mathcal{F})$ be two measurable spaces. If $A \subset E$ and $B \subset F$ are measurable sets, then $A \times B$ is called a measurable rectangle.\\
\end{definition}
\begin{definition}
The product $(E \times F, \mathcal{E} \otimes \mathcal{F})$ where $\mathcal{E} \otimes \mathcal{F} = \sigma(\{A \times B \mid A \in \mathcal{E}$, $B \in \mathcal{F}\})$, is a measurable space.\\
\end{definition}
\begin{definition}
$\mu: \mathcal{E} \to \mathbb{R^+}$ is a measure on $(E, \mathcal{E})$ if
\begin{enumerate}
\item $\mu(\emptyset) = 0$
\item If $A_1, A_2, \dots \in \mathcal{E}$ are pairwise disjoint, then $\mu(\bigcup^\infty_{i = 1} A_i) = \sum^\infty_{i = 1}\mu(A_i)$
\end{enumerate}
\end{definition}
Property (2) is called countable additivity or $\sigma$-additivity.\\
\begin{definition}
A probability measure is a measure $\mu$ such that $\mu(E) = 1$.
\end{definition}
\begin{definition}
A probability space is a triple $(E, \mathcal{E}, \mu)$ such that $(E, \mathcal{E})$ is a measurable space, and $\mu$ is a probability measure.
\end{definition}
Probability spaces are often denoted $(\Omega, \mathcal{F}, \mathbb{P})$.
\section{9/20/2018}
\subsection{Probability Spaces (review)}
A probability space, written $(\Omega, \mathcal{F}, \mathbb{P})$, is composed of a sample space ($\Omega$), the events ($\mathcal{F}$, a $\sigma$-algebra on $\Omega$), and a probability measure ($\mathbb{P}$).
\subsection{Measures}
Suppose $(E, \mathcal{E})$ is a measurable space.\\
\begin{definition}
We say that $\mu: \mathcal{E} \to \overline{\mathbb{R}}_+$ ($\overline{\mathbb{R}}_+$ is the nonnegative reals together with $+\infty$) is a measure on $(E, \mathcal{E})$ if
\begin{itemize}
\item $\mu(\emptyset) = 0$
\item If $A_1, A_2, \dots \in \mathcal{E}$ are pairwise disjoint then $\mu(\bigcup^\infty_{i = 1} A_i) = \sum^\infty_{i=1} \mu(A_i)$.
\end{itemize}
\end{definition}
Examples:
\begin{enumerate}
\item The Dirac Measure. $x \in E$, $\delta_x(A) =
\begin{cases}
1 & x \in A\\
0 & x \not \in A
\end{cases}$
\item The Counting Measure. $D \subset E$, $\mu(A) = $ \# of points in $A \cap D$. If $D$ is countable, then $\mu(A) = \sum_{x \in D} \delta_x(A)$.
\item Discrete Measure. $D \subset E$ countable, $m(x) \geq 0$ for every $x \in D$. $\mu(A) = \sum_{x \in D} m(x)\delta_x(A)$.
\item The Uniform Measure on $\{1, 2, \dots, n\}$. The discrete measure with $m(x) = \frac{1}{n}$.
\item The Lebesgue Measure. $Leb(A) =$ length of $A$ where $A$ is an interval.
\end{enumerate}
\subsection{Properties of Measures}
Let $(E, \mathcal{E}, \mu)$ be a measure space.\\
\begin{enumerate}
\item Finite Additivity. $A \cap B = \emptyset \implies \mu (A \cup B) = \mu(A) + \mu(B)$.
\item Monotonicity. If $A \subseteq B$ then $\mu(A) \leq \mu(B)$. Note that this is clear because $B = A \cup (B \setminus A)$.
\item Sequential Continuity. If $A_1 \subseteq A_2 \subseteq \dots$ with $\bigcup_{n} A_n = A$, then $\mu(A_n)$ converges to $\mu(A)$ from below.
\item Boole's Inequality / Union Bound. $A_1, A_2, \dots \in \mathcal{E}$, $\mu(\bigcup^\infty_{i = 1} A_i) \leq \sum_{i = 1}^\infty \mu(A_i)$. Note: We can prove this by creating a sequence of disjoint subsets of $\bigcup_{i = 1}^\infty A_i$, then using sequential continuity.
\item If $c > 0$, then $c\mu$ is also a measure, $(c\mu)(A) = c \cdot \mu(A)$.
\item If $\mu_1$, $\mu_2$ are measures then $\mu_1 + \mu_2$ is a measure.
\end{enumerate}
\begin{definition}
If $\mu(E) < \infty$ then $\mu$ is called a finite measure.
\end{definition}
\begin{definition}
We say that a measure $\mu$ is $\sigma$-finite if there exists a measurable countable partition $\{E_n\}$ of $E$ such that $\mu(E_n) < \infty$ for all $n$. Ex: $Leb$ is $\sigma$-finite (take $E_n = [n, n+1)$ for $n \in \Z$).
\end{definition}
\subsection{Specification of Measures}
\begin{theorem}
Let $(E, \mathcal{E})$ be a measurable space. Let $\mu$ and $\nu$ be two measures on $(E, \mathcal{E})$ with $\mu(E) = \nu(E) < \infty$. If $\mu$ and $\nu$ agree on a collection of subsets that is closed under intersections and that generates $\mathcal{E}$, then $\mu = \nu$.
\end{theorem}
\begin{corollary}
If $\mu$ and $\nu$ are two probability measures on $\mathbb{R}$ with the same cumulative distribution function, then $\mu = \nu$.
\end{corollary}
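Indeed, the collection $\{(-\infty, x] \mid x \in \mathbb{R}\}$ is closed under intersections ($(-\infty, x] \cap (-\infty, y] = (-\infty, \min\{x, y\}]$) and generates $\mathcal{B}(\mathbb{R})$, so the theorem applies.\\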
\begin{definition}
The cumulative distribution function of $\mu$ at a point $x$ is $\mu((-\infty, x])$.
\end{definition}
Assume that $\{x\} \in \mathcal{E}$ if $x \in E$. This is true of all standard measurable spaces.\\
\begin{definition}
$x$ is an atom of $\mu$ if $\mu(\{x\}) > 0$.
\end{definition}
\begin{definition}
$\mu$ is purely atomic if $\exists D \subset E$ such that $\forall x \in D$, $\mu(\{x\}) > 0$ and $\mu(E \setminus D) = 0$.
\end{definition}
\begin{definition}
$\mu$ is diffuse if it has no atoms. Ex: $Leb$.
\end{definition}
\begin{lemma}
If $\mu$ is a $\sigma$-finite measure on $(E, \mathcal{E})$ then we can write $\mu = \lambda + \nu$ where $\lambda$ is diffuse and $\nu$ is purely atomic.
\end{lemma}
\subsection{Completeness and Negligible Sets}
\begin{definition}
A measurable set $A$ is negligible if $\mu(A) = 0$. An arbitrary subset of $E$ is negligible if it is contained in a measurable set that is negligible.
\end{definition}
\begin{definition}
A measure space is complete if every negligible set is measurable.
\end{definition}
\begin{lemma}
To make a measure space complete, take $\overline{\mathcal{E}} = \sigma(\mathcal{E} \cup \mathcal{N})$, where $\mathcal{N}$ is the collection of negligible sets. $\forall A \in \overline{\mathcal{E}}$, $A = B \cup N$ with $B \in \mathcal{E}, N \in \mathcal{N}$. Define $\overline{\mu}(A) = \mu(B)$. This is called the completion of the measure space, $(E, \overline{\mathcal{E}}, \overline{\mu})$. In the case of $(\mathbb{R}, \mathcal{B}_{\mathbb{R}}, Leb)$, the elements of $\overline{\mathcal{B}}_\mathbb{R}$ are called Lebesgue-measurable.\\
\end{lemma}
\subsection{Functions}
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a measure space. $\Omega = \{1, 2, 3, 4, 5, 6\}$, $X =$ outcome mod 5. $X(\omega) = \omega \bmod 5$, $X: \{1, 2, 3, 4, 5, 6\} \to \{0, 1, 2, 3, 4\}$. $\mathbb{P}(X = 1) = \mathbb{P}(\{\omega \in \Omega \mid X(\omega) = 1\})$.\\
(Recall the basic set theory facts about preimages: $f^{-1}$ commutes with unions, intersections, and complements.)\\
\begin{definition}
$(E, \mathcal{E})$ and $(F, \mathcal{F})$ are two measurable spaces. $f: E \to F$ is measurable relative to $\mathcal{E}$ and $\mathcal{F}$ if $f^{-1}(A) \in \mathcal{E}$ for every $A \in \mathcal{F}$.
\end{definition}
\begin{theorem}
$(E, \mathcal{E})$, $(F, \mathcal{F})$ measurable spaces. $f: E \to F$ is measurable relative to $\mathcal{E}$ and $\mathcal{F}$ if and only if there exists a collection $\mathcal{F}_0$ of subsets of $F$ such that $f^{-1}(B) \in \mathcal{E}$ $\forall B \in \mathcal{F_0}$, and $\mathcal{F}_0$ generates $\mathcal{F}$.
\end{theorem}
\begin{proof}
Left as an exercise
\end{proof}
\begin{theorem}
Let $(E, \mathcal{E})$, $(F, \mathcal{F})$, $(G, \mathcal{G})$ be measurable spaces. $f: E \to F$, $g: F \to G$. If $f$ and $g$ are measurable, then $g \circ f$ is measurable.
\end{theorem}
\section{9/25/2018}
\subsection{Measurable Functions}
Let $(E, \mathcal{E}, \mu)$ and $(F, \mathcal{F})$ be measure spaces. Let $f: E \to F$.\\
\begin{definition}
$f$ is measurable relative to $\mathcal{E}$ and $\mathcal{F}$ if $f^{-1}(B) \in \mathcal{E}$ $\forall B \in \mathcal{F}$.\\
\end{definition}
Generally, we will focus on measurable functions $f: E \to \mathbb{R}$ (Real Valued function), $f: E \to \overline{\R} = [-\infty, \infty]$ (Numerical function), or similar.\\
\begin{definition}
$f: E \to \R$ is $\mathcal{E}$-measurable if it is measurable relative to $\mathcal{E}$ and $\mathcal{B}_\R$.\\
\end{definition}
\begin{definition}
If $E$ is a topological space and $\mathcal{E}$ is the Borel $\sigma$-algebra, then we simply say that $f$ is a Borel function.
\end{definition}
\begin{lemma}
$f: E \to \R$ is $\mathcal{E}$-measurable, if and only if $f^{-1}((-\infty, r]) \in \mathcal{E}$ for all $r \in \R$.\\
\end{lemma}
\begin{proof}
From HW1: $\sigma(\{(-\infty, r] \mid r \in \R\}) = \mathcal{B}(\R)$. The result then follows from the claim stated last time about preimages of a generating collection.\\
\end{proof}
\begin{definition}
$f^+:= \max\{f, 0\}$, $f^-:= -\min\{f, 0\}$. Note that $f = f^+ - f^-$.\\
\end{definition}
\begin{lemma}
$f$ is $\mathcal{E}$-measurable if and only if $f^+$ and $f^-$ are $\mathcal{E}$-measurable.
\end{lemma}
\begin{proof}
Left as an exercise
\end{proof}
\begin{definition}
An indicator function is of the form
\[\mathbbm{1}_A(x) =
\begin{cases}
1 & x \in A\\
0 & x \not \in A
\end{cases}\]
\end{definition}
Check: $\mathbbm{1}_A$ is $\mathcal{E}$-measurable if and only if $A \in \mathcal{E}$.\\
\begin{definition}
A function is simple if $f = \sum_{i = 1}^n a_i \mathbbm{1}_{A_i}$, $a_i \in \R$, where $A_1, A_2, \dots, A_n$ are $\mathcal{E}$-measurable.\\
\end{definition}
\begin{definition}
The canonical form of a simple function is $f = \sum_{j = 1}^m b_j \mathbbm{1}_{B_j}$ where $\{B_j\}$ is a measurable partition of $E$.\\
\end{definition}
\begin{fact}
Conversely, if a function is $\mathcal{E}$-measurable and takes only finitely many real values, then it is a simple function.
\end{fact}
\begin{fact}
If $f$ and $g$ are simple, then so are $f+g$, $f-g$, $fg$, $f/g$ (when $g$ is nonvanishing), $\max\{f, g\}$, $\min\{f, g\}$.\\
\end{fact}
\begin{theorem}
The class of measurable functions is closed under limits.\\
Let $\{f_n\}$ be a sequence of $\mathcal{E}$-measurable functions then $\inf f_n$, $\sup f_n$, $\liminf f_n$, and $\limsup f_n$, defined pointwise, are $\mathcal{E}$-measurable.
\end{theorem}
\begin{proof}
For $f = \sup f_n$, we want to show that $f^{-1}(-\infty, r] \in \mathcal{E}$. Since $f(x) \leq r \iff f_n(x) \leq r$ $\forall n$, we have
\[f^{-1}(-\infty, r] = \bigcap^\infty_{n=1} f^{-1}_n(-\infty, r]\]
But we know that $f^{-1}_n(-\infty, r] \in \mathcal{E}$ and since this is a countable intersection, $f^{-1}(-\infty, r] \in \mathcal{E}$.
\end{proof}
\subsection{Approximation of Measurable Functions}
Let $f: E \to \overline{\R}_+$. We can approximate $f$ from below by simple functions: subdivide the range $[0, n)$ into intervals of length $\frac{1}{2^n}$; on the preimage of each interval, let $f_n$ take the value of the interval's left endpoint.\\
Formally, let $d_n(x) = \sum_{k = 1}^{n2^n} \frac{k-1}{2^n} \mathbbm{1}_{[\frac{k-1}{2^n}, \frac{k}{2^n})}(x) + n \mathbbm{1}_{[n, \infty]}(x)$. Then $f_n = d_n \circ f$.\\
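For instance, with $n = 1$,
\[d_1 = 0 \cdot \mathbbm{1}_{[0, \frac{1}{2})} + \frac{1}{2}\, \mathbbm{1}_{[\frac{1}{2}, 1)} + 1 \cdot \mathbbm{1}_{[1, \infty]}\]
so $f_1 = d_1 \circ f$ rounds the values of $f$ down to the grid $\{0, \frac{1}{2}\}$ and truncates them at $1$.\\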
\begin{theorem}
A positive function $f$ is $\mathcal{E}$-measurable if and only if it is the increasing limit of positive simple functions.\\
\end{theorem}
Note: Look up monotone classes of functions.
\subsection{Integration}
Suppose $(E, \mathcal{E}, \mu)$ is a measure space and $f: E \to \R$. We want to define $\int f \, d\mu$, the integral of $f$ relative to the measure $\mu$. We denote this $\mu f = \mu(f) = \int f \, d\mu = \int \mu(dx)f(x) = \int_E \mu(dx) f(x)$.\\
We will first define the integral for simple functions, then extend the definition by taking limits.\\
\begin{definition}
If $f$ is a simple function, $f = \sum_{i = 1}^n a_i \mathbbm{1}_{A_i}$, where $\{A_i\}$ is a partition of $E$ we define the integral as
\[\int f d\mu = \sum_{i=1}^n a_i \mu(A_i)\]
Now, suppose that $f$ is a measurable positive function, and let $f_n = d_n \circ f$. Then $\int f d\mu := \lim_{n \to \infty} \int f_n \, d\mu$.\\
Finally, if $f = f^+ - f^-$, then $\int f d\mu = \int f^+ d\mu - \int f^- d\mu$, provided that at least one of the two integrals on the right is finite. Otherwise, $\int f d\mu$ is undefined.
\end{definition}
\section{9/27/2018}
\subsection{The Compactification of $\R$}
$\mathcal{B}(\overline{\R}) = \sigma(\mathcal{B}(\R) \cup \{\{+ \infty\}, \{-\infty\}\})$.\\
$\overline{\R}$ has the topology where $A \subset \overline{\R}$ is open if $A \cap \R$ is open in $\R$ and, when $+\infty \in A$ (resp. $-\infty \in A$), $A$ contains an interval $(a, \infty]$ (resp. $[-\infty, a)$).\\
\subsection{Last Time}
For $(E, \mathcal{E}, \mu)$, $f: E \to \overline{\R}$, we defined $\mu(f) = \int f d\mu$.
\subsection{Continuing}
\begin{definition}
$f$ is integrable if $\int f d\mu$ exists and is finite.\\
\end{definition}
Note: $f$ is integrable $\iff$ $\int |f| d\mu < \infty$. Notice $|f| = f^+ + f^-$.\\
\begin{exercise}
Every integrable function is real-valued almost everywhere (a.e.).\\
\end{exercise}
\begin{definition}
A statement holds almost everywhere (for almost every $x \in E$) if it holds for all $x$ except for $x$ in a negligible set. Denoted $\mu$-a.e. or (a.e.). For probability measures, we say "almost surely".
\end{definition}
Properties of Integrals:\\
$a, b \in \R^+$, $f, g \in \mathcal{E}_+$ ($\mathcal{E}$-measurable positive functions).
\begin{enumerate}
\item Positivity: $\mu(f) \geq 0$. $\mu(f) = 0 \implies f = 0$ a.e.
\item Linearity: $\mu(af + bg) = a\mu(f) + b\mu(g)$.
\item Monotonicity: If $f \leq g$ a.e., then $\mu(f) \leq \mu(g)$.
\end{enumerate}
Monotone Convergence Theorem: If $f_n \to f$ from below, then $\mu(f_n) \to \mu(f)$ from below.\\
Examples of integrals:
\begin{enumerate}
\item Dirac measure: using the Dirac delta $\delta_{x_0} (f) = f(x_0)$.\\
\item $\mu = \sum_{x \in D} m(x)\delta_x$, $D \subset E$, then $\mu(f) = \sum_{x \in D} m(x) f(x)$. Note that if $E$ is countable, then every measure is of this form ($m(x) = \mu(\{x\})$).
\end{enumerate}
Note that if $E$ is a vector space, we can think of $\mu(f)$ as the inner product $\langle \mu, f \rangle$.\\
\subsection{Lebesgue Integration}
$E \subset \R^d$ a Borel set; $\mathcal{E} = \mathcal{B}(E)$. $Leb_{E} :=$ restriction of $Leb$ to $(E, \mathcal{E})$.\\
$Leb_E(f) = \int_E Leb_E(dx)f(x) = \int_E dxf(x) = \int_E f(x)dx$\\
If the Riemann integral of $f$ exists, then the $Leb$ integral of $f$ exists as well and the two agree. However, the converse is false: if $E = [0, 1]$ and $f = \mathbbm{1}_{\Q \cap [0,1]}$, then $Leb(f) = 0$ (since $\Q$ is countable, hence Lebesgue-negligible), while the Riemann integral does not exist (every lower sum is $0$ and every upper sum is $1$).\\
\subsection{Integration Over a Set}
$A \in \mathcal{E}$, $f \in \mathcal{E}$. Then $f \mathbbm{1}_A \in \mathcal{E}$ and so $\mu(f \mathbbm{1}_A) = \int f \mathbbm{1}_A d\mu = \int_A f d\mu$.
\subsection{Monotone Convergence Theorem}
\begin{theorem}
Let $\{f_n\}$ be a monotone increasing sequence of measurable positive functions. Then $\mu(\lim_{n \to \infty} f_n) = \lim_{n \to \infty} \mu(f_n)$.
\end{theorem}
\begin{proof}
$f := \lim f_n$ is well defined, so $\mu(f)$ is well defined. For all $n$, $f_n \leq f$, so $\mu(f_n) \leq \mu(f)$ and $\lim_n \mu(f_n) \leq \mu(f)$.\\
For the other direction, we want to show that $\lim_{n \to \infty} \mu(f_n) \geq \mu(d_k \circ f)$ $\forall k$; letting $k \to \infty$ then gives $\lim_{n \to \infty} \mu(f_n) \geq \mu(f)$, since $\mu(d_k \circ f) \to \mu(f)$ by the definition of the integral.\\
\end{proof}
\subsection{The Insensitivity of the Integral W.R.T Negligible Sets}
\begin{lemma}
If $A \in \mathcal{E}$ is negligible, then $\int_A f d\mu = 0$ for all measurable $f$.\\
If $f = g$ a.e., then $\mu(f) = \mu(g)$.\\
If $f \in \mathcal{E}_+$, $\mu(f) = 0$ then $f = 0$ a.e.
\end{lemma}
\subsection{Fatou's Lemma}
\begin{theorem}{Fatou's Lemma}
Let $(f_n)_{n \geq 1}$ be a sequence of functions in $\mathcal{E}_+$. Then $\mu(\liminf f_n) \leq \liminf \mu(f_n)$. This follows from MCT (HW).
\end{theorem}
\subsection{Dominated Convergence Theorem}
\begin{theorem}{Dominated Convergence Theorem}
If $f_n$ is a sequence of functions and there exists a function $g$ such that (a) $|f_n| \leq g$ $\forall n \geq 1$, and (b) $g$ is integrable, then $f:= \lim f_n$ (if it exists) is integrable and $\mu(f) = \lim \mu(f_n)$. This follows from Fatou's (HW).\\
Terminology: $g$ dominates $f_n$ for every $n$.
\end{theorem}
\begin{corollary}{Bounded Convergence Theorem}
Suppose that $\mu$ is a finite measure, and $|f_n| \leq c < \infty$ ($c$ a constant), and $f:= \lim f_n$ exists. Then $\mu(f) = \lim \mu(f_n)$.
\end{corollary}
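A standard example (not from lecture) showing that the domination hypothesis in the Dominated Convergence Theorem cannot be dropped: on $([0,1], \mathcal{B}([0,1]), Leb)$, let $f_n = n \mathbbm{1}_{(0, \frac{1}{n})}$. Then $f_n \to 0$ pointwise, but $\mu(f_n) = n \cdot \frac{1}{n} = 1$ for every $n$, so $\lim \mu(f_n) = 1 \neq 0 = \mu(\lim f_n)$. Here $\sup_n f_n$ is not integrable.\\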
\subsection{A note on these theorems...}
For the majority of these theorems, there exists an almost everywhere version (e.g., it suffices that $f_n \to f$ a.e.).\\
\subsection{Characterization of the Integral}
$f \mapsto \mu(f)$, maps from $\mathcal{E}_+$ into $\overline{\R}_+$.\\
\begin{theorem}
Let $(E, \mathcal{E})$ be a measurable space. $L: \mathcal{E}_+ \to \overline{\R}_+$. Then there exists a unique measure $\mu$ on $(E, \mathcal{E})$ such that $L(f) = \mu(f)$ if and only if
\begin{itemize}
\item $f = 0 \implies L(f) = 0$.
\item $L(af + bg) = aL(f) + bL(g)$.
\item If $f_n \to f$ from below, then $L(f_n) \to L(f)$ from below.
\end{itemize}
\end{theorem}
\subsection{Product Spaces}
\begin{itemize}
\item $(E, \mathcal{E})$, $(F, \mathcal{F})$: $(E \times F, \mathcal{E} \otimes \mathcal{F})$.
\item $(E, \mathcal{E}, \mu)$, $(F, \mathcal{F}, \nu)$: $(E \times F, \mathcal{E} \otimes \mathcal{F}, \mu \times \nu)$.
\item $(\mu \times \nu)(A \times B) = \mu(A) \nu(B)$.
\end{itemize}
\begin{theorem}
Fubini: Suppose that $f: E \times F \to \overline{\R}$ is such that $\iint_{E \times F} |f| \, d(\mu \times \nu) < \infty$. Then $\iint_{E \times F} f \, d(\mu \times \nu) = \int_F (\int_E f(x, y) \mu(dx)) \nu(dy) = \int_E (\int_F f(x, y) \nu(dy)) \mu(dx)$.\\
\end{theorem}
\begin{theorem}
Tonelli: If $f \geq 0$ then the same conclusions hold.
\end{theorem}
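A classical example (not from lecture) showing that the integrability hypothesis in Fubini cannot be dropped: on $(0,1)^2$ with the Lebesgue measure, let $f(x, y) = \frac{x^2 - y^2}{(x^2 + y^2)^2}$. Since $\int_0^1 f(x, y) \, dy = \frac{1}{1 + x^2}$, the two iterated integrals are
\[\int_0^1 \int_0^1 f(x, y)\, dy\, dx = \frac{\pi}{4}, \qquad \int_0^1 \int_0^1 f(x, y)\, dx\, dy = -\frac{\pi}{4}\]
(the second by the antisymmetry $f(y, x) = -f(x, y)$), so they disagree; of course $\iint |f| \, d(Leb \times Leb) = \infty$ here.\\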
\subsection{Absolute Continuity of Measures}
\begin{definition}
$(E, \mathcal{E})$ with measures $\mu$ and $\nu$. We say that $\mu$ is absolutely continuous with regard to $\nu$ if $\forall A \in \mathcal{E}$, $\nu(A) = 0 \implies \mu(A) = 0$. Denote this by $\mu \ll \nu$.
\end{definition}
Example: If a measure on $\R$ has a density (e.g., $\mu(dx) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}} dx$, the standard Gaussian measure), then it is absolutely continuous with regard to the Lebesgue measure.\\
Example: Discrete distributions with the same support.\\
\begin{theorem}
Suppose that $\mu$ is $\sigma$-finite, and that $\nu << \mu$. Then there exists a positive $\mathcal{E}$-measurable function $p$ such that $\int_E \nu(dx) f(x) = \int_E \mu(dx) p(x)f(x)$, $\forall f \in \mathcal{E}_+$.\\
Moreover, $p$ is unique up to equivalence (if this holds for $p'$, then $p = p'$ a.e.).\\
\end{theorem}
\begin{definition}
This function $p$ is called the Radon-Nikodym derivative of $\nu$ with regard to $\mu$. We write this as $p(x) = \frac{\nu(dx)}{\mu(dx)}(x)$, or $p = \frac{d\nu}{d\mu}$.\\
\end{definition}
If we care about $\nu$ but it is difficult to work with directly, and $\nu \ll \mu$, then we can perform calculations using the nicer measure $\mu$.\\
\begin{definition}
$\nu$ is singular with regard to $\mu$ if there exists some set $D \in \mathcal{E}$ such that $\mu(D) = 0$ and $\nu(E \setminus D) = 0$.
\end{definition}
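For example, $\delta_0$ is singular with regard to $Leb$: take $D = \{0\}$, so that $Leb(D) = 0$ and $\delta_0(\R \setminus D) = 0$.\\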
\section{10/2/2018}
\subsection{Products of Measure Spaces}
A product of measure spaces
\[\bigotimes_{i = 1}^n (E_i, \mathcal{E}_i, \mu_i)\]
can be seen as $n$ mutually independent random variables (i.e. $n$ coin tosses).\\
How do we define a countably infinite product of measure spaces ($\bigotimes_{i = 1}^\infty (E_i, \mathcal{E}_i, \mu_i)$)?\\
Let $\mathcal{R}$ be the collection of all finite dimension measurable rectangles. That is, all sets of the form $\{ x \mid x_1 \in B_1, \dots, x_n \in B_n, x_{n + 1} \in \R, x_{n + 2} \in \R, \dots \}$ where $n \in \N$ and $B_i \in \mathcal{B}(\R)$. Then $\mathcal{B}_C = \sigma(\mathcal{R})$. We define the measure as
\[\mu(\{x \mid x_1 \in B_1, \dots, x_n \in B_n, \dots\}) = \mu_1(B_1)\mu_2(B_2)\dots\mu_n(B_n)\]
\begin{theorem}
Kolmogorov's Extension Theorem: Suppose $\{\mu_n\}_{n \geq 1}$ is a consistent sequence of probability measures, where $\mu_n$ is a probability measure on $(\R^n, \mathcal{B}_{\R^n})$. Consistent means
\[\mu_{n + 1}(\{x_1 \in B_1, x_2 \in B_2, \dots, x_n \in B_n, x_{n+1} \in \R\}) = \mu_n(\{x_1 \in B_1, x_2 \in B_2, \dots, x_n \in B_n \})\]
For all $n \in \N$ and all $B_1, B_2, \dots, B_n \in \mathcal{B}(\R)$. Then there exists a unique probability measure $\mathbb{P}$ on $(\R^\N, \mathcal{B}_C)$ such that $\mathbb{P}(\{w \mid w_1 \in B_1, \dots, w_n \in B_n\}) = \mu_n(B_1 \times B_2 \times \dots \times B_n)$.
\end{theorem}
\subsection{Probability}
$(\Omega, \mathcal{F}, \mathbb{P})$ a probability space. $X: \Omega \to \R$ a random variable. The distribution of $X$ is the probability measure $\mu$ on $(\R, \mathcal{B}(\R))$ given by\\
$\mu(A) = \mathbb{P}(X \in A) = \mathbb{P}(\{\omega \in \Omega \mid X(\omega) \in A \}) = \mathbb{P}(X^{-1}A)$.\\
Reminder: Refresh yourself on the definitions of the common probability distributions
\begin{itemize}
\item Binomial
\item Geometric
\item Poisson
\item Exponential
\item Gaussian
\end{itemize}
\subsection{Expected Value}
\begin{definition}
$\mathbb{E}[X] = \int_\R x \, \mu(dx) = \int_\Omega X(\omega) \, \mathbb{P}(d\omega)$
\end{definition}
\subsection{Weak Law of Large Numbers}
\begin{theorem}
Suppose that $X_1, X_2, \dots$ are i.i.d. random variables with $\mathbb{E}[|X_1|] < \infty$. Let $m = \mathbb{E}[X_1]$. Then $\forall \epsilon > 0$, $\lim_{n \to \infty}\mathbb{P}\Big(|\frac{X_1 + \dots + X_n}{n} - m| \geq \epsilon \Big) = 0$.
\end{theorem}
\subsection{Markov's Inequality}
\begin{theorem}
Let $X$ be a non-negative random variable ($X: \Omega \to \R_+$), and let $\lambda > 0$. Then $\mathbb{P}(X \geq \lambda) \leq \frac{\mathbb{E}[X]}{\lambda}$.
\end{theorem}
\begin{proof}
$\mathbb{P}(X \geq \lambda) = \mathbb{E}[\mathbbm{1}_{\{X \geq \lambda\}}] \leq \mathbb{E}[\frac{X}{\lambda}] = \frac{\mathbb{E}[X]}{\lambda}$, since $\mathbbm{1}_{\{X \geq \lambda\}} \leq \frac{X}{\lambda}$ when $X \geq 0$.
\end{proof}
\subsection{Chebyshev's Inequality}
\begin{theorem}
Let $X$ be a random variable, such that $\mathbb{E}[X^2] < \infty$. Then
\[\mathbb{P}(|X - \mathbb{E}[X]| \geq \lambda)\leq \frac{Var(X)}{\lambda^2} \]
where $Var(X) = \mathbb{E}[(X - \mathbb{E}[X])^2]$.
\end{theorem}
\begin{proof}
$\mathbb{P}(|X - \mathbb{E}[X]| \geq \lambda) = \mathbb{P}((X - \mathbb{E}[X])^2 \geq \lambda^2)$. Now apply Markov's Inequality to the non-negative random variable $(X - \mathbb{E}[X])^2$.
\end{proof}
\subsection{General Markov}
\begin{theorem}
Let $X$ be a random variable and $f: \R \to \R_+$ an increasing function.\\
Then $\mathbb{P}(X \geq \lambda) \leq \mathbb{P}(f(X) \geq f(\lambda)) \leq \frac{\mathbb{E}[f(X)]}{f(\lambda)}$, provided $f(\lambda) > 0$ (the first step is an equality when $f$ is strictly increasing).\\
\end{theorem}
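For example, taking $f(x) = e^{tx}$ with $t > 0$ gives $\mathbb{P}(X \geq \lambda) \leq e^{-t\lambda}\, \mathbb{E}[e^{tX}]$; optimizing over $t$ is the idea behind the Chernoff bound below.\\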
\subsection{Chernoff Bound}
\begin{theorem}
Suppose $X_1, X_2, \dots, X_n$ are independent Bernoulli random variables with $\mathbb{E}[X_i] = p_i$. Let $S_n = \sum_{i = 1}^n X_i$, and $\mu = \sum_{i = 1}^n p_i$. Then
\[\mathbb{P}(S_n \geq \mu + \lambda) \leq e^{-\frac{2\lambda^2}{n}}\]
and\[
\mathbb{P}(S_n \leq \mu - \lambda) \leq e^{-\frac{2\lambda^2}{n}}\]
In general, Chernoff provides a much tighter bound than Chebyshev.\\
\end{theorem}
\begin{proof}
Homework.
\end{proof}
\subsection{Almost Sure Convergence}
$Y_1, Y_2, \dots$ on $(\Omega, \mathcal{F}, \mathbb{P})$.\\
\begin{definition}
$\lim_{n \to \infty} Y_n = 0$ almost surely $\iff \mathbb{P}(\{\omega \mid \lim_{n \to \infty} Y_n(\omega) = 0 \}) = 1$.
\end{definition}
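An example (not from lecture) separating this from convergence in probability: let $Y_1, Y_2, \dots$ be independent with $\mathbb{P}(Y_n = 1) = \frac{1}{n}$ and $\mathbb{P}(Y_n = 0) = 1 - \frac{1}{n}$. Then $\mathbb{P}(|Y_n| \geq \epsilon) \to 0$ for every $\epsilon > 0$, but since $\sum_n \frac{1}{n} = \infty$ and the events $\{Y_n = 1\}$ are independent, Borel-Cantelli II (next lecture) shows that $Y_n = 1$ infinitely often almost surely, so $Y_n \not\to 0$ almost surely.\\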
\subsection{Strong Law of Large Numbers}
\begin{theorem}
Under the same conditions as before, ($X_1, X_2, \dots$ I.I.D., $\mathbb{E}[|X_1|] < \infty$, $m = \mathbb{E}[X_1]$) we have that $\frac{S_n}{n} \to m$ almost surely as $n \to \infty$.
\end{theorem}
\section{10/4/18}
\subsection{Strong Law of Large Numbers}
\begin{theorem}
Let $X_1, X_2, \dots$ be I.I.D. random variables with $\mathbb{E}[|X_1|] < \infty$ and $m = \mathbb{E}[X_1]$. Then $\frac{1}{n}\sum_{i=1}^n X_i \to m$ almost surely.\\
In other words
\[\mathbb{P}(\{\omega \mid \lim_{n \to \infty} \frac{1}{n}\sum_{i = 1}^n X_i(\omega) = m\}) = 1\]
Equivalently $\frac{1}{n} \sum_{i = 1}^n(X_i - m) \to 0$ almost surely, so without loss of generality we may assume $m=0$.
\end{theorem}
\subsection{Borel-Cantelli Lemmas (Sufficient Conditions for Almost Sure Statements)}
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $A_i \in \mathcal{F}$ for $i \geq 1$. Define the following event
\[\limsup A_n = \bigcap^\infty_{j = 1} \bigcup^\infty_{i=j} A_i = \{\omega \in \Omega \mid \omega \in A_n \text{ for infinitely many } n \}\]
\[\liminf A_n = \bigcup^\infty_{j = 1} \bigcap^\infty_{i = j} A_i = \{\omega \in \Omega \mid \omega \in A_n \text{ for all but finitely many } n\}\]
We can think of unions and intersections of sets representing events as follows:
\[\bigcap \leftrightarrow \forall, \qquad \bigcup \leftrightarrow \exists\]
This terminology comes from $\limsup_{n \to \infty} \mathbbm{1}_{A_n}(\omega) = \mathbbm{1}_{\limsup A_n}(\omega)$.\\
Fatou: $\mathbb{P}(\liminf A_n) \leq \liminf \mathbb{P}(A_n) \leq \limsup \mathbb{P}(A_n) \leq \mathbb{P}(\limsup A_n)$.\\
\begin{lemma}
Borel-Cantelli I: If $\sum_{n = 1}^\infty \mathbb{P}(A_n) < \infty$, then $\mathbb{P}(\limsup A_n) = 0$. That is, almost surely only finitely many of the events happen.
\end{lemma}
\begin{proof}
Let $\epsilon > 0$ be arbitrary. Since $\sum_{n = 1}^\infty \mathbb{P}(A_n) < \infty$, there exists some $N$ such that $\sum_{n = N}^\infty \mathbb{P}(A_n) \leq \epsilon$. Then we have
\[0 \leq \mathbb{P}(\limsup A_n) = \mathbb{P}(\bigcap_{j=1}^\infty \bigcup_{i = j}^\infty A_i) \leq \mathbb{P} (\bigcup_{i = N}^\infty A_i) \leq \sum_{i = N}^\infty \mathbb{P}(A_i) \leq \epsilon\]
Since $\epsilon > 0$ was arbitrary, $\mathbb{P}(\limsup A_n) = 0$.
\end{proof}
\begin{lemma}
Borel-Cantelli II: If $\sum_{n=1}^\infty \mathbb{P}(A_n) = +\infty$ and $A_n$ are mutually independent then $\mathbb{P}(\limsup A_n) = 1$.\\
That is, almost surely infinitely many of $A_n$ will occur.
\end{lemma}
\begin{proof}
Want to show $\mathbb{P}(\limsup A_n) = 1$. That is equivalent to saying that $\mathbb{P}((\limsup A_n)^C) = 0$. Now,
\[(\limsup A_n)^C = (\bigcap_{j = 1}^\infty \bigcup_{i = j}^\infty A_i)^C = \bigcup_{j = 1}^\infty \bigcap_{i = j}^\infty A_i^c = \liminf A_n^c\]
We need to show that $\mathbb{P}(\bigcap_{i = j}^\infty A_i^C) = 0$. Fix a large number $M$; then $\mathbb{P}(\bigcap_{i=j}^\infty A_i^C) \leq \mathbb{P}(\bigcap_{i = j}^M A_i^C) = \prod_{i = j}^M \mathbb{P}(A_i^C) = \prod_{i = j}^M(1 - \mathbb{P}(A_i)) \leq \prod_{i = j}^M e^{-\mathbb{P}(A_i)} = e^{-\sum_{i = j}^M \mathbb{P}(A_i)}$. Now let $M \to \infty$: since $\sum_i \mathbb{P}(A_i) = \infty$, the right-hand side goes to $0$.\\
\end{proof}
\subsection{Proof of SLLN}
\begin{theorem}
Let $X_1, X_2, \dots$ be I.I.D. random variables with $\mathbb{E}[|X_1|] < \infty$ and $\mathbb{E}[X_1] = 0$. Then $\frac{1}{n} \sum_{i = 1}^n X_i \to 0$ almost surely.
\end{theorem}
Step 1: Kolmogorov's inequality\\
Step 2: Apply the inequality to show that summable variances imply a.s. convergence.\\
Step 3: Kronecker's lemma and SLLN with summable variances condition\\
Step 4: Full SLLN proof using truncation argument\\
\begin{theorem}
Kolmogorov's Inequality: Let $X_1, X_2, \dots, X_n$ be mutually independent. Assume $\mathbb{E}[X_i] = 0$ and $\sigma_i^2 = \mathbb{E}[X_i^2] < \infty$. Then for any $\lambda > 0$ we have $\mathbb{P}(\max_{1 \leq k \leq n} |X_1 + X_2 + \dots + X_k| \geq \lambda) \leq \frac{\sum_{i = 1}^n \sigma_i^2}{\lambda^2}$. Note: this is a strengthening of Chebyshev, which bounds only the $k = n$ term.
\end{theorem}
\begin{proof}
Let $S_k = X_1 + X_2 + \dots + X_k$. Define $A = \{\omega \mid \max_{1 \leq i \leq n} |S_i| \geq \lambda\}$ and $A_k = \{\omega \mid \max_{1 \leq i \leq k-1} |S_i| < \lambda, \ |S_k| \geq \lambda\}$ (the partial sums first exceed $\lambda$ at time $k$).\\
Notice that $A = \bigcup_{k = 1}^n A_k$, $\mathbbm{1}_A = \sum_{k=1}^n \mathbbm{1}_{A_k}$, $A_k \cap A_l = \emptyset$ for $k \neq l$.\\
\[\mathbb{P}(A) = \mathbb{E}\mathbbm{1}_A = \sum_{k = 1}^n \mathbb{E}\mathbbm{1}_{A_k} \leq \sum_{k = 1}^n \mathbb{E}\Big[\frac{S_k^2}{\lambda^2} \mathbbm{1}_{A_k}\Big] \leq \frac{1}{\lambda^2} \sum_{k=1}^n \Big(\mathbb{E}[S_k^2 \mathbbm{1}_{A_k}] + \mathbb{E}[(S_n - S_k)^2 \mathbbm{1}_{A_k}]\Big)\]
The first inequality holds because $|S_k| \geq \lambda$ on $A_k$. For the last expression, the cross term $\mathbb{E}[S_k(S_n - S_k)\mathbbm{1}_{A_k}]$ vanishes, since $S_n - S_k$ is independent of $S_k \mathbbm{1}_{A_k}$ and has mean $0$, so the $k$-th summand equals $\mathbb{E}[S_n^2 \mathbbm{1}_{A_k}]$. Hence
\[\mathbb{P}(A) \leq \frac{1}{\lambda^2} \sum_{k=1}^n \mathbb{E}[S_n^2 \mathbbm{1}_{A_k}] = \frac{1}{\lambda^2}\mathbb{E}[S_n^2 \mathbbm{1}_{A}] \leq \frac{\mathbb{E}[S_n^2]}{\lambda^2} = \frac{\sum_{i=1}^n \sigma_i^2}{\lambda^2}\]
\end{proof}
\section{10/16/2018}
\subsection{Last Time}
Last time we proved the central limit theorem.
\subsection{Characteristic Functions}
\begin{definition}
\[\Phi_X(t) := \mathbb{E}[e^{itX}] = \mathbb{E}[\cos(tX)] + i \mathbb{E}[\sin(tX)]\]
is the characteristic function of $X$, where $X$ is a random variable.\\
\end{definition}
Properties:
\begin{enumerate}
\item $\Phi_X(0) = 1$
\item $\Phi_{-X}(t) = \Phi_X(-t) = \overline{\Phi_X(t)}$
\item $|\Phi_X(t)| \leq 1$
\item $t \mapsto \Phi_X(t)$ is uniformly continuous on $\R$.
\item $\Phi_X$ is positive definite. That is $\forall n$ and $\forall t_1, t_2, \dots, t_n \in \R$, the matrix $\{M_{i, j} = \Phi_X(t_i - t_j)\}_{i, j = 1}^n$ is positive definite. That is, $\forall z_1, z_2, \dots, z_n \in \C$, $\sum_{i, j = 1}^n z_i \Phi_X(t_i - t_j)\overline{z_j} \geq 0$.
\end{enumerate}
Note: The distribution of $X$ is symmetric around $0$ if and only if $\Phi_X$ is real.\\
\begin{theorem}
Bochner's Theorem: Let $\Phi: \R \to \C$. Suppose that
\begin{itemize}
\item $\Phi(0) = 1$.
\item $t \mapsto \Phi(t)$ is continuous at $t = 0$.
\item $\Phi$ is positive definite.
\end{itemize}
Then $\Phi$ is the characteristic function of some random variable $X$. In other words, there exists a CDF $F$ such that $\Phi(t) = \int_{-\infty}^\infty e^{itx}dF(x)$.\\
\end{theorem}
Further Properties:
\begin{itemize}
\item $\Phi_{aX + b}(t) = \mathbb{E}[e^{it(aX + b)}] = e^{ibt} \Phi_X(at)$.
\item If $X_1$ and $X_2$ are independent, then $\Phi_{X_1 + X_2}(t) = \Phi_{X_1}(t) \cdot \Phi_{X_2}(t)$.
\item If $F' = f$ ($F$ a CDF), then $\Phi = \hat{f}$, the Fourier transform. That is, in this case, $\Phi(t) = \int_{-\infty}^\infty e^{itx}f(x)dx$.
\end{itemize}
\subsection{Characteristic Functions and Moments}
\begin{theorem}
Suppose that $\mathbb{E}[|X|^k] < \infty$. Then $\Phi_X \in C^k$ and $\Phi_X^{(k)}(t) = \mathbb{E}[(iX)^k e^{itX}]$.
\end{theorem}
Note: $\exists$ a random variable $X$ such that $\mathbb{E} |X| = \infty$, but $\Phi_X$ is differentiable at $t = 0$.\\
\begin{claim}
Taylor Approximation Around $t=0$: Assume that $\mathbb{E}[|X|^m] < \infty$. Then $\Phi_X(t) = \sum_{k = 0}^m \mathbb{E}[X^k] \cdot \frac{(it)^k}{k!} + o(t^m)$.\\
\end{claim}
\begin{claim}
Analyticity: Suppose that $\mathbb{E}[|X|^k] < \infty$ $\forall k$ and $R^{-1} := \limsup_{m \to \infty}(\frac{|\mathbb{E}[X^m]|}{m!})^\frac{1}{m} < \infty$, then $\Phi$ extends analytically to the strip $\{t + is \mid |s| < R\}$.
\end{claim}
Example: $\mathcal{N}(0, 1)$, $R = \infty$.\\
\subsection{Examples of Characteristic Functions}
\begin{itemize}
\item $X \sim Bernoulli(p)$ : $\Phi_X(t) = pe^{it} + (1-p)$.
\item $X \sim Bin(n, p)$ : $\Phi_X(t) = (pe^{it} + (1 - p))^n$.
\item $X \sim Rademacher$ : $\Phi_X(t) = \cos(t)$.
\item $X \sim Uni(a, b)$ : $\Phi_X(t) = \frac{e^{itb} - e^{ita}}{it(b-a)}$.
\item $X \sim \mathcal{N}(m, \sigma^2)$ : $\Phi_X(t) = e^{itm - \frac{\sigma^2 t^2}{2}}$.
\item $X \sim Exp(\lambda)$ : $\Phi_X(t) = \frac{\lambda}{\lambda - it}$.
\item $X \sim Cauchy$ : $\Phi_X(t) = e^{-|t|}$.
\end{itemize}
Want to compute $\Phi_X(t)$ with $X \sim \mathcal{N}(0, 1)$. Let $f_X$ be the density function of the standard normal distribution. We know $\Phi'_X(t) = \mathbb{E}[iX e^{itX}] = \int_{\R} - x\sin(tx) f_X(x)dx$ (the cosine term vanishes because $x \cos(tx) f_X(x)$ is odd) $= \int_{\R} \sin(tx) f'_X(x)dx$ (using $f'_X(x) = -x f_X(x)$) $= -t\int_{\R} \cos(tx)f_X(x)dx$ (integrating by parts) $= -t \Phi_X(t)$. So,
\[\Phi'_X(t) = -t\Phi_X(t); \Phi_X(0) = 1\]
This ODE problem has a unique solution, $\Phi_X(t) = e^{-\frac{t^2}{2}}$.
\subsection{Inversions}
\begin{theorem}
Levy's Inversion Theorem: Suppose that $X$ has CDF $F_X$ and characteristic function $\Phi_X$. For all real numbers $a < b$ and $t \neq 0$, let
\[\Psi_{a, b}(t) := \frac{1}{2\pi} \int_a^b e^{-itu} du = \frac{e^{-itb} - e^{-ita}}{-2\pi i t}\]
Then $\lim_{T \to \infty} \int_{-T}^T \Psi_{a, b}(t) \Phi_X(t)dt = \frac{1}{2}[F_X(b) + F_X(b-)] - \frac{1}{2}[F_X(a) + F_X(a-)]$.\\
In particular, if $a$ and $b$ are continuity points of $F_X$, then the limit is $F_X(b) - F_X(a)$.\\
Furthermore, if $\int_{\R} |\Phi_X(t)|dt < \infty$ then $X$ has the following bounded and continuous probability density function:
\[f_X(x) = \frac{1}{2\pi} \int_{-\infty}^\infty e^{-itx} \Phi_X(t)dt\]
\end{theorem}
This is a special case of the standard Fourier inversion. The formula here is an integrated version which holds also in the absence of a density.\\
\begin{corollary}
If $\Phi_X(t) = \Phi_Y(t)$ $\forall t \in \R$, then $X$ and $Y$ have the same distribution.
\end{corollary}
\begin{proof}
Proof left as an exercise.
\end{proof}
\section{10/18/2018}
\subsection{Last Time}
The "Law" of $X$, written $\mathcal{L}(X)$, is in one to one correspondence with $\Phi_X$.\\
\subsection{Levy's Continuity Theorem}
\begin{theorem}
Let $\{F_n\}_{n \geq 1}$ be a sequence of CDFs on $\R$ and let $\{\Phi_n\}_{n \geq 1}$ be the corresponding characteristic functions.\\
\begin{itemize}
\item If $F_n \to F$ then $\Phi_n(t) \to \Phi(t)$ for all $t \in \R$ (pointwise).
\item Suppose that for every $t \in \R$, the limit $\lim_{n \to \infty} \Phi_n(t)$ exists and denote it by $\Phi(t)$. Suppose that $\Phi$ is continuous at $t = 0$. Then $\exists$ CDF $F$ such that $\Phi(t) = \int e^{itx} dF(x)$ and $F_n \to F$.
\end{itemize}
\end{theorem}
\subsection{Central Limit Theorem Proof}
\begin{theorem}
Suppose $X_1, X_2, \dots$ are I.I.D. such that $\mathbb{E}[X_1^2] < \infty$. Assume $\mathbb{E}[X_1] = 0$ and $\mathbb{E}[X_1^2] = 1$. Let $S_n = X_1 + \dots + X_n$. Then $\frac{S_n}{\sqrt{n}} \to \mathcal{N}(0, 1)$ in distribution.\\
\end{theorem}
\begin{proof}
$\Phi_{X_1}(t) = \mathbb{E}[e^{itX_1}]$. As $t \to 0$,
\[\Phi_{X_1}(t) = 1 + \mathbb{E}[X_1] \frac{it}{1!} + \mathbb{E}[X_1^2] \frac{(it)^2}{2!} + o(t^2) = 1 - \frac{t^2}{2} + o(t^2)\]
where $f(t) = o(t^2)$ if $\lim_{t \to 0} \frac{f(t)}{t^2}=0$. Now,
\begin{align*}
&\Phi_{\frac{S_n}{\sqrt{n}}}(t)\\
=& \mathbb{E}[e^{it(\frac{S_n}{\sqrt{n}})}]\\
=& \mathbb{E}[e^{(\frac{it}{\sqrt{n}})(X_1 + \dots + X_n)}]\\
=& \mathbb{E}[e^{(\frac{it}{\sqrt{n}})X_1}] \cdot \dots \cdot \mathbb{E}[e^{(\frac{it}{\sqrt{n}})X_n}]\\
=&(\Phi_{X_1}(\frac{t}{\sqrt{n}}))^n\\
=& (1 - \frac{(\frac{t}{\sqrt{n}})^2}{2} + o((\frac{t}{\sqrt{n}})^2))^n\\
=& (1 - \frac{t^2}{2n} + o(\frac{t^2}{n}))^n\\
\to & e^{-\frac{t^2}{2}}
\end{align*}
\end{proof}
\subsection{Some Comments on Weak Convergence}
\begin{definition}
A sequence of probability measures $\{\mu_n\}_{n \geq 1}$ on $\R$ is tight if $\forall \epsilon > 0$, $\exists K < \infty$ such that $\forall n \geq 1$, $\mu_n([-K, K]) > 1 - \epsilon$. Intuitively, tightness says that mass doesn't "disappear to infinity" along the sequence.
\end{definition}
Counterexamples:
\begin{itemize}
\item $\mu_n = \delta_n$
\item $\mu_n$ : $\mathcal{N}(0, \sigma_n^2 = n)$
\item $Uni(-n, n)$.
\end{itemize}
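For instance, $\{\delta_n\}_{n \geq 1}$ is not tight: for any fixed $K$, $\delta_n([-K, K]) = 0$ as soon as $n > K$.\\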
\begin{definition}
Let $\{\mu_n\}_{n \geq 1}$ be a sequence of probability measures on a complete metric space $S$. This sequence is called "tight" if $\forall \epsilon > 0$, $\exists$ compact $K \subset S$ such that $\forall n \geq 1$, $\mu_n(K) > 1 - \epsilon$.
\end{definition}
\begin{claim}
If $F_n \to F$, then $\{F_n\}_{n \geq 1}$ is tight.
\end{claim}
\begin{proof}
Proof left as an exercise.\\
\end{proof}
\begin{theorem}
Helly: If $\{F_n\}_{n \geq 1}$ are tight, then $\exists \{n_k\}_{k \geq 1}$ (subsequence) and a CDF $F$ such that $F_{n_k} \to F$.\\
\end{theorem}
\begin{theorem}
Prokhorov: Let $S$ be a complete separable metric space. If $\{\mu_n\}_{n \geq 1}$ is tight, then $\exists$ a weakly convergent subsequence $\{\mu_{n_k}\}_{k \geq 1}$.\\
\end{theorem}
\subsection{General Strategy for Proving Weak Convergence (Prokhorov)}
\begin{enumerate}
\item Show that $\{\mu_n\}_{n \geq 1}$ is tight.
\item Identify the limit $\mu$.
\item Show uniqueness of the limit.
\end{enumerate}
\subsection{Stochastic Processes}
\begin{itemize}
\item Index set $\mathbb{T}$ representing time (EX: $\R$, $\R_+$, $[a, b]$, $\Z$, $\Z_+$, $\dots$)
\item State space $S$ (locally compact complete metric space)
\end{itemize}
\[t \mapsto X_t; X: \mathbb{T} \times \Omega \to S\]
Where $\forall t \in \mathbb{T}; X(t, \cdot): \Omega \to S$ is measurable.
\begin{enumerate}
\item $t \in \mathbb{T}$, $X_t: \Omega \to S$ (marginal)
\item $\omega \in \Omega$ fixed, $X(\cdot, \omega): \mathbb{T} \to S$ a (random) function.
\item $X: \Omega \to S^{\mathbb{T}} = \{\text{functions } \mathbb{T} \to S\}$. A random variable taking values in function space.
\end{enumerate}
Examples:
\begin{itemize}
\item $\{X_n\}_{n \geq 1}$ I.I.D.
\item $S_n = \sum_{i = 1}^n Y_i$ where $\{Y_i\}_{n \geq 1}$ are I.I.D.
\item Random walks on a graph.
\end{itemize}
\subsection{Markov Processes and Markov Chains}
\begin{definition}
Markov Process: A stochastic process where the distribution of the future given the past and present only depends on the present.\\
Discrete Time and Discrete Space: In this case, Markov Processes are called Markov Chains.\\
\end{definition}
\begin{definition}
$X_0, X_1, \dots$ is a Markov Chain on a discrete state space $S$ if $\mathbb{P}(X_n = x_n \mid X_{n-1} = x_{n-1}, \dots, X_0 = x_0) = \mathbb{P}(X_n = x_n \mid X_{n-1} = x_{n-1})$ $\forall n \geq 1$, $\forall x_0, x_1, \dots, x_n \in S$. That is, the distribution of $X_n$ depends on the past only through $X_{n-1}$.\\
\end{definition}
\begin{remark}
Notation: $X_0^n = (X_0, X_1, \dots, X_n)$.\\
\end{remark}
Notice that, in general,
\[\mathbb{P}(X_0^n = x_0^n) = \mathbb{P}(X_n = x_n \mid X_0^{n-1} = x_0^{n-1}) \cdot \mathbb{P}(X_0^{n-1} = x_0^{n-1}) = \dots = \mathbb{P}(X_0 = x_0) \prod_{i=1}^{n} \mathbb{P}(X_i = x_i \mid X_0^{i-1} = x_0^{i-1})\]
So for Markov chains,
\[\mathbb{P}(X_0^n = x_0^n) = \mathbb{P}(X_0 = x_0) \cdot \prod_{i=1}^{n}\mathbb{P}(X_i = x_i \mid X_{i-1} = x_{i-1})\]
\begin{remark}
Notation:
\[\mathbb{P}(X_n = y \mid X_{n-1} = x) = P_{x,y}(n)\]
These are called the transition probabilities: $P_{x,y}(n)$ is the probability of going from $x$ to $y$ at time $n$. This gives the transition probability matrix at time $n$:
\[P(n) = \{P_{x, y}(n)\}_{x, y \in S}\]
\end{remark}
\begin{definition}
Often the transition probabilities are not a function of $n$. In that case, we say that the Markov chain is time-homogeneous, and write
\[P = \{P_{x, y}\}_{x, y \in S}\]
\end{definition}
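For example (a standard two-state illustration, not from lecture), take $S = \{1, 2\}$ and flip probabilities $a, b \in [0, 1]$:
\[P = \begin{pmatrix} 1 - a & a \\ b & 1 - b \end{pmatrix}\]
Row $x$ lists the probabilities $P_{x, y}$ of moving from $x$ to each $y \in S$, and each row sums to $1$.\\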
\subsection{Properties of Transition Probabilities Matrix}
Notice that for a Markov chain, $\mathbb{P}(X_0^n = x_0^n) = \mathbb{P}(X_0 = x_0) \cdot \prod_{i = 1}^{n} P_{x_{i-1}, x_i}(i)$. So, two things determine a Markov chain:
\begin{itemize}
\item The initial distribution $X_0$.
\item Transition probabilities.
\end{itemize}
So
\begin{itemize}
\item $0 \leq P_{x, y} \leq 1$.
\item $\sum_{y \in S} P_{x, y} = 1$ $\forall x \in S$.
\end{itemize}
\section{10/25/2018}
\subsection{Markov Chains}
For the time being we will consider finite state spaces (time-homogeneous Markov chains).\\
Say we want to understand the probability that we end up at a particular state at timestep $n$. We can do so by taking powers of the transition probability matrix. That is, $\mathbb{P}(X_n = y \mid X_0 = x) = (P^n)_{x, y}$.
\begin{proof}
This is true for $n=1$ by definition. For $n > 1$, we can use conditioning
\begin{align*}
&\mathbb{P}(X_n = y \mid X_0 = x) = \sum_{z \in S} \mathbb{P}(X_n = y, X_{n-1} = z \mid X_0 = x)\\
=&\sum_{z \in S} \mathbb{P}(X_n = y \mid X_{n-1} = z, X_0 = x) \mathbb{P}(X_{n-1} = z \mid X_0 = x)\\
=& \sum_{z \in S} \mathbb{P}(X_n = y \mid X_{n-1} = z) \mathbb{P}(X_{n-1} = z \mid X_0 = x) \quad \text{(Markov property)}\\
=& \sum_{z \in S} P_{zy}(P^{n-1})_{xz} = (P^n)_{xy} \quad \text{(induction hypothesis)}
\end{align*}
\end{proof}
Now, think of the two ways the matrix $P$ acts:
\begin{itemize}
\item It acts on column vectors to its right. Think of these column vectors as functions.
\item It acts on row vectors to the left. Think of these as probability measures.
\end{itemize}
Suppose $f: S \to \R$. Then $(P^n f)(x) = \mathbb{E}[f(X_n) \mid X_0 = x]$.
\begin{proof}
$(Pf)(x) = \sum_{y \in S} P_{xy} f(y) = \sum_{y \in S} \mathbb{P}(X_1 = y \mid X_0 = x)f(y) = \mathbb{E}[f(X_1) \mid X_0 = x]$.\\
Now suppose $\mu$ is a probability measure on $S$, viewed as a row vector. Then $(\mu P^n)(x) = \mathbb{P}_\mu(X_n = x)$, where $\mathbb{P}_\mu$ indicates that the initial distribution is $\mu$.\\
\[(\mu P^n)(x) = \sum_{y \in S} \mu(y)(P^n)_{yx} = \sum_{y \in S} \mu(y) \mathbb{P}(X_n = x \mid X_0 = y)\]
\end{proof}
Note: $P$ acts to the right on functions.
\[l^\infty(S) = \{f: S \to \R \mid ||f||_\infty < \infty\}\]
Properties:
\begin{itemize}
\item Preserves positivity: if $f \geq 0$, then $Pf \geq 0$.\\
\item $P$ is a contraction, $||P||_{\infty, \infty} \leq 1$: indeed,\\
$||Pf||_\infty = \max_{x \in S}|(Pf)(x)| = \max_{x \in S} |\sum_y P_{xy} f(y)| \leq \max_{x \in S} \sum_y P_{xy}|f(y)| \leq \max_{x \in S} \sum_y P_{xy} ||f||_\infty = ||f||_\infty$.