---
title: "Deep Probabilistic Modelling with with Gaussian Processes"
venue: "NIPS Tutorial 2017"
abstract: "<p>Neural network models are algorithmically simple, but mathematically complex. Gaussian process models are mathematically simple, but algorithmically complex. In this tutorial we will explore Deep Gaussian Process models. They bring advantages in their mathematical simplicity but are challenging in their algorithmic complexity. We will give an overview of Gaussian processes and highlight the algorithmic approximations that allow us to stack Gaussian process models: they are based on variational methods. In the last part of the tutorial will explore a use case exemplar: uncertainty quantification. We end with open questions.</p>"
author:
- given: Neil D.
  family: Lawrence
  url: http://inverseprobability.com
  institute: Amazon and University of Sheffield
  twitter: lawrennd
  gscholar: r3SJcvoAAAAJ
  orchid:
date: 2017-12-04
published: 2017-12-04
reveal: 2017-12-04-deep-probabilistic-modelling-with-gaussian-processes.slides.html
ipynb: 2017-12-04-deep-probabilistic-modelling-with-gaussian-processes.ipynb
youtube: "NHTGY8VCinY"
layout: talk
categories:
- notes
---
<div style="display:none">
$${% include talk-notation.tex %}$$
</div>
<h2 id="what-is-machine-learning">What is Machine Learning?</h2>
<p>What is machine learning? At its most basic level machine learning is a combination of</p>
<p><span class="math display">\[ \text{data} + \text{model} \xrightarrow{\text{compute}} \text{prediction}\]</span></p>
<p>where <em>data</em> is our observations. They can be actively or passively acquired (meta-data). The <em>model</em> contains our assumptions, based on previous experience. That experience can be other data, it can come from transfer learning, or it can merely be our beliefs about the regularities of the universe. In humans our models include our inductive biases. The <em>prediction</em> is an action to be taken or a categorization or a quality score. The reason that machine learning has become a mainstay of artificial intelligence is the importance of predictions in artificial intelligence. The data and the model are combined through computation.</p>
<p>In practice we normally perform machine learning using two functions. To combine data with a model we typically make use of:</p>
<p><strong>a prediction function</strong> a function which is used to make the predictions. It includes our beliefs about the regularities of the universe, our assumptions about how the world works, e.g. smoothness, spatial similarities, temporal similarities.</p>
<p><strong>an objective function</strong> a function which defines the cost of misprediction. Typically it includes knowledge about the world's generating processes (probabilistic objectives) or the costs we pay for mispredictions (empirical risk minimization).</p>
<p>The combination of data and model through the prediction function and the objective function leads to a <em>learning algorithm</em>. The class of prediction functions and objective functions we can make use of is restricted by the algorithms they lead to. If the prediction function or the objective function is too complex, then it can be difficult to find an appropriate learning algorithm. Much of the academic field of machine learning is the quest for new learning algorithms that allow us to bring different types of models and data together.</p>
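<p>As a deliberately minimal sketch of how these two functions combine into a learning algorithm, the code below fits a straight-line prediction function by minimising a sum-of-squares objective. The data and function names here are purely illustrative; they are not part of any particular library.</p>
<pre><code>import numpy as np

# Illustrative data: inputs and noisy targets.
x = np.linspace(0, 1, 20)[:, None]
y = 2.0*x + 0.1*np.random.randn(20, 1)

def prediction_function(w, x):
    """Our beliefs about the regularities: here, a simple linear trend."""
    return w[0] + w[1]*x

def objective_function(w, x, y):
    """Cost of misprediction: empirical risk with squared error."""
    return ((y - prediction_function(w, x))**2).sum()

# A very simple learning algorithm: least squares in closed form.
Phi = np.hstack([np.ones_like(x), x])
w = np.linalg.solve(Phi.T @ Phi, Phi.T @ y).flatten()
print(objective_function(w, x, y))</code></pre>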
<p>A useful reference for state of the art in machine learning is the UK Royal Society Report, <a href="https://royalsociety.org/~/media/policy/projects/machine-learning/publications/machine-learning-report.pdf">Machine Learning: Power and Promise of Computers that Learn by Example</a>.</p>
<p>You can also check my blog post on <a href="http://inverseprobability.com/2017/07/17/what-is-machine-learning">"What is Machine Learning?"</a></p>
<h3 id="artificial-intelligence">Artificial Intelligence</h3>
<h3 id="uncertainty">Uncertainty</h3>
<p>In practice, we normally also have uncertainty associated with these functions. Uncertainty in the prediction function arises from</p>
<ol style="list-style-type: decimal">
<li>scarcity of training data and</li>
<li>mismatch between the set of prediction functions we choose and all possible prediction functions.</li>
</ol>
<p>There are also challenges around specification of the objective function, but we will save those for another day. For the moment, let us focus on the prediction function.</p>
<h3 id="neural-networks-and-prediction-functions">Neural Networks and Prediction Functions</h3>
<p>Neural networks are adaptive non-linear function models. Originally, they were studied (by McCulloch and Pitts <span class="citation">(McCulloch and Pitts, 1943)</span>) as simple models for neurons, but over the last decade they have become popular because they are a flexible approach to modelling complex data. A particular characteristic of neural network models is that they can be composed to form highly complex functions which encode many of our expectations of the real world. They allow us to encode our assumptions about how the world works.</p>
<p>We will return to composition later, but for the moment, let's focus on a one hidden layer neural network. We are interested in the prediction function, so we'll ignore the objective function (which is often called an error function) for the moment, and just describe the mathematical object of interest</p>
<p><span class="math display">\[
\mappingFunction(\inputVector) = \mappingMatrix^\top \activationVector(\mappingMatrixTwo, \inputVector)
\]</span></p>
<p>Where in this case <span class="math inline">\(\mappingFunction(\cdot)\)</span> is a scalar function with vector inputs, and <span class="math inline">\(\activationVector(\cdot)\)</span> is a vector function with vector inputs. The dimensionality of the vector function is known as the number of hidden units, or the number of neurons. The elements of this vector function are known as the <em>activation</em> function of the neural network and <span class="math inline">\(\mappingMatrixTwo\)</span> are the parameters of the activation functions.</p>
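<p>A minimal numerical sketch of this prediction function is given below, with a tanh activation chosen purely for illustration; the dimensions and names are arbitrary.</p>
<pre><code>import numpy as np

def neural_network(x, W, V):
    """One hidden layer network: f(x) = W^T phi(V, x)."""
    phi = np.tanh(V @ x)   # vector of hidden unit activations, phi(V, x)
    return W.T @ phi       # scalar output for a single input vector x

# Illustrative dimensions: 4 inputs, 10 hidden units.
x = np.random.randn(4)
V = np.random.randn(10, 4)   # parameters of the activation functions
W = np.random.randn(10, 1)   # output weights
print(neural_network(x, W, V))</code></pre>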
<h3 id="relations-with-classical-statistics">Relations with Classical Statistics</h3>
<p>In statistics activation functions are traditionally known as <em>basis functions</em>, and we would think of this as a <em>linear model</em>. It doesn't make linear predictions, but it is called linear because in statistics estimation focuses on the parameters, <span class="math inline">\(\mappingMatrix\)</span>, not the parameters, <span class="math inline">\(\mappingMatrixTwo\)</span>. The linear model terminology refers to the fact that the model is <em>linear in the parameters</em>, but it is <em>not</em> linear in the data unless the activation functions are chosen to be linear.</p>
<h3 id="adaptive-basis-functions">Adaptive Basis Functions</h3>
<p>The first difference in the (early) neural network literature to the classical statistical literature is the decision to optimize these parameters, <span class="math inline">\(\mappingMatrixTwo\)</span>, as well as the parameters, <span class="math inline">\(\mappingMatrix\)</span> (which would normally be denoted in statistics by <span class="math inline">\(\boldsymbol{\beta}\)</span>)<a href="#fn1" class="footnoteRef" id="fnref1"><sup>1</sup></a>.</p>
<p>In this tutorial, we're going to revisit that decision, and follow the path of Radford Neal <span class="citation">(Neal, 1994)</span> who, inspired by the work of David MacKay <span class="citation">(MacKay, 1992)</span> and others, did his PhD thesis on Bayesian Neural Networks. If we take a Bayesian approach to parameter inference (note I am using inference here in the classical sense, not in the sense of prediction of test data, which seems to be a newer usage), then we don't wish to fit parameters at all; rather we wish to integrate them away and understand the family of functions that the model describes.</p>
<h3 id="probabilistic-modelling">Probabilistic Modelling</h3>
<p>This Bayesian approach is designed to deal with uncertainty arising from fitting our prediction function to the data we have, a reduced data set.</p>
<p>The Bayesian approach can be derived from a broader understanding of what our objective is. If we accept that we can jointly represent all things that happen in the world with a probability distribution, then we can interrogate that probability to make predictions. So, if we are interested in predictions, <span class="math inline">\(\dataScalar_*\)</span>, at future input locations of interest, <span class="math inline">\(\inputVector_*\)</span>, given previously observed training data, <span class="math inline">\(\dataVector\)</span>, and corresponding inputs, <span class="math inline">\(\inputMatrix\)</span>, then we are really interrogating the following probability density, <span class="math display">\[
p(\dataScalar_*|\dataVector, \inputMatrix, \inputVector_*),
\]</span> There is nothing controversial here: as long as you accept that you have a good joint model of the world around you that relates test data to training data, <span class="math inline">\(p(\dataScalar_*, \dataVector, \inputMatrix, \inputVector_*)\)</span>, then this conditional distribution can be recovered through the standard rules of probability (<span class="math inline">\(\text{data} + \text{model} \rightarrow \text{prediction}\)</span>).</p>
<p>We can construct this joint density through the use of the following decomposition: <span class="math display">\[
p(\dataScalar_*|\dataVector, \inputMatrix, \inputVector_*) = \int p(\dataScalar_*|\inputVector_*, \parameterVector) p(\parameterVector | \dataVector, \inputMatrix) \text{d} \parameterVector
\]</span></p>
<p>where, for convenience, we are assuming <em>all</em> the parameters of the model are now represented by <span class="math inline">\(\parameterVector\)</span> (which contains <span class="math inline">\(\mappingMatrix\)</span> and <span class="math inline">\(\mappingMatrixTwo\)</span>) and <span class="math inline">\(p(\parameterVector | \dataVector, \inputMatrix)\)</span> is recognised as the posterior density of the parameters given data and <span class="math inline">\(p(\dataScalar_*|\inputVector_*, \parameterVector)\)</span> is the <em>likelihood</em> of an individual test data point given the parameters.</p>
<p>Given the parameters, the likelihood of the data is normally assumed to factorize independently across the data points, <span class="math display">\[
p(\dataVector|\inputMatrix, \parameterVector) = \prod_{i=1}^\numData p(\dataScalar_i|\inputVector_i, \parameterVector),\]</span></p>
<p>and if that is so, it is easy to extend our predictions across all future, potential, locations, <span class="math display">\[
p(\dataVector_*|\dataVector, \inputMatrix, \inputMatrix_*) = \int p(\dataVector_*|\inputMatrix_*, \parameterVector) p(\parameterVector | \dataVector, \inputMatrix) \text{d} \parameterVector.
\]</span></p>
<p>The likelihood is also where the <em>prediction function</em> is incorporated. For example in the regression case, we consider an objective based around the Gaussian density, <span class="math display">\[
p(\dataScalar_i | \mappingFunction(\inputVector_i)) = \frac{1}{\sqrt{2\pi \dataStd^2}} \exp\left(-\frac{\left(\dataScalar_i - \mappingFunction(\inputVector_i)\right)^2}{2\dataStd^2}\right)
\]</span></p>
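<p>Written as code, this Gaussian likelihood is just the normal density evaluated at each residual. The short sketch below (with arbitrary illustrative values for the prediction function) computes the log likelihood of a data set under the factorized Gaussian assumption above.</p>
<pre><code>import numpy as np

def gaussian_log_likelihood(y, f, sigma2):
    """Sum over data points of log p(y_i | f(x_i)) for a Gaussian likelihood."""
    return (-0.5*np.log(2*np.pi*sigma2)
            - 0.5*(y - f)**2/sigma2).sum()

# Illustrative observations and prediction function values f(x_i).
y = np.array([0.1, 0.5, 0.9])
f = np.array([0.0, 0.4, 1.0])
print(gaussian_log_likelihood(y, f, sigma2=0.01))</code></pre>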
<p>In short, that is the classical approach to probabilistic inference, and all approaches to Bayesian neural networks fall within this path. For a deep probabilistic model, we can simply take this one stage further and place a probability distribution over the input locations, <span class="math display">\[
p(\dataVector_*|\dataVector) = \int p(\dataVector_*|\inputMatrix_*, \parameterVector) p(\parameterVector | \dataVector, \inputMatrix) p(\inputMatrix) p(\inputMatrix_*) \text{d} \parameterVector \text{d} \inputMatrix \text{d}\inputMatrix_*
\]</span> and we have <em>unsupervised learning</em> (from where we can get deep generative models).</p>
<h3 id="graphical-models">Graphical Models</h3>
<p>One way of representing a joint distribution is to consider conditional dependencies between data. Conditional dependencies allow us to factorize the distribution. For example, a Markov chain is a factorization of a distribution into components that represent the conditional relationships between points that are neighboring, often in time or space. It can be decomposed in the following form. <span class="math display">\[p(\dataVector) = p(\dataScalar_\numData | \dataScalar_{\numData-1}) p(\dataScalar_{\numData-1}|\dataScalar_{\numData-2}) \dots p(\dataScalar_{2} | \dataScalar_{1}) p(\dataScalar_{1})\]</span></p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> daft
<span class="im">from</span> matplotlib <span class="im">import</span> rc
rc(<span class="st">"font"</span>, <span class="op">**</span>{<span class="st">'family'</span>:<span class="st">'sans-serif'</span>,<span class="st">'sans-serif'</span>:[<span class="st">'Helvetica'</span>]}, size<span class="op">=</span><span class="dv">30</span>)
rc(<span class="st">"text"</span>, usetex<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">pgm <span class="op">=</span> daft.PGM(shape<span class="op">=</span>[<span class="dv">3</span>, <span class="dv">1</span>],
origin<span class="op">=</span>[<span class="dv">0</span>, <span class="dv">0</span>],
grid_unit<span class="op">=</span><span class="dv">5</span>,
node_unit<span class="op">=</span><span class="fl">1.9</span>,
observed_style<span class="op">=</span><span class="st">'shaded'</span>,
line_width<span class="op">=</span><span class="dv">3</span>)
pgm.add_node(daft.Node(<span class="st">"y_1"</span>, <span class="vs">r"$y_1$"</span>, <span class="fl">0.5</span>, <span class="fl">0.5</span>, fixed<span class="op">=</span><span class="va">False</span>))
pgm.add_node(daft.Node(<span class="st">"y_2"</span>, <span class="vs">r"$y_2$"</span>, <span class="fl">1.5</span>, <span class="fl">0.5</span>, fixed<span class="op">=</span><span class="va">False</span>))
pgm.add_node(daft.Node(<span class="st">"y_3"</span>, <span class="vs">r"$y_3$"</span>, <span class="fl">2.5</span>, <span class="fl">0.5</span>, fixed<span class="op">=</span><span class="va">False</span>))
pgm.add_edge(<span class="st">"y_1"</span>, <span class="st">"y_2"</span>)
pgm.add_edge(<span class="st">"y_2"</span>, <span class="st">"y_3"</span>)
pgm.render().figure.savefig(<span class="st">"../slides/diagrams/ml/markov.svg"</span>, transparent<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<object class="svgplot" align data="../slides/diagrams/ml/markov.svg">
</object>
<p>By specifying conditional independencies we can reduce the parameterization required for our data: instead of directly specifying the parameters of the joint distribution, we can specify each set of parameters of the conditionals independently. This can also give an advantage in terms of interpretability. Understanding a conditional independence structure gives a structured understanding of data. If developed correctly, according to causal methodology, it can even inform how we should intervene in the system to drive a desired result <span class="citation">(Pearl, 1995)</span>.</p>
<p>However, a challenge arises when the data becomes more complex. Consider the graphical model shown below, used to predict the perioperative risk of <em>C. difficile</em> infection following colon surgery <span class="citation">(Steele et al., 2012)</span>.</p>
<p><img class="negate" src="../slides/diagrams/bayes-net-diagnosis.png" width="40%" align="center" style="background:none; border:none; box-shadow:none;"></p>
<p>To capture the complexity of the interrelationships in the data, the graph becomes more complex, and less interpretable.</p>
<h3 id="performing-inference">Performing Inference</h3>
<p>As far as combining our data and our model to form our prediction, the devil is in the detail. While everything is easy to write in terms of probability densities, as we move from <span class="math inline">\(\text{data}\)</span> and <span class="math inline">\(\text{model}\)</span> to <span class="math inline">\(\text{prediction}\)</span> there is that simple <span class="math inline">\(\xrightarrow{\text{compute}}\)</span> sign, which is now burying a wealth of difficulties. Each integral sign above is a high dimensional integral which will typically need approximation. Approximations also come with computational demands. As we consider more complex classes of functions, the challenges around the integrals become harder and prediction of future test data given our model and the data becomes so involved as to be impractical or impossible.</p>
<p>Statisticians realized these challenges early on, indeed so early that they were actually physicists: both Laplace and Gauss worked on models such as this. In Gauss's case he made his career on prediction of the location of the lost planet (later reclassified as an asteroid, then dwarf planet), Ceres. Gauss and Laplace made use of maximum a posteriori estimates for simplifying their computations, and Laplace developed Laplace's method (and invented the Gaussian density) to expand around that mode. But classical statistics needs better guarantees around model performance and interpretation, and as a result has focussed more on the <em>linear</em> model implied by <span class="math display">\[
\mappingFunction(\inputVector) = \left.\mappingVector^{(2)}\right.^\top \activationVector(\mappingMatrix_1, \inputVector)
\]</span></p>
<p><span class="math display">\[
\mappingVector^{(2)} \sim \gaussianSamp{\zerosVector}{\covarianceMatrix}.
\]</span></p>
<p>The Gaussian likelihood given above implies that the data observation is related to the function by noise corruption so we have, <span class="math display">\[
\dataScalar_i = \mappingFunction(\inputVector_i) + \noiseScalar_i,
\]</span> where <span class="math display">\[
\noiseScalar_i \sim \gaussianSamp{0}{\dataStd^2}
\]</span> and while normally integrating over high dimensional parameter vectors is highly complex, here it is <em>trivial</em>. That is because of a property of the multivariate Gaussian.</p>
<h3 id="multivariate-gaussian-properties">Multivariate Gaussian Properties</h3>
<p>Gaussian processes are initially of interest because</p>
<ol style="list-style-type: decimal">
<li>linear Gaussian models are easier to deal with</li>
<li>Even the parameters <em>within</em> the process can be handled, by considering a particular limit.</li>
</ol>
<p>Let's first of all review the properties of the multivariate Gaussian distribution that make linear Gaussian models easier to deal with. We'll return to the, perhaps surprising, result on the parameters within the nonlinearity, <span class="math inline">\(\parameterVector\)</span>, shortly.</p>
<p>To work with linear Gaussian models, and to find the marginal likelihood, all you need to know are the following rules. If <span class="math display">\[
\dataVector = \mappingMatrix \inputVector + \noiseVector,
\]</span> where <span class="math inline">\(\dataVector\)</span>, <span class="math inline">\(\inputVector\)</span> and <span class="math inline">\(\noiseVector\)</span> are vectors and we assume that <span class="math inline">\(\inputVector\)</span> and <span class="math inline">\(\noiseVector\)</span> are drawn from multivariate Gaussians, <span class="math display">\[\begin{align}
\inputVector & \sim \gaussianSamp{\meanVector}{\covarianceMatrix}\\
\noiseVector & \sim \gaussianSamp{\zerosVector}{\covarianceMatrixTwo}
\end{align}\]</span> then we know that <span class="math inline">\(\dataVector\)</span> is also drawn from a multivariate Gaussian with, <span class="math display">\[
\dataVector \sim \gaussianSamp{\mappingMatrix\meanVector}{\mappingMatrix\covarianceMatrix\mappingMatrix^\top + \covarianceMatrixTwo}.
\]</span> With appropriately defined covariance, <span class="math inline">\(\covarianceMatrixTwo\)</span>, this is actually the marginal likelihood for Factor Analysis, or Probabilistic Principal Component Analysis <span class="citation">(Tipping and Bishop, 1999)</span>, because we integrated out the inputs (or <em>latent</em> variables as they would be called in that case).</p>
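<p>These rules are easy to verify numerically. The sketch below samples from the model <span class="math inline">\(\dataVector = \mappingMatrix \inputVector + \noiseVector\)</span> and checks that the empirical covariance of <span class="math inline">\(\dataVector\)</span> approaches <span class="math inline">\(\mappingMatrix\covarianceMatrix\mappingMatrix^\top + \covarianceMatrixTwo\)</span>; the dimensions and matrices are chosen arbitrarily for illustration.</p>
<pre><code>import numpy as np

np.random.seed(0)
W = np.random.randn(3, 2)                  # linear map
C = np.array([[1.0, 0.3], [0.3, 0.5]])     # covariance of x
Sigma = 0.1*np.eye(3)                      # covariance of the noise

# Sample x and noise, then form y = W x + noise.
n_samples = 100000
x = np.random.multivariate_normal(np.zeros(2), C, size=n_samples)
eps = np.random.multivariate_normal(np.zeros(3), Sigma, size=n_samples)
y = x @ W.T + eps

print(np.cov(y.T))                         # empirical covariance of y
print(W @ C @ W.T + Sigma)                 # theoretical covariance</code></pre>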
<p>However, we are focussing on what happens in models which are non-linear in the inputs, whereas the above would be <em>linear</em> in the inputs. To consider these, we introduce a matrix, called the design matrix. We set each activation function computed at each data point to be <span class="math display">\[
\activationScalar_{i,j} = \activationScalar(\mappingVector^{(1)}_{j}, \inputVector_{i})
\]</span> and define the matrix of activations (known as the <em>design matrix</em> in statistics) to be, <span class="math display">\[
\activationMatrix =
\begin{bmatrix}
\activationScalar_{1, 1} & \activationScalar_{1, 2} & \dots & \activationScalar_{1, \numHidden} \\
\activationScalar_{2, 1} & \activationScalar_{2, 2} & \dots & \activationScalar_{2, \numHidden} \\
\vdots & \vdots & \ddots & \vdots \\
\activationScalar_{\numData, 1} & \activationScalar_{\numData, 2} & \dots & \activationScalar_{\numData, \numHidden}
\end{bmatrix}.
\]</span> By convention this matrix always has <span class="math inline">\(\numData\)</span> rows and <span class="math inline">\(\numHidden\)</span> columns. Now, if we define the vector of all noise corruptions, <span class="math inline">\(\noiseVector = \left[\noiseScalar_1, \dots \noiseScalar_\numData\right]^\top\)</span>, we can write the neural network in matrix form.</p>
<h3 id="matrix-representation-of-a-neural-network">Matrix Representation of a Neural Network</h3>
<p><span class="math display">\[\dataScalar\left(\inputVector\right) = \activationVector\left(\inputVector\right)^\top \mappingVector + \noiseScalar\]</span></p>
<div class="incremental">
<p><span class="math display">\[\dataVector = \activationMatrix\mappingVector + \noiseVector\]</span></p>
</div>
<div class="incremental">
<p><span class="math display">\[\noiseVector \sim \gaussianSamp{\zerosVector}{\dataStd^2\eye}\]</span></p>
<p>{ If we define the prior distribution over the vector <span class="math inline">\(\mappingVector\)</span> to be Gaussian,} <span class="math display">\[
\mappingVector \sim \gaussianSamp{\zerosVector}{\alpha\eye},
\]</span></p>
<p>{ then we can use rules of multivariate Gaussians to see that,} <span class="math display">\[
\dataVector \sim \gaussianSamp{\zerosVector}{\alpha \activationMatrix \activationMatrix^\top + \dataStd^2 \eye}.
\]</span></p>
<p>In other words, our training data is distributed as a multivariate Gaussian, with zero mean and a covariance given by <span class="math display">\[
\kernelMatrix = \alpha \activationMatrix \activationMatrix^\top + \dataStd^2 \eye.
\]</span></p>
<p>This is an <span class="math inline">\(\numData \times \numData\)</span> sized matrix. Its elements are in the form of a function. The maths shows that any element, indexed by <span class="math inline">\(i\)</span> and <span class="math inline">\(j\)</span>, is a function <em>only</em> of the inputs associated with data points <span class="math inline">\(i\)</span> and <span class="math inline">\(j\)</span>, <span class="math inline">\(\inputVector_i\)</span>, <span class="math inline">\(\inputVector_j\)</span>: <span class="math inline">\(\kernel_{i,j} = \kernel\left(\inputVector_i, \inputVector_j\right)\)</span>.</p>
<p>If we look at the portion of this function associated only with <span class="math inline">\(\mappingFunction(\cdot)\)</span>, i.e. we remove the noise, then we can write down the covariance associated with our neural network, <span class="math display">\[
\kernel_\mappingFunction\left(\inputVector_i, \inputVector_j\right) = \alpha \activationVector\left(\mappingMatrix_1, \inputVector_i\right)^\top \activationVector\left(\mappingMatrix_1, \inputVector_j\right)
\]</span> so the elements of the covariance or <em>kernel</em> matrix are formed by inner products of the rows of the <em>design matrix</em>.</p>
<p>This is the essence of a Gaussian process. Instead of assuming that the density over each data point, <span class="math inline">\(\dataScalar_i\)</span>, is i.i.d., we make a joint Gaussian assumption over our data. The covariance matrix is now a function of both the parameters of the activation function, <span class="math inline">\(\mappingMatrixTwo\)</span>, and the input variables, <span class="math inline">\(\inputMatrix\)</span>. This comes about through integrating out the parameters of the model, <span class="math inline">\(\mappingVector\)</span>.</p>
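<p>A short sketch of this construction: build the design matrix <span class="math inline">\(\activationMatrix\)</span> from a fixed set of basis functions (here a tanh activation with randomly drawn parameters, purely for illustration) and form the covariance matrix from inner products of its rows.</p>
<pre><code>import numpy as np

np.random.seed(1)
n, h = 5, 10                        # number of data points and hidden units
X = np.linspace(-1, 1, n)[:, None]  # inputs
V = np.random.randn(h, 1)           # parameters of the activation functions
b = np.random.randn(h)              # biases

# Design matrix: Phi[i, j] = phi(v_j, x_i); here phi is a tanh unit.
Phi = np.tanh(X @ V.T + b)

alpha, sigma2 = 1.0, 0.01
K = alpha*(Phi @ Phi.T) + sigma2*np.eye(n)   # covariance of the training data
print(K)</code></pre>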
<p>We can basically put anything inside the basis functions, and many people do. These can be deep kernels <span class="citation">(Cho and Saul, 2009)</span> or we can learn the parameters of a convolutional neural network inside there.</p>
<p>Viewing a neural network in this way is also what allows us to perform sensible <em>batch</em> normalizations <span class="citation">(Ioffe and Szegedy, 2015)</span>.</p>
</div>
<h3 id="non-degenerate-gaussian-processes">Non-degenerate Gaussian Processes</h3>
<p>The process described above is degenerate. The covariance function is of rank at most <span class="math inline">\(\numHidden\)</span> and since the theoretical amount of data could always increase <span class="math inline">\(\numData \rightarrow \infty\)</span>, the covariance function is not full rank. This means as we increase the amount of data to infinity, there will come a point where we can't normalize the process because the multivariate Gaussian has the form, <span class="math display">\[
\gaussianDist{\mappingFunctionVector}{\zerosVector}{\kernelMatrix} = \frac{1}{\left(2\pi\right)^{\frac{\numData}{2}}\det{\kernelMatrix}^\frac{1}{2}} \exp\left(-\frac{\mappingFunctionVector^\top\kernelMatrix^{-1} \mappingFunctionVector}{2}\right)
\]</span> and a degenerate kernel matrix leads to <span class="math inline">\(\det{\kernelMatrix} = 0\)</span> defeating the normalization (it's equivalent to finding a projection in the high dimensional Gaussian where the variance of the resulting univariate Gaussian is zero, i.e. there is a null space on the covariance, or alternatively you can imagine there are one or more directions where the Gaussian has become the delta function).</p>
<p>In the machine learning field, it was Radford Neal <span class="citation">(Neal, 1994)</span> who realized the potential of the next step. In his 1994 thesis, he was considering Bayesian neural networks, of the type we described above, and considered what would happen if you took the number of hidden nodes, or neurons, to infinity, i.e. <span class="math inline">\(\numHidden \rightarrow \infty\)</span>.</p>
<p><a href="http://www.cs.toronto.edu/~radford/ftp/thesis.pdf"><img class="" src="../slides/diagrams/neal-infinite-priors.png" width="80%" align="" style="background:none; border:none; box-shadow:none;"></a></p>
<p><em>Page 37 of Radford Neal's 1994 thesis</em></p>
<p>In loose terms, what Radford considers is what happens to the elements of the covariance function, <span class="math display">\[
\begin{align*}
\kernel_\mappingFunction\left(\inputVector_i, \inputVector_j\right) & = \alpha \activationVector\left(\mappingMatrix_1, \inputVector_i\right)^\top \activationVector\left(\mappingMatrix_1, \inputVector_j\right)\\
& = \alpha \sum_k \activationScalar\left(\mappingVector^{(1)}_k, \inputVector_i\right) \activationScalar\left(\mappingVector^{(1)}_k, \inputVector_j\right)
\end{align*}
\]</span> if instead of considering a finite number you sample infinitely many of these activation functions, sampling parameters from a prior density, <span class="math inline">\(p(\mappingVectorTwo)\)</span>, for each one, <span class="math display">\[
\kernel_\mappingFunction\left(\inputVector_i, \inputVector_j\right) = \alpha \int \activationScalar\left(\mappingVector^{(1)}, \inputVector_i\right) \activationScalar\left(\mappingVector^{(1)}, \inputVector_j\right) p(\mappingVector^{(1)}) \text{d}\mappingVector^{(1)}
\]</span> And that's not <em>only</em> for Gaussian <span class="math inline">\(p(\mappingVectorTwo)\)</span>. In fact this result holds for a range of activations, and a range of prior densities because of the <em>central limit theorem</em>.</p>
<p>To write it in the form of a probabilistic program, as long as the distribution for <span class="math inline">\(\phi_i\)</span> implied by this short probabilistic program, <span class="math display">\[
\begin{align*}
\mappingVectorTwo & \sim p(\cdot)\\
\phi_i & = \activationScalar\left(\mappingVectorTwo, \inputVector_i\right),
\end{align*}
\]</span> has finite variance, then the result of taking the number of hidden units to infinity, with appropriate scaling, is also a Gaussian process.</p>
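<p>The integral above can be approximated by Monte Carlo: sample many activation-function parameters from the prior and average. The sketch below uses a tanh activation chosen purely for illustration and shows the empirical covariance between two inputs settling down as more hidden units are sampled.</p>
<pre><code>import numpy as np

np.random.seed(2)
x_i, x_j = 0.5, -0.3
alpha = 1.0

def activation(w, x):
    """Illustrative activation: tanh of a weighted input plus a bias."""
    return np.tanh(w[:, 0]*x + w[:, 1])

for n_hidden in [10, 100, 10000]:
    w = np.random.randn(n_hidden, 2)   # sample parameters from the prior p(w)
    k_ij = alpha*np.mean(activation(w, x_i)*activation(w, x_j))
    print(n_hidden, k_ij)</code></pre>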
<h3 id="further-reading">Further Reading</h3>
<p>To understand this argument in more detail, I highly recommend reading chapter 2 of Neal's thesis, which remains easy to read and clear today. Indeed, for readers interested in Bayesian neural networks, both Radford Neal's and David MacKay's PhD theses <span class="citation">(MacKay, 1992)</span> remain essential reading. Both theses embody a clarity of thought, and an ability to weave together threads from different fields that was the business of machine learning in the 1990s. Radford and David were also pioneers in making their software widely available and publishing material on the web.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> numpy <span class="im">as</span> np
<span class="im">import</span> teaching_plots <span class="im">as</span> plot</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="op">%</span>load <span class="op">-</span>s compute_kernel mlai.py</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="op">%</span>load <span class="op">-</span>s eq_cov mlai.py</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">np.random.seed(<span class="dv">10</span>)
plot.rejection_samples(compute_kernel, kernel<span class="op">=</span>eq_cov,
lengthscale<span class="op">=</span><span class="fl">0.25</span>, diagrams<span class="op">=</span><span class="st">'../slides/diagrams/gp'</span>)</code></pre></div>
<!-- ### Two Dimensional Gaussian Distribution -->
<!-- include{_ml/includes/two-d-gaussian.md} -->
<h3 id="distributions-over-functions">Distributions over Functions</h3>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> numpy <span class="im">as</span> np
np.random.seed(<span class="dv">4949</span>)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> teaching_plots <span class="im">as</span> plot
<span class="im">import</span> pods</code></pre></div>
<h3 id="sampling-a-function" data-transition="none">Sampling a Function</h3>
<p><strong>Multi-variate Gaussians</strong></p>
<ul>
<li><p>We will consider a Gaussian with a particular structure of covariance matrix.</p></li>
<li><p>Generate a single sample from this 25 dimensional Gaussian distribution, <span class="math inline">\(\mappingFunctionVector=\left[\mappingFunction_{1},\mappingFunction_{2}\dots \mappingFunction_{25}\right]\)</span>.</p></li>
<li><p>We will plot these points against their index.</p></li>
</ul>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="op">%</span>load <span class="op">-</span>s compute_kernel mlai.py</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="op">%</span>load <span class="op">-</span>s polynomial_cov mlai.py</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="op">%</span>load <span class="op">-</span>s exponentiated_quadratic mlai.py</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">plot.two_point_sample(compute_kernel, kernel<span class="op">=</span>exponentiated_quadratic,
lengthscale<span class="op">=</span><span class="fl">0.5</span>, diagrams<span class="op">=</span><span class="st">'../slides/diagrams/gp'</span>)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">pods.notebook.display_plots(<span class="st">'two_point_sample</span><span class="sc">{sample:0>3}</span><span class="st">.svg'</span>,
<span class="st">'../slides/diagrams/gp'</span>, sample<span class="op">=</span>(<span class="dv">0</span>,<span class="dv">13</span>))</code></pre></div>
<h3 id="uluru">Uluru</h3>
<p><img class="" src="../slides/diagrams/gp/799px-Uluru_Panorama.jpg" width="" align="" style="background:none; border:none; box-shadow:none;"></p>
<h3 id="prediction-with-correlated-gaussians">Prediction with Correlated Gaussians</h3>
<ul>
<li><p>Prediction of <span class="math inline">\(\mappingFunction_2\)</span> from <span class="math inline">\(\mappingFunction_1\)</span> requires <em>conditional density</em>.</p></li>
<li><p>Conditional density is <em>also</em> Gaussian. <span class="math display">\[
p(\mappingFunction_2|\mappingFunction_1) = \gaussianDist{\mappingFunction_2}{\frac{\kernelScalar_{1, 2}}{\kernelScalar_{1, 1}}\mappingFunction_1}{ \kernelScalar_{2, 2} - \frac{\kernelScalar_{1,2}^2}{\kernelScalar_{1,1}}}
\]</span> where covariance of joint density is given by <span class="math display">\[
\kernelMatrix = \begin{bmatrix} \kernelScalar_{1, 1} & \kernelScalar_{1, 2}\\ \kernelScalar_{2, 1} & \kernelScalar_{2, 2}\end{bmatrix}
\]</span></p></li>
</ul>
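<p>Numerically, the conditional density above amounts to two lines of code. The sketch below takes an illustrative 2x2 covariance and computes the conditional mean and variance of <span class="math inline">\(\mappingFunction_2\)</span> given an observed <span class="math inline">\(\mappingFunction_1\)</span>.</p>
<pre><code>import numpy as np

# Illustrative joint covariance of (f_1, f_2).
K = np.array([[1.0, 0.8],
              [0.8, 1.0]])
f_1 = 1.5                               # observed value of f_1

cond_mean = K[0, 1]/K[0, 0]*f_1         # k_{1,2}/k_{1,1} f_1
cond_var = K[1, 1] - K[0, 1]**2/K[0, 0] # k_{2,2} - k_{1,2}^2/k_{1,1}
print(cond_mean, cond_var)</code></pre>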
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">pods.notebook.display_plots(<span class="st">'two_point_sample</span><span class="sc">{sample:0>3}</span><span class="st">.svg'</span>,
<span class="st">'../slides/diagrams/gp'</span>, sample<span class="op">=</span>(<span class="dv">13</span>,<span class="dv">17</span>))</code></pre></div>
<h3 id="key-object">Key Object</h3>
<ul>
<li><p>Covariance function, <span class="math inline">\(\kernelMatrix\)</span></p></li>
<li><p>Determines properties of samples.</p></li>
<li><p>Function of <span class="math inline">\(\inputMatrix\)</span>, <span class="math display">\[\kernelScalar_{i,j} = \kernelScalar(\inputVector_i, \inputVector_j)\]</span></p></li>
</ul>
<h3 id="linear-algebra">Linear Algebra</h3>
<ul>
<li><p>Posterior mean</p>
<p><span class="math display">\[\mappingFunction_D(\inputVector_*) = \kernelVector(\inputVector_*, \inputMatrix) \kernelMatrix^{-1}
\mathbf{y}\]</span></p></li>
<li><p>Posterior covariance <span class="math display">\[\mathbf{C}_* = \kernelMatrix_{*,*} - \kernelMatrix_{*,\mappingFunctionVector}
\kernelMatrix^{-1} \kernelMatrix_{\mappingFunctionVector, *}\]</span></p></li>
</ul>
<h3 id="linear-algebra-1">Linear Algebra</h3>
<ul>
<li><p>Posterior mean</p>
<p><span class="math display">\[\mappingFunction_D(\inputVector_*) = \kernelVector(\inputVector_*, \inputMatrix) \boldsymbol{\alpha}\]</span></p></li>
<li><p>Posterior covariance <span class="math display">\[\covarianceMatrix_* = \kernelMatrix_{*,*} - \kernelMatrix_{*,\mappingFunctionVector}
\kernelMatrix^{-1} \kernelMatrix_{\mappingFunctionVector, *}\]</span></p></li>
</ul>
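<p>Putting those two formulae into code gives a complete (if numerically naive) Gaussian process regression. The sketch below is a minimal implementation with an exponentiated quadratic covariance, written directly from the equations rather than using GPy or the <code>mlai</code> helpers; the data are purely illustrative.</p>
<pre><code>import numpy as np

def eq_kernel(X, X2, alpha=1.0, lengthscale=1.0):
    """Exponentiated quadratic covariance between two sets of inputs."""
    r2 = ((X[:, None, :] - X2[None, :, :])**2).sum(-1)
    return alpha*np.exp(-0.5*r2/lengthscale**2)

# Illustrative training data and test inputs.
X = np.linspace(0, 1, 10)[:, None]
y = np.sin(10*X)
X_star = np.linspace(0, 1, 50)[:, None]

sigma2 = 0.01
K = eq_kernel(X, X) + sigma2*np.eye(len(X))
K_star = eq_kernel(X_star, X)

alpha_vec = np.linalg.solve(K, y)     # K^{-1} y
mean = K_star @ alpha_vec             # posterior mean
cov = eq_kernel(X_star, X_star) - K_star @ np.linalg.solve(K, K_star.T)
print(mean.shape, cov.shape)</code></pre>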
<h3 id="section"></h3>
<object class="svgplot" align data="../slides/diagrams/gp_prior_samples_data.svg">
</object>
<h3 id="section-1"></h3>
<object class="svgplot" align data="../slides/diagrams/gp_rejection_samples.svg">
</object>
<h3 id="section-2"></h3>
<object class="svgplot" align data="../slides/diagrams/gp_prediction.svg">
</object>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="op">%</span>load <span class="op">-</span>s eq_cov mlai.py</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> teaching_plots <span class="im">as</span> plot
<span class="im">import</span> mlai
<span class="im">import</span> numpy <span class="im">as</span> np</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">K, anim<span class="op">=</span>plot.animate_covariance_function(mlai.compute_kernel,
kernel<span class="op">=</span>eq_cov, lengthscale<span class="op">=</span><span class="fl">0.2</span>)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">from</span> IPython.core.display <span class="im">import</span> HTML</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">plot.save_animation(anim,
diagrams<span class="op">=</span><span class="st">'../slides/diagrams/kern'</span>,
filename<span class="op">=</span><span class="st">'eq_covariance.html'</span>)</code></pre></div>
<h3 id="exponentiated-quadratic-covariance">Exponentiated Quadratic Covariance</h3>
<p>The exponentiated quadratic covariance is also known as the Gaussian covariance, the RBF covariance, or the squared exponential covariance. The covariance between two points is related to the negative exponential of the squared distance between those points. This covariance function can be derived in a few different ways: as the infinite limit of a radial basis function neural network, as diffusion in the heat equation, as a Gaussian filter in <em>Fourier space</em>, or as the composition of a series of linear filters applied to a base function.</p>
The covariance takes the following form, <span class="math display">\[
\kernelScalar(\inputVector, \inputVector^\prime) = \alpha \exp\left(-\frac{\ltwoNorm{\inputVector - \inputVector^\prime}^2}{2\ell^2}\right)
\]</span> where <span class="math inline">\(\ell\)</span> is the <em>length scale</em> or <em>time scale</em> of the process and <span class="math inline">\(\alpha\)</span> represents the overall process variance.
<table>
<tr>
<td width="50%">
<object class="svgplot" align data="../slides/diagrams/kern/eq_covariance.svg">
</object>
</td>
<td width="50%">
<iframe src="../slides/diagrams/kern/eq_covariance.html" width="512" height="384" allowtransparency="true" frameborder="0">
</iframe>
</td>
</tr>
</table>
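<p>To get a feel for the role of the two parameters, the short sketch below draws samples from a Gaussian process with this covariance for two different length scales. It is a standalone illustration of the formula (the function name is chosen to avoid clashing with the <code>mlai</code> implementation loaded above).</p>
<pre><code>import numpy as np

def eq_cov_demo(X, X2, alpha=1.0, lengthscale=1.0):
    """Exponentiated quadratic covariance matrix between inputs X and X2."""
    r2 = ((X[:, None, :] - X2[None, :, :])**2).sum(-1)
    return alpha*np.exp(-0.5*r2/lengthscale**2)

X = np.linspace(-2, 2, 100)[:, None]
for lengthscale in [0.25, 1.0]:
    K = eq_cov_demo(X, X, lengthscale=lengthscale)
    # Jitter on the diagonal keeps the sampling numerically stable.
    sample = np.random.multivariate_normal(np.zeros(100), K + 1e-8*np.eye(100))
    print(lengthscale, sample[:3])</code></pre>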
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> numpy <span class="im">as</span> np
<span class="im">import</span> matplotlib.pyplot <span class="im">as</span> plt
<span class="im">import</span> pods
<span class="im">import</span> teaching_plots <span class="im">as</span> plot
<span class="im">import</span> mlai</code></pre></div>
<h3 id="olympic-marathon-data">Olympic Marathon Data</h3>
<p>The first thing we will do is load a standard data set for regression modelling. The data consists of the pace of Olympic Gold Medal Marathon winners for the Olympics from 1896 to present. First we load in the data and plot.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">data <span class="op">=</span> pods.datasets.olympic_marathon_men()
x <span class="op">=</span> data[<span class="st">'X'</span>]
y <span class="op">=</span> data[<span class="st">'Y'</span>]
offset <span class="op">=</span> y.mean()
scale <span class="op">=</span> np.sqrt(y.var())
xlim <span class="op">=</span> (<span class="dv">1875</span>,<span class="dv">2030</span>)
ylim <span class="op">=</span> (<span class="fl">2.5</span>, <span class="fl">6.5</span>)
yhat <span class="op">=</span> (y<span class="op">-</span>offset)<span class="op">/</span>scale
fig, ax <span class="op">=</span> plt.subplots(figsize<span class="op">=</span>plot.big_wide_figsize)
_ <span class="op">=</span> ax.plot(x, y, <span class="st">'r.'</span>,markersize<span class="op">=</span><span class="dv">10</span>)
ax.set_xlabel(<span class="st">'year'</span>, fontsize<span class="op">=</span><span class="dv">20</span>)
ax.set_ylabel(<span class="st">'pace min/km'</span>, fontsize<span class="op">=</span><span class="dv">20</span>)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
mlai.write_figure(figure<span class="op">=</span>fig, filename<span class="op">=</span><span class="st">'../slides/diagrams/datasets/olympic-marathon.svg'</span>, transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<h3 id="olympic-marathon-data-1">Olympic Marathon Data</h3>
<table>
<tr>
<td width="70%">
<ul>
<li><p>Gold medal times for Olympic Marathon since 1896.</p></li>
<li><p>Marathons before 1924 didn’t have a standardised distance.</p></li>
<li><p>Present results using pace per km.</p></li>
<li>In 1904 Marathon was badly organised leading to very slow times.</li>
</ul>
</td>
<td width="30%">
<img src="../slides/diagrams/Stephen_Kiprotich.jpg" alt="image" /> <small>Image from Wikimedia Commons <a href="http://bit.ly/16kMKHQ" class="uri">http://bit.ly/16kMKHQ</a></small>
</td>
</tr>
</table>
<object class="svgplot" align data="../slides/diagrams/datasets/olympic-marathon.svg">
</object>
<p>Things to notice about the data include the outlier in 1904: that year the Olympics was held in St Louis, USA. Organizational problems and challenges with dust kicked up by the cars following the race meant that participants got lost, and only a few completed the course.</p>
<p>More recent years see more consistently quick marathons.</p>
<p>Our first objective will be to perform a Gaussian process fit to the data, we'll do this using the <a href="https://github.com/SheffieldML/GPy">GPy software</a>.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">m_full <span class="op">=</span> GPy.models.GPRegression(x,yhat)
_ <span class="op">=</span> m_full.optimize() <span class="co"># Optimize parameters of covariance function</span></code></pre></div>
<p>The first command sets up the model, then</p>
<pre><code>m_full.optimize()</code></pre>
<p>optimizes the parameters of the covariance function and the noise level of the model. Once the fit is complete, we'll try creating some test points, and computing the output of the GP model in terms of the mean and standard deviation of the posterior functions between 1870 and 2030. We plot the mean function and the standard deviation at 200 locations. We can obtain the predictions using</p>
<pre><code>y_mean, y_var = m_full.predict(xt)</code></pre>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">xt <span class="op">=</span> np.linspace(<span class="dv">1870</span>,<span class="dv">2030</span>,<span class="dv">200</span>)[:,np.newaxis]
yt_mean, yt_var <span class="op">=</span> m_full.predict(xt)
yt_sd<span class="op">=</span>np.sqrt(yt_var)</code></pre></div>
<p>Now we plot the results using the helper function in <code>teaching_plots</code>.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> teaching_plots <span class="im">as</span> plot</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">fig, ax <span class="op">=</span> plt.subplots(figsize<span class="op">=</span>plot.big_wide_figsize)
plot.model_output(m_full, scale<span class="op">=</span>scale, offset<span class="op">=</span>offset, ax<span class="op">=</span>ax, xlabel<span class="op">=</span><span class="st">'year'</span>, ylabel<span class="op">=</span><span class="st">'pace min/km'</span>, fontsize<span class="op">=</span><span class="dv">20</span>, portion<span class="op">=</span><span class="fl">0.2</span>)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
mlai.write_figure(figure<span class="op">=</span>fig,
filename<span class="op">=</span><span class="st">'../slides/diagrams/gp/olympic-marathon-gp.svg'</span>,
transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<object class="svgplot" align data="../slides/diagrams/gp/olympic-marathon-gp.svg">
</object>
<h3 id="fit-quality">Fit Quality</h3>
<p>In the fit we see that the error bars (coming mainly from the noise variance) are quite large. This is likely due to the outlier point in 1904; ignoring that point, a tighter fit is obtained. To see this, we make a version of the model, <code>m_clean</code>, where that point is removed.</p>
<pre><code>x_clean=np.vstack((x[0:2, :], x[3:, :]))
y_clean=np.vstack((y[0:2, :], y[3:, :]))
m_clean = GPy.models.GPRegression(x_clean,y_clean)
_ = m_clean.optimize()</code></pre>
<p>Data is fine for answering very specific questions, like "Who won the Olympic Marathon in 2012?", because we have that answer stored; however, we are not given the answer to many other questions. For example, Alan Turing was a formidable marathon runner: in 1946 he ran a time of 2 hours 46 minutes (just under four minutes per kilometer, faster than I and most of the other <a href="http://www.parkrun.org.uk/sheffieldhallam/">Endcliffe Park Run</a> runners can manage over 5 km). What is the probability he would have won an Olympics if one had been held in 1946?</p>
<table>
<tr>
<td width="40%">
<img class="" src="../slides/diagrams/turing-run.jpg" width="40%" align="" style="background:none; border:none; box-shadow:none;">
</td>
<td width="50%">
<img class="" src="../slides/diagrams/turing-times.gif" width="50%" align="" style="background:none; border:none; box-shadow:none;">
</td>
</tr>
</table>
<center>
<em>Alan Turing, in 1946 he was only 11 minutes slower than the winner of the 1948 games. Would he have won a hypothetical games held in 1946? Source: <a href="http://www.turing.org.uk/scrapbook/run.html">Alan Turing Internet Scrapbook</a> </em>
</center>
<h3 id="basis-function-covariance">Basis Function Covariance</h3>
<p>The fixed basis function covariance comes directly from the properties of a multivariate Gaussian. If we decide <span class="math display">\[
\mappingFunctionVector=\basisMatrix\mappingVector
\]</span> and then we assume <span class="math display">\[
\mappingVector \sim \gaussianSamp{\zerosVector}{\alpha\eye}
\]</span> then it follows from the properties of a multivariate Gaussian that <span class="math display">\[
\mappingFunctionVector \sim \gaussianSamp{\zerosVector}{\alpha\basisMatrix\basisMatrix^\top}
\]</span> meaning that the vector of observations from the function is jointly distributed as a Gaussian process and the covariance matrix is <span class="math inline">\(\kernelMatrix = \alpha\basisMatrix \basisMatrix^\top\)</span>. Each element of the covariance matrix can then be found as the inner product between two rows of the basis function matrix, <span class="math display">\[
\kernel(\inputVector, \inputVector^\prime) = \basisVector(\inputVector)^\top \basisVector(\inputVector^\prime)
\]</span></p>
<table>
<tr>
<td width="45%">
<object class="svgplot" align data="../slides/diagrams/kern/basis_covariance.svg">
</object>
</td>
<td width="45%">
<img class="negate" src="../slides/diagrams/kern/basis_covariance.gif" width="40%" align="center" style="background:none; border:none; box-shadow:none;">
</td>
</tr>
</table>
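<p>A quick numerical check of this construction, using a fixed polynomial basis chosen purely for illustration: the covariance built from inner products of basis vectors matches <span class="math inline">\(\alpha\basisMatrix\basisMatrix^\top\)</span> element by element.</p>
<pre><code>import numpy as np

def polynomial_basis(X, degree=3):
    """Basis matrix with columns 1, x, x^2, ..., x^degree."""
    return np.hstack([X**d for d in range(degree + 1)])

X = np.linspace(-1, 1, 5)[:, None]
Phi = polynomial_basis(X)
alpha = 1.0

K = alpha*Phi @ Phi.T                  # full covariance matrix
k_01 = alpha*Phi[0, :] @ Phi[1, :]     # k(x_0, x_1) as an inner product
print(np.allclose(K[0, 1], k_01))</code></pre>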
<h3 id="brownian-covariance">Brownian Covariance</h3>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="op">%</span>load <span class="op">-</span>s brownian_cov mlai.py</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> teaching_plots <span class="im">as</span> plot
<span class="im">import</span> mlai
<span class="im">import</span> numpy <span class="im">as</span> np</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">t<span class="op">=</span>np.linspace(<span class="dv">0</span>, <span class="dv">2</span>, <span class="dv">200</span>)[:, np.newaxis]
K, anim<span class="op">=</span>plot.animate_covariance_function(mlai.compute_kernel,
t,
kernel<span class="op">=</span>brownian_cov)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">from</span> IPython.core.display <span class="im">import</span> HTML</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">plot.save_animation(anim,
diagrams<span class="op">=</span><span class="st">'../slides/diagrams/kern'</span>,
filename<span class="op">=</span><span class="st">'brownian_covariance.html'</span>)</code></pre></div>
Brownian motion is also a Gaussian process. It follows a Gaussian random walk, with diffusion occurring at each time point driven by a Gaussian input. This implies it is both Markov and Gaussian. The covariance function for Brownian motion has the form <span class="math display">\[
\kernelScalar(t, t^\prime) = \alpha \min(t, t^\prime)
\]</span>
<table>
<tr>
<td width="50%">
<object class="svgplot" align data="../slides/diagrams/kern/brownian_covariance.svg">
</object>
</td>
<td width="50%">
<iframe src="../slides/diagrams/kern/brownian_covariance.html" width="512" height="384" allowtransparency="true" frameborder="0">
</iframe>
</td>
</tr>
</table>
<center>
<em>The covariance of Brownian motion, and some samples from the covariance showing the functional form. </em>
</center>
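<p>The Brownian covariance is simple enough to write in one line. The sketch below builds the covariance matrix over a grid of times and draws a sample path from it; it is an independent illustration of the formula, not the <code>mlai</code> <code>brownian_cov</code> implementation used above.</p>
<pre><code>import numpy as np

def brownian_covariance(t, alpha=1.0):
    """Covariance matrix k(t, t') = alpha * min(t, t') over a vector of times."""
    return alpha*np.minimum(t[:, None], t[None, :])

t = np.linspace(0.01, 2, 200)
K = brownian_covariance(t)
# Sample a Brownian motion path; jitter keeps the factorization well conditioned.
path = np.random.multivariate_normal(np.zeros(len(t)), K + 1e-10*np.eye(len(t)))
print(path[:5])</code></pre>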
<h3 id="mlp-covariance">MLP Covariance</h3>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="op">%</span>load <span class="op">-</span>s mlp_cov mlai.py</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> teaching_plots <span class="im">as</span> plot
<span class="im">import</span> mlai
<span class="im">import</span> numpy <span class="im">as</span> np</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">K, anim<span class="op">=</span>plot.animate_covariance_function(mlai.compute_kernel,
kernel<span class="op">=</span>mlp_cov, lengthscale<span class="op">=</span><span class="fl">0.2</span>)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">from</span> IPython.core.display <span class="im">import</span> HTML</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">plot.save_animation(anim,
diagrams<span class="op">=</span><span class="st">'../slides/diagrams/kern'</span>,
filename<span class="op">=</span><span class="st">'mlp_covariance.html'</span>)</code></pre></div>
<p>The multi-layer perceptron (MLP) covariance, also known as the neural network covariance or the arcsin covariance, is derived by considering the infinite limit of a neural network. <span class="math display">\[
\kernelScalar(\inputVector, \inputVector^\prime) = \alpha \arcsin\left(\frac{w \inputVector^\top \inputVector^\prime + b}{\sqrt{\left(w \inputVector^\top \inputVector + b + 1\right)\left(w \left.\inputVector^\prime\right.^\top \inputVector^\prime + b + 1\right)}}\right)
\]</span></p>
<table>
<tr>
<td width="50%">
<object class="svgplot" align data="../slides/diagrams/kern/mlp_covariance.svg">
</object>
</td>
<td width="50%">
<iframe src="../slides/diagrams/kern/mlp_covariance.html" width="512" height="384" allowtransparency="true" frameborder="0">
</iframe>
</td>
</tr>
</table>
<center>
<em>The multi-layer perceptron covariance function. This is derived by considering the infinite limit of a neural network with probit activation functions. </em>
</center>
<div style="fontsize:120px;vertical-align:middle;">
<img src="../slides/diagrams/earth_PNG37.png" width="20%" style="display:inline-block;background:none;vertical-align:middle;border:none;box-shadow:none;"><span class="math inline">\(=f\Bigg(\)</span> <img src="../slides/diagrams/Planck_CMB.png" width="50%" style="display:inline-block;background:none;vertical-align:middle;border:none;box-shadow:none;"><span class="math inline">\(\Bigg)\)</span>
</div>
<div style="fontsize:120px;vertical-align:middle;">
<img src="../slides/diagrams/earth_PNG37.png" width="20%" style="display:inline-block;background:none;vertical-align:middle;border:none;box-shadow:none;"><span class="math inline">\(=f\Bigg(\)</span> <img src="../slides/diagrams/Planck_CMB.png" width="50%" style="display:inline-block;background:none;vertical-align:middle;border:none;box-shadow:none;"><span class="math inline">\(\Bigg)\)</span>
</div>
<center>
<em>The cosmic microwave background is, to a very high degree of precision, a Gaussian process. The parameters of its covariance function are given by fundamental parameters of the universe, such as the amounts of dark matter and ordinary matter. </em>
</center>
<h3 id="deep-gaussian-processes">Deep Gaussian Processes</h3>
<img class="" src="../slides/diagrams/sparse-gps.jpg" width="45%" align="center" style="background:none; border:none; box-shadow:none;">
<center>
<em>Image credit: Kai Arulkumaran </em>
</center>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> numpy <span class="im">as</span> np
<span class="im">import</span> matplotlib.pyplot <span class="im">as</span> plt
<span class="im">from</span> IPython.display <span class="im">import</span> display
<span class="im">import</span> GPy
<span class="im">import</span> mlai
<span class="im">import</span> teaching_plots <span class="im">as</span> plot
<span class="im">from</span> gp_tutorial <span class="im">import</span> gpplot</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">np.random.seed(<span class="dv">101</span>)</code></pre></div>
<h3 id="a-simple-regression-problem">A Simple Regression Problem</h3>
<p>Here we set up a simple one dimensional regression problem. The input locations, <span class="math inline">\(\inputMatrix\)</span>, are in two separate clusters. The response variable, <span class="math inline">\(\dataVector\)</span>, is sampled from a Gaussian process with an exponentiated quadratic covariance.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">N <span class="op">=</span> <span class="dv">50</span>
noise_var <span class="op">=</span> <span class="fl">0.01</span>
X <span class="op">=</span> np.zeros((<span class="dv">50</span>, <span class="dv">1</span>))
X[:<span class="dv">25</span>, :] <span class="op">=</span> np.linspace(<span class="dv">0</span>,<span class="dv">3</span>,<span class="dv">25</span>)[:,<span class="va">None</span>] <span class="co"># First cluster of inputs/covariates</span>
X[<span class="dv">25</span>:, :] <span class="op">=</span> np.linspace(<span class="dv">7</span>,<span class="dv">10</span>,<span class="dv">25</span>)[:,<span class="va">None</span>] <span class="co"># Second cluster of inputs/covariates</span>
<span class="co"># Sample response variables from a Gaussian process with exponentiated quadratic covariance.</span>
k <span class="op">=</span> GPy.kern.RBF(<span class="dv">1</span>)
y <span class="op">=</span> np.random.multivariate_normal(np.zeros(N),k.K(X)<span class="op">+</span>np.eye(N)<span class="op">*</span>np.sqrt(noise_var)).reshape(<span class="op">-</span><span class="dv">1</span>,<span class="dv">1</span>)</code></pre></div>
<p>First we perform a full Gaussian process regression on the data. We create a GP model, <code>m_full</code>, and fit it to the data, plotting the resulting fit.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">m_full <span class="op">=</span> GPy.models.GPRegression(X,y)
_ <span class="op">=</span> m_full.optimize(messages<span class="op">=</span><span class="va">True</span>) <span class="co"># Optimize parameters of covariance function</span></code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">fig, ax <span class="op">=</span> plt.subplots(figsize<span class="op">=</span>plot.big_wide_figsize)
plot.model_output(m_full, ax<span class="op">=</span>ax, xlabel<span class="op">=</span><span class="st">'$x$'</span>, ylabel<span class="op">=</span><span class="st">'$y$'</span>, fontsize<span class="op">=</span><span class="dv">20</span>, portion<span class="op">=</span><span class="fl">0.2</span>)
xlim <span class="op">=</span> ax.get_xlim()
ylim <span class="op">=</span> ax.get_ylim()
mlai.write_figure(figure<span class="op">=</span>fig,
filename<span class="op">=</span><span class="st">'../slides/diagrams/gp/sparse-demo-full-gp.svg'</span>,
transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<object class="svgplot" align data="../slides/diagrams/gp/sparse-demo-full-gp.svg">
</object>
<center>
<em>Full Gaussian process fitted to the data set. </em>
</center>
<p>Now we set up the inducing variables, <span class="math inline">\(\mathbf{u}\)</span>. Each inducing variable has its own associated input location, <span class="math inline">\(\mathbf{Z}\)</span>, which lives in the same space as <span class="math inline">\(\inputMatrix\)</span>. Here we are using the true covariance function parameters to generate the fit.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">kern <span class="op">=</span> GPy.kern.RBF(<span class="dv">1</span>)
Z <span class="op">=</span> np.hstack(
(np.linspace(<span class="fl">2.5</span>,<span class="dv">4</span>.,<span class="dv">3</span>),
np.linspace(<span class="dv">7</span>,<span class="fl">8.5</span>,<span class="dv">3</span>)))[:,<span class="va">None</span>]
m <span class="op">=</span> GPy.models.SparseGPRegression(X,y,kernel<span class="op">=</span>kern,Z<span class="op">=</span>Z)
m.noise_var <span class="op">=</span> noise_var
m.inducing_inputs.constrain_fixed()
display(m)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">fig, ax <span class="op">=</span> plt.subplots(figsize<span class="op">=</span>plot.big_wide_figsize)
plot.model_output(m, ax<span class="op">=</span>ax, xlabel<span class="op">=</span><span class="st">'$x$'</span>, ylabel<span class="op">=</span><span class="st">'$y$'</span>, fontsize<span class="op">=</span><span class="dv">20</span>, portion<span class="op">=</span><span class="fl">0.2</span>, xlim<span class="op">=</span>xlim, ylim<span class="op">=</span>ylim)
mlai.write_figure(figure<span class="op">=</span>fig,
filename<span class="op">=</span><span class="st">'../slides/diagrams/gp/sparse-demo-constrained-inducing-6-unlearned-gp.svg'</span>,
transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<object class="svgplot" align data="../slides/diagrams/gp/sparse-demo-constrained-inducing-6-unlearned-gp.svg">
</object>
<center>
<em>Sparse Gaussian process fitted with six inducing variables, no optimization of parameters or inducing variables. </em>
</center>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">_ <span class="op">=</span> m.optimize(messages<span class="op">=</span><span class="va">True</span>)
display(m)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">fig, ax <span class="op">=</span> plt.subplots(figsize<span class="op">=</span>plot.big_wide_figsize)
plot.model_output(m, ax<span class="op">=</span>ax, xlabel<span class="op">=</span><span class="st">'$x$'</span>, ylabel<span class="op">=</span><span class="st">'$y$'</span>, fontsize<span class="op">=</span><span class="dv">20</span>, portion<span class="op">=</span><span class="fl">0.2</span>, xlim<span class="op">=</span>xlim, ylim<span class="op">=</span>ylim)
mlai.write_figure(figure<span class="op">=</span>fig,
filename<span class="op">=</span><span class="st">'../slides/diagrams/gp/sparse-demo-full-gp.svg'</span>,
transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<object class="svgplot" align data="../slides/diagrams/gp/sparse-demo-constrained-inducing-6-learned-gp.svg">
</object>
<center>
<em>Gaussian process fitted with inducing variables fixed and parameters optimized </em>
</center>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">m.randomize()
m.inducing_inputs.unconstrain()
_ <span class="op">=</span> m.optimize(messages<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">fig, ax <span class="op">=</span> plt.subplots(figsize<span class="op">=</span>plot.big_wide_figsize)
plot.model_output(m, ax<span class="op">=</span>ax, xlabel<span class="op">=</span><span class="st">'$x$'</span>, ylabel<span class="op">=</span><span class="st">'$y$'</span>, fontsize<span class="op">=</span><span class="dv">20</span>, portion<span class="op">=</span><span class="fl">0.2</span>,xlim<span class="op">=</span>xlim, ylim<span class="op">=</span>ylim)
mlai.write_figure(figure<span class="op">=</span>fig,
filename<span class="op">=</span><span class="st">'../slides/diagrams/gp/sparse-demo-unconstrained-inducing-6-gp.svg'</span>,
transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<object class="svgplot" align data="../slides/diagrams/gp/sparse-demo-unconstrained-inducing-6-gp.svg">
</object>
<center>
<em>Gaussian process fitted with location of inducing variables and parameters both optimized </em>
</center>
<p>Now we will vary the number of inducing points used to form the approximation.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">m.num_inducing<span class="op">=</span><span class="dv">8</span>
m.randomize()
M <span class="op">=</span> <span class="dv">8</span>
m.set_Z(np.random.rand(M,<span class="dv">1</span>)<span class="op">*</span><span class="dv">12</span>)
_ <span class="op">=</span> m.optimize(messages<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">fig, ax <span class="op">=</span> plt.subplots(figsize<span class="op">=</span>plot.big_wide_figsize)
plot.model_output(m, ax<span class="op">=</span>ax, xlabel<span class="op">=</span><span class="st">'$x$'</span>, ylabel<span class="op">=</span><span class="st">'$y$'</span>, fontsize<span class="op">=</span><span class="dv">20</span>, portion<span class="op">=</span><span class="fl">0.2</span>, xlim<span class="op">=</span>xlim, ylim<span class="op">=</span>ylim)
mlai.write_figure(figure<span class="op">=</span>fig,
filename<span class="op">=</span><span class="st">'../slides/diagrams/gp/sparse-demo-sparse-inducing-8-gp.svg'</span>,
transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<object class="svgplot" align data="../slides/diagrams/gp/sparse-demo-sparse-inducing-8-gp.svg">
</object>
<object class="svgplot" align data="../slides/diagrams/gp/sparse-demo-full-gp.svg">
</object>
<center>
<em>Comparison of the full Gaussian process fit with a sparse Gaussian process using eight inducing variables. Both inducing variables and parameters are optimized. </em>
</center>
<p>And we can compare the log likelihood of the sparse approximation with that of the full model.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="bu">print</span>(m.log_likelihood(), m_full.log_likelihood())</code></pre></div>
<h3 id="modern-review">Modern Review</h3>
<ul>
<li><p><em>A Unifying Framework for Gaussian Process Pseudo-Point Approximations using Power Expectation Propagation</em> <span class="citation">Bui et al. (2017)</span></p></li>
<li><p><em>Deep Gaussian Processes and Variational Propagation of Uncertainty</em> <span class="citation">Damianou (2015)</span></p></li>
</ul>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> teaching_plots <span class="im">as</span> plot</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">plot.deep_nn(diagrams<span class="op">=</span><span class="st">'../slides/diagrams/deepgp/'</span>)</code></pre></div>
<object class="svgplot" align data="../slides/diagrams/deepgp/deep-nn2.svg">
</object>
<center>
<em>A deep neural network. Input nodes are shown at the bottom. Each hidden layer is the result of applying an affine transformation to the previous layer and passing it through an activation function. </em>
</center>
<p>Mathematically, each layer of a neural network is computed by applying the activation function, <span class="math inline">\(\basisFunction(\cdot)\)</span>, to an affine transformation of the previous layer, or of the inputs. In this way the activation functions are composed to generate more complex interactions than would be possible with any single layer. <span class="math display">\[
\begin{align}
\hiddenVector_{1} &= \basisFunction\left(\mappingMatrix_1 \inputVector\right)\\
\hiddenVector_{2} &= \basisFunction\left(\mappingMatrix_2\hiddenVector_{1}\right)\\
\hiddenVector_{3} &= \basisFunction\left(\mappingMatrix_3 \hiddenVector_{2}\right)\\
\dataVector &= \mappingVector_4 ^\top\hiddenVector_{3}
\end{align}
\]</span></p>
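<p>As a minimal numpy sketch of the composition above (the matrix shapes and the choice of tanh as the activation are illustrative assumptions, not taken from the lecture code):</p>
<pre><code>import numpy as np

def forward(x, W1, W2, W3, w4, activation=np.tanh):
    """Forward pass through the three hidden layer network described above."""
    h1 = activation(W1 @ x)   # first hidden layer
    h2 = activation(W2 @ h1)  # second hidden layer
    h3 = activation(W3 @ h2)  # third hidden layer
    return w4.T @ h3          # linear output layer</code></pre>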
<h3 id="overfitting">Overfitting</h3>
<p>One potential problem is that as the number of nodes in two adjacent layers increases, the number of parameters in the affine transformation between layers, <span class="math inline">\(\mappingMatrix\)</span>, increases. If there are <span class="math inline">\(k_{i-1}\)</span> nodes in one layer and <span class="math inline">\(k_i\)</span> nodes in the following, then that matrix contains <span class="math inline">\(k_i k_{i-1}\)</span> parameters; with layer widths in the thousands this quickly leads to millions of parameters.</p>
<p>One proposed solution is known as <em>dropout</em>, where only a sub-set of the neural network is trained at each iteration. An alternative solution would be to reparameterize <span class="math inline">\(\mappingMatrix\)</span> with its <em>singular value decomposition</em>. <span class="math display">\[
\mappingMatrix = \eigenvectorMatrix\eigenvalueMatrix\eigenvectwoMatrix^\top
\]</span> or <span class="math display">\[
\mappingMatrix = \eigenvectorMatrix\eigenvectwoMatrix^\top
\]</span> where if <span class="math inline">\(\mappingMatrix \in \Re^{k_1\times k_2}\)</span> then <span class="math inline">\(\eigenvectorMatrix\in \Re^{k_1\times q}\)</span> and <span class="math inline">\(\eigenvectwoMatrix \in \Re^{k_2\times q}\)</span>, i.e. we have a low rank matrix factorization for the weights.</p>
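<p>A minimal numpy sketch of this low rank reparameterization, using a truncated singular value decomposition; the matrix sizes and the rank <code>q</code> are illustrative.</p>
<pre><code>import numpy as np

k1, k2, q = 1000, 1000, 50
W = np.random.randn(k1, k2)

# Truncated SVD: W is approximated by U_q V_q^T of rank q.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
U_q = U[:, :q]*s[:q]       # absorb the singular values into U
V_q = Vt[:q, :].T          # k2 x q
W_lowrank = U_q @ V_q.T    # rank-q approximation: 2*q*1000 parameters rather than 10^6</code></pre>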
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> teaching_plots <span class="im">as</span> plot</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">plot.low_rank_approximation(diagrams<span class="op">=</span><span class="st">'../slides/diagrams'</span>)</code></pre></div>
<object class="svgplot" align data="../slides/diagrams/wisuvt.svg">
</object>
<center>
<em>Pictorial representation of the low rank form of the matrix <span class="math inline">\(\mappingMatrix\)</span> </em>
</center>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> teaching_plots <span class="im">as</span> plot</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">plot.deep_nn_bottleneck(diagrams<span class="op">=</span><span class="st">'../slides/diagrams/deepgp'</span>)</code></pre></div>
<object class="svgplot" align data="../slides/diagrams/deepgp/deep-nn-bottleneck2.svg">
</object>
<p>Including the low rank decomposition of <span class="math inline">\(\mappingMatrix\)</span> in the neural network, we obtain a new mathematical form. Effectively, we are adding additional <em>latent</em> layers, <span class="math inline">\(\latentVector\)</span>, in between each of the existing hidden layers. In a neural network these are sometimes known as <em>bottleneck</em> layers. The network can now be written mathematically as <span class="math display">\[
\begin{align}
\latentVector_{1} &= \eigenvectwoMatrix^\top_1 \inputVector\\
\hiddenVector_{1} &= \basisFunction\left(\eigenvectorMatrix_1 \latentVector_{1}\right)\\
\latentVector_{2} &= \eigenvectwoMatrix^\top_2 \hiddenVector_{1}\\
\hiddenVector_{2} &= \basisFunction\left(\eigenvectorMatrix_2 \latentVector_{2}\right)\\
\latentVector_{3} &= \eigenvectwoMatrix^\top_3 \hiddenVector_{2}\\
\hiddenVector_{3} &= \basisFunction\left(\eigenvectorMatrix_3 \latentVector_{3}\right)\\
\dataVector &= \mappingVector_4^\top\hiddenVector_{3}.
\end{align}
\]</span></p>
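<p>The bottleneck form is only a small change to the earlier forward pass sketch: each weight matrix is replaced by the product of its two low rank factors (again with tanh as an assumed activation).</p>
<pre><code>import numpy as np

def forward_bottleneck(x, Vs, Us, w4, activation=np.tanh):
    """Forward pass with bottleneck layers: z_i = V_i^T h_{i-1}, h_i = phi(U_i z_i)."""
    h = x
    for V, U in zip(Vs, Us):
        z = V.T @ h             # project down to the latent (bottleneck) layer
        h = activation(U @ z)   # expand back up through the activation
    return w4.T @ h             # linear output layer</code></pre>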
<h3 id="a-cascade-of-neural-networks">A Cascade of Neural Networks</h3>
<p><span class="math display">\[
\begin{align}
\latentVector_{1} &= \eigenvectwoMatrix^\top_1 \inputVector\\
\latentVector_{2} &= \eigenvectwoMatrix^\top_2 \basisFunction\left(\eigenvectorMatrix_1 \latentVector_{1}\right)\\
\latentVector_{3} &= \eigenvectwoMatrix^\top_3 \basisFunction\left(\eigenvectorMatrix_2 \latentVector_{2}\right)\\
\dataVector &= \mappingVector_4 ^\top \latentVector_{3}
\end{align}
\]</span></p>
<h3 id="cascade-of-gaussian-processes">Cascade of Gaussian Processes</h3>
<ul>
<li><p>Replace each neural network with a Gaussian process <span class="math display">\[
\begin{align}
\latentVector_{1} &= \mappingFunctionVector_1\left(\inputVector\right)\\
\latentVector_{2} &= \mappingFunctionVector_2\left(\latentVector_{1}\right)\\
\latentVector_{3} &= \mappingFunctionVector_3\left(\latentVector_{2}\right)\\
\dataVector &= \mappingFunctionVector_4\left(\latentVector_{3}\right)
\end{align}
\]</span></p></li>
<li><p>Equivalent to prior over parameters, take width of each layer to infinity.</p></li>
</ul>
<p>Mathematically, a deep Gaussian process can be seen as a composite <em>multivariate</em> function, <span class="math display">\[
\mathbf{g}(\inputVector)=\mappingFunctionVector_5(\mappingFunctionVector_4(\mappingFunctionVector_3(\mappingFunctionVector_2(\mappingFunctionVector_1(\inputVector))))).
\]</span> Or, if we view it from the probabilistic perspective, we can see that a deep Gaussian process is specifying a factorization of the joint density; the standard deep model takes the form of a Markov chain.</p>
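<p>To make the composition concrete, here is a minimal sketch that pushes a grid of inputs through a cascade of Gaussian process priors using GPy; the number of layers, the kernel and the input grid are illustrative choices, and this is not the variational deep GP fitted later in the notes.</p>
<pre><code>import numpy as np
import GPy

np.random.seed(0)
x = np.linspace(-1, 1, 200)[:, None]
f = x
# Each layer is a draw from a GP prior whose inputs are the previous layer's values.
for layer in range(5):
    K = GPy.kern.RBF(1, lengthscale=0.5).K(f)
    f = np.random.multivariate_normal(np.zeros(f.shape[0]), K + 1e-8*np.eye(f.shape[0]))[:, None]
g = f  # a sample of g(x) = f_5(f_4(f_3(f_2(f_1(x)))))</code></pre>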
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">from</span> matplotlib <span class="im">import</span> rc
rc(<span class="st">"font"</span>, <span class="op">**</span>{<span class="st">'family'</span>:<span class="st">'sans-serif'</span>,<span class="st">'sans-serif'</span>:[<span class="st">'Helvetica'</span>],<span class="st">'size'</span>:<span class="dv">30</span>})
rc(<span class="st">"text"</span>, usetex<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">pgm <span class="op">=</span> plot.horizontal_chain(depth<span class="op">=</span><span class="dv">5</span>)
pgm.render().figure.savefig(<span class="st">"../slides/diagrams/deepgp/deep-markov.svg"</span>, transparent<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<p><span class="math display">\[
p(\dataVector|\inputVector)= p(\dataVector|\mappingFunctionVector_5)p(\mappingFunctionVector_5|\mappingFunctionVector_4)p(\mappingFunctionVector_4|\mappingFunctionVector_3)p(\mappingFunctionVector_3|\mappingFunctionVector_2)p(\mappingFunctionVector_2|\mappingFunctionVector_1)p(\mappingFunctionVector_1|\inputVector)
\]</span></p>
<object class="svgplot" align data="../slides/diagrams/deepgp/deep-markov.svg">
</object>
<center>
<em>Probabilistically the deep Gaussian process can be represented as a Markov chain. </em>
</center>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">from</span> matplotlib <span class="im">import</span> rc
rc(<span class="st">"font"</span>, <span class="op">**</span>{<span class="st">'family'</span>:<span class="st">'sans-serif'</span>,<span class="st">'sans-serif'</span>:[<span class="st">'Helvetica'</span>], <span class="st">'size'</span>:<span class="dv">15</span>})
rc(<span class="st">"text"</span>, usetex<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">pgm <span class="op">=</span> plot.vertical_chain(depth<span class="op">=</span><span class="dv">5</span>)
pgm.render().figure.savefig(<span class="st">"../slides/diagrams/deepgp/deep-markov-vertical.svg"</span>, transparent<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<object class="svgplot" align data="../slides/diagrams/deepgp/deep-markov-vertical.svg">
</object>
<h3 id="why-deep">Why Deep?</h3>
<p>If the result of composing many functions together is simply another function, then why do we bother? The key point is that we can change the class of functions we are modeling by composing in this manner. A Gaussian process is specifying a prior over functions, and one with a number of elegant properties. For example, the derivative process (if it exists) of a Gaussian process is also Gaussian distributed. That makes it easy to assimilate, for example, derivative observations. But that also might raise some alarm bells. It implies that the <em>marginal derivative distribution</em> is also Gaussian distributed. If that's the case, then it means that functions which occasionally exhibit very large derivatives are hard to model with a Gaussian process: for example, a function with jumps in it.</p>
<p>A one-off discontinuity is easy to model with a Gaussian process, or even multiple discontinuities. They can be introduced in the mean function, or independence can be forced between two covariance functions that apply in different areas of the input space. But in these cases we will need to specify the number of discontinuities and where they occur. In other words we need to <em>parameterise</em> the discontinuities. If we do not know the number of discontinuities and don't wish to specify where they occur, i.e. if we want a non-parametric representation of discontinuities, then the standard Gaussian process doesn't help.</p>
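<p>As a sketch of the parameterised approach just mentioned, we can force independence between two regions of the input by zeroing the covariance across a known changepoint <code>c</code>; the changepoint location and the base kernel are illustrative assumptions, and this is not a covariance function shipped with the lecture code.</p>
<pre><code>import numpy as np
import GPy

def split_covariance(X, X2=None, c=5.0, kern=None):
    """Exponentiated quadratic covariance with zero covariance across the changepoint c."""
    if kern is None:
        kern = GPy.kern.RBF(1)
    if X2 is None:
        X2 = X
    K = kern.K(X, X2)
    same_side = (X > c) == (X2 > c).T   # True when both inputs lie on the same side of c
    return K*same_side                  # covariance across the discontinuity is forced to zero</code></pre>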
<h3 id="stochastic-process-composition">Stochastic Process Composition</h3>
<p>The deep Gaussian process leads to <em>non-Gaussian</em> models, and non-Gaussian characteristics in the covariance function. In effect, what we are proposing is that we change the properties of the functions we are considering by <em>composing stochastic processes</em>. This is an approach to creating new stochastic processes from well known processes.</p>
<object class="svgplot" align data="../slides/diagrams/deepgp/deep-markov-vertical.svg">
</object>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">pgm <span class="op">=</span> plot.vertical_chain(depth<span class="op">=</span><span class="dv">5</span>, shape<span class="op">=</span>[<span class="dv">2</span>, <span class="dv">7</span>])
pgm.add_node(daft.Node(<span class="st">'y_2'</span>, <span class="vs">r'$\mathbf</span><span class="sc">{y}</span><span class="vs">_2$'</span>, <span class="fl">1.5</span>, <span class="fl">3.5</span>, observed<span class="op">=</span><span class="va">True</span>))
pgm.add_edge(<span class="st">'f_2'</span>, <span class="st">'y_2'</span>)
pgm.render().figure.savefig(<span class="st">"../slides/diagrams/deepgp/deep-markov-vertical-side.svg"</span>, transparent<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<p>Additionally, we are not constrained to the formalism of the chain. For example, we can easily add single nodes emerging from some point in the depth of the chain. This allows us to combine the benefits of the graphical modelling formalism with a powerful framework for relating one set of variables to another, that of Gaussian processes. <object class="svgplot" align="" data="../slides/diagrams/deepgp/deep-markov-vertical-side.svg"></object></p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">plot.non_linear_difficulty_plot_3(diagrams<span class="op">=</span><span class="st">'../../slides/diagrams/dimred/'</span>)</code></pre></div>
<h3 id="difficulty-for-probabilistic-approaches" data-transition="None">Difficulty for Probabilistic Approaches</h3>
<ul>
<li><p>Propagate a probability distribution through a non-linear mapping.</p></li>
<li><p>Normalisation of distribution becomes intractable.</p></li>
</ul>
<object class="svgplot" align="center" data="../slides/diagrams/dimred/nonlinear-mapping-3d-plot.svg">
</object>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">plot.non_linear_difficulty_plot_2(diagrams<span class="op">=</span><span class="st">'../../slides/diagrams/dimred/'</span>)</code></pre></div>
<h3 id="difficulty-for-probabilistic-approaches-1" data-transition="None">Difficulty for Probabilistic Approaches</h3>
<ul>
<li><p>Propagate a probability distribution through a non-linear mapping.</p></li>
<li><p>Normalisation of distribution becomes intractable.</p></li>
</ul>
<object class="svgplot" align="center" data="../slides/diagrams/dimred/nonlinear-mapping-2d-plot.svg">
</object>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">plot.non_linear_difficulty_plot_1(diagrams<span class="op">=</span><span class="st">'../../slides/diagrams/dimred'</span>)</code></pre></div>
<h3 id="difficulty-for-probabilistic-approaches-2" data-transition="None">Difficulty for Probabilistic Approaches</h3>
<ul>
<li><p>Propagate a probability distribution through a non-linear mapping.</p></li>
<li><p>Normalisation of distribution becomes intractable.</p></li>
</ul>
<object class="svgplot" align="center" data="../slides/diagrams/dimred/gaussian-through-nonlinear.svg">
</object>
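<p>A minimal Monte Carlo illustration of the difficulty: push samples from a Gaussian through a non-linear map and the result is no longer Gaussian, so its density is not available in closed form. The particular non-linearity used here is an illustrative assumption.</p>
<pre><code>import numpy as np

np.random.seed(0)
x = np.random.normal(0., 1., size=10000)   # Gaussian distributed input
y = np.sin(2.*x) + 0.1*x**3                # non-linear mapping

# The sample moments are easy to compute, but they no longer characterise
# the distribution of y, which is non-Gaussian.
print(y.mean(), y.std())</code></pre>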
<h3 id="deep-gaussian-processes-1">Deep Gaussian Processes</h3>
<ul>
<li><p>Deep architectures allow abstraction of features <span class="citation">(Bengio, 2009; Hinton and Osindero, 2006; Salakhutdinov and Murray, n.d.)</span></p></li>
<li><p>We use variational approach to stack GP models.</p></li>
</ul>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">plot.stack_gp_sample(kernel<span class="op">=</span>GPy.kern.Linear,
diagrams<span class="op">=</span><span class="st">"../../slides/diagrams/deepgp"</span>)</code></pre></div>
<h3 id="stacked-pca">Stacked PCA</h3>
<object class="svgplot" align data="../slides/diagrams/stack-pca-sample-4.svg">
</object>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">plot.stack_gp_sample(kernel<span class="op">=</span>GPy.kern.RBF,
diagrams<span class="op">=</span><span class="st">"../../slides/diagrams/deepgp"</span>)</code></pre></div>
<h3 id="stacked-gp">Stacked GP</h3>
<object class="svgplot" align data="../slides/diagrams/stack-gp-sample-4.svg">
</object>
<h3 id="analysis-of-deep-gps">Analysis of Deep GPs</h3>
<ul>
<li><p><em>Avoiding pathologies in very deep networks</em> <span class="citation">Duvenaud et al. (2014)</span> show that the derivative distribution of the process becomes more <em>heavy tailed</em> as the number of layers increases.</p></li>
<li><p><em>How Deep Are Deep Gaussian Processes?</em> <span class="citation">Dunlop et al. (2017)</span> perform a theoretical analysis, made possible by the conditional Gaussian Markov property.</p></li>
</ul>
<p><a href="https://www.youtube.com/watch?v=XhIvygQYFFQ&t="><img src="https://img.youtube.com/vi/XhIvygQYFFQ/0.jpg" /></a></p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> numpy <span class="im">as</span> np
<span class="im">import</span> matplotlib.pyplot <span class="im">as</span> plt
<span class="im">import</span> pods
<span class="im">import</span> teaching_plots <span class="im">as</span> plot
<span class="im">import</span> mlai</code></pre></div>
<h3 id="olympic-marathon-data-2">Olympic Marathon Data</h3>
<p>The first thing we will do is load a standard data set for regression modelling. The data consists of the pace of Olympic Gold Medal Marathon winners for the Olympics from 1896 to present. First we load in the data and plot.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">data <span class="op">=</span> pods.datasets.olympic_marathon_men()
x <span class="op">=</span> data[<span class="st">'X'</span>]
y <span class="op">=</span> data[<span class="st">'Y'</span>]
offset <span class="op">=</span> y.mean()
scale <span class="op">=</span> np.sqrt(y.var())
xlim <span class="op">=</span> (<span class="dv">1875</span>,<span class="dv">2030</span>)
ylim <span class="op">=</span> (<span class="fl">2.5</span>, <span class="fl">6.5</span>)
yhat <span class="op">=</span> (y<span class="op">-</span>offset)<span class="op">/</span>scale
fig, ax <span class="op">=</span> plt.subplots(figsize<span class="op">=</span>plot.big_wide_figsize)
_ <span class="op">=</span> ax.plot(x, y, <span class="st">'r.'</span>,markersize<span class="op">=</span><span class="dv">10</span>)
ax.set_xlabel(<span class="st">'year'</span>, fontsize<span class="op">=</span><span class="dv">20</span>)
ax.set_ylabel(<span class="st">'pace min/km'</span>, fontsize<span class="op">=</span><span class="dv">20</span>)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
mlai.write_figure(figure<span class="op">=</span>fig, filename<span class="op">=</span><span class="st">'../slides/diagrams/datasets/olympic-marathon.svg'</span>, transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<h3 id="olympic-marathon-data-3">Olympic Marathon Data</h3>
<table>
<tr>
<td width="70%">
<ul>
<li><p>Gold medal times for Olympic Marathon since 1896.</p></li>
<li><p>Marathons before 1924 didn’t have a standardised distance.</p></li>
<li><p>Present results using pace per km.</p></li>
<li>In 1904 the Marathon was badly organised, leading to very slow times.</li>
</ul>
</td>
<td width="30%">
<img src="../slides/diagrams/Stephen_Kiprotich.jpg" alt="image" /> <small>Image from Wikimedia Commons <a href="http://bit.ly/16kMKHQ" class="uri">http://bit.ly/16kMKHQ</a></small>
</td>
</tr>
</table>
<object class="svgplot" align data="../slides/diagrams/datasets/olympic-marathon.svg">
</object>
<p>Things to notice about the data include the outlier in 1904; in this year the Olympics was held in St Louis, USA. Organizational problems and challenges with dust kicked up by the cars following the race meant that runners got lost, and only very few completed the course.</p>
<p>More recent years see more consistently quick marathons.</p>
<p>Our first objective will be to perform a Gaussian process fit to the data; we'll do this using the <a href="https://github.com/SheffieldML/GPy">GPy software</a>.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">m_full <span class="op">=</span> GPy.models.GPRegression(x,yhat)
_ <span class="op">=</span> m_full.optimize() <span class="co"># Optimize parameters of covariance function</span></code></pre></div>
<p>The first command sets up the model, then</p>
<pre><code>m_full.optimize()</code></pre>
<p>optimizes the parameters of the covariance function and the noise level of the model. Once the fit is complete, we'll try creating some test points, and computing the output of the GP model in terms of the mean and standard deviation of the posterior functions between 1870 and 2030. We plot the mean function and the standard deviation at 200 locations. We can obtain the predictions using</p>
<pre><code>y_mean, y_var = m_full.predict(xt)</code></pre>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">xt <span class="op">=</span> np.linspace(<span class="dv">1870</span>,<span class="dv">2030</span>,<span class="dv">200</span>)[:,np.newaxis]
yt_mean, yt_var <span class="op">=</span> m_full.predict(xt)
yt_sd<span class="op">=</span>np.sqrt(yt_var)</code></pre></div>
<p>Now we plot the results using the helper function in <code>teaching_plots</code>.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="im">import</span> teaching_plots <span class="im">as</span> plot</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">fig, ax <span class="op">=</span> plt.subplots(figsize<span class="op">=</span>plot.big_wide_figsize)
plot.model_output(m_full, scale<span class="op">=</span>scale, offset<span class="op">=</span>offset, ax<span class="op">=</span>ax, xlabel<span class="op">=</span><span class="st">'year'</span>, ylabel<span class="op">=</span><span class="st">'pace min/km'</span>, fontsize<span class="op">=</span><span class="dv">20</span>, portion<span class="op">=</span><span class="fl">0.2</span>)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
mlai.write_figure(figure<span class="op">=</span>fig,
filename<span class="op">=</span><span class="st">'../slides/diagrams/gp/olympic-marathon-gp.svg'</span>,
transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<object class="svgplot" align data="../slides/diagrams/gp/olympic-marathon-gp.svg">
</object>
<h3 id="fit-quality-1">Fit Quality</h3>
<p>In the fit we see that the error bars (coming mainly from the noise variance) are quite large. This is likely due to the outlier point in 1904; ignoring that point, a tighter fit is obtained. To see this, we make a version of the model, <code>m_clean</code>, where that point is removed.</p>
<pre><code>x_clean=np.vstack((x[0:2, :], x[3:, :]))
y_clean=np.vstack((y[0:2, :], y[3:, :]))
m_clean = GPy.models.GPRegression(x_clean,y_clean)
_ = m_clean.optimize()</code></pre>
<p>Data is fine for answering very specific questions, like "Who won the Olympic Marathon in 2012?", because we have that answer stored. However, we are not given the answer to many other questions. For example, Alan Turing was a formidable marathon runner; in 1946 he ran a time of 2 hours 46 minutes (just under four minutes per kilometer, faster than I and most of the other <a href="http://www.parkrun.org.uk/sheffieldhallam/">Endcliffe Park Run</a> runners can do 5 km). What is the probability he would have won an Olympics if one had been held in 1946?</p>
<table>
<tr>
<td width="40%">
<img class="" src="../slides/diagrams/turing-run.jpg" width="40%" align="" style="background:none; border:none; box-shadow:none;">
</td>
<td width="50%">
<img class="" src="../slides/diagrams/turing-times.gif" width="50%" align="" style="background:none; border:none; box-shadow:none;">
</td>
</tr>
</table>
<center>
<em>Alan Turing: in 1946 he was only 11 minutes slower than the winner of the 1948 games. Would he have won a hypothetical games held in 1946? Source: <a href="http://www.turing.org.uk/scrapbook/run.html">Alan Turing Internet Scrapbook</a> </em>
</center>
<h3 id="deep-gp-fit">Deep GP Fit</h3>
<p>Let's see if a deep Gaussian process can help here. We will construct a deep Gaussian process with one hidden layer (i.e. one Gaussian process feeding into another).</p>
<p>Build a Deep GP with an additional hidden layer (one dimensional) to fit the model.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">hidden <span class="op">=</span> <span class="dv">1</span>
m <span class="op">=</span> deepgp.DeepGP([y.shape[<span class="dv">1</span>],hidden,x.shape[<span class="dv">1</span>]],Y<span class="op">=</span>yhat, X<span class="op">=</span>x, inits<span class="op">=</span>[<span class="st">'PCA'</span>,<span class="st">'PCA'</span>],
kernels<span class="op">=</span>[GPy.kern.RBF(hidden,ARD<span class="op">=</span><span class="va">True</span>),
GPy.kern.RBF(x.shape[<span class="dv">1</span>],ARD<span class="op">=</span><span class="va">True</span>)], <span class="co"># the kernels for each layer</span>
num_inducing<span class="op">=</span><span class="dv">50</span>, back_constraint<span class="op">=</span><span class="va">False</span>)</code></pre></div>
<p>Deep Gaussian process models can also require some thought in initialization. Here we choose to start by setting the noise variance to be one percent of the data variance.</p>
<p>Optimization requires moving the variational parameters in the hidden layer, which represent the mean and variance of the expected values in that layer. All of those values can be scaled up, and this only results in a downscaling of the output of the first GP and a downscaling of the input length scale to the second GP. It therefore makes sense to first of all fix the scales of the covariance function in each of the GPs.</p>
<p>Sometimes, deep Gaussian processes can find a local minimum which involves increasing the noise level of one or more of the GPs. This often occurs because it allows the KL divergence term in the lower bound on the likelihood to be reduced. To avoid this minimum we habitually train with the likelihood variance (the noise on the output of the GP) fixed to some lower value for some iterations.</p>
<p>Let's create a helper function to initialize the models we use in the notebook.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="kw">def</span> initialize(<span class="va">self</span>, noise_factor<span class="op">=</span><span class="fl">0.01</span>, linear_factor<span class="op">=</span><span class="dv">1</span>):
<span class="co">"""Helper function for deep model initialization."""</span>
<span class="va">self</span>.obslayer.likelihood.variance <span class="op">=</span> <span class="va">self</span>.Y.var()<span class="op">*</span>noise_factor
<span class="cf">for</span> layer <span class="kw">in</span> <span class="va">self</span>.layers:
<span class="cf">if</span> <span class="bu">type</span>(layer.X) <span class="kw">is</span> GPy.core.parameterization.variational.NormalPosterior:
<span class="cf">if</span> layer.kern.ARD:
var <span class="op">=</span> layer.X.mean.var(<span class="dv">0</span>)
<span class="cf">else</span>:
var <span class="op">=</span> layer.X.mean.var()
<span class="cf">else</span>:
<span class="cf">if</span> layer.kern.ARD:
var <span class="op">=</span> layer.X.var(<span class="dv">0</span>)
<span class="cf">else</span>:
var <span class="op">=</span> layer.X.var()
<span class="co"># Average 0.5 upcrossings in four standard deviations. </span>
layer.kern.lengthscale <span class="op">=</span> linear_factor<span class="op">*</span>np.sqrt(layer.kern.input_dim)<span class="op">*</span><span class="dv">2</span><span class="op">*</span><span class="dv">4</span><span class="op">*</span>np.sqrt(var)<span class="op">/</span>(<span class="dv">2</span><span class="op">*</span>np.pi)
<span class="co"># Bind the new method to the Deep GP object.</span>
deepgp.DeepGP.initialize<span class="op">=</span>initialize</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="co"># Call the initalization</span>
m.initialize()</code></pre></div>
<p>Now optimize the model. The first stage of optimization is working on variational parameters and lengthscales only.</p>
<pre><code>m.optimize(messages=False,max_iters=100)</code></pre>
<p>Now we remove the constraints on the scale of the covariance functions associated with each GP and optimize again.</p>
<pre><code>for layer in m.layers:
pass #layer.kern.variance.constrain_positive(warning=False)
m.obslayer.kern.variance.constrain_positive(warning=False)
m.optimize(messages=False,max_iters=100)</code></pre>
<p>Finally, we allow the noise variance to change and optimize for a large number of iterations.</p>
<pre><code>for layer in m.layers:
layer.likelihood.variance.constrain_positive(warning=False)
m.optimize(messages=True,max_iters=10000)</code></pre>
<p>For our optimization process we define a new function.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="kw">def</span> staged_optimize(<span class="va">self</span>, iters<span class="op">=</span>(<span class="dv">1000</span>,<span class="dv">1000</span>,<span class="dv">10000</span>), messages<span class="op">=</span>(<span class="va">False</span>, <span class="va">False</span>, <span class="va">True</span>)):
<span class="co">"""Optimize with parameters constrained and then with parameters released"""</span>
<span class="cf">for</span> layer <span class="kw">in</span> <span class="va">self</span>.layers:
<span class="co"># Fix the scale of each of the covariance functions.</span>
layer.kern.variance.fix(warning<span class="op">=</span><span class="va">False</span>)
layer.kern.lengthscale.fix(warning<span class="op">=</span><span class="va">False</span>)
<span class="co"># Fix the variance of the noise in each layer.</span>
layer.likelihood.variance.fix(warning<span class="op">=</span><span class="va">False</span>)
<span class="va">self</span>.optimize(messages<span class="op">=</span>messages[<span class="dv">0</span>],max_iters<span class="op">=</span>iters[<span class="dv">0</span>])
<span class="cf">for</span> layer <span class="kw">in</span> <span class="va">self</span>.layers:
layer.kern.lengthscale.constrain_positive(warning<span class="op">=</span><span class="va">False</span>)
<span class="va">self</span>.obslayer.kern.variance.constrain_positive(warning<span class="op">=</span><span class="va">False</span>)
<span class="va">self</span>.optimize(messages<span class="op">=</span>messages[<span class="dv">1</span>],max_iters<span class="op">=</span>iters[<span class="dv">1</span>])
<span class="cf">for</span> layer <span class="kw">in</span> <span class="va">self</span>.layers:
layer.kern.variance.constrain_positive(warning<span class="op">=</span><span class="va">False</span>)
layer.likelihood.variance.constrain_positive(warning<span class="op">=</span><span class="va">False</span>)
<span class="va">self</span>.optimize(messages<span class="op">=</span>messages[<span class="dv">2</span>],max_iters<span class="op">=</span>iters[<span class="dv">2</span>])
<span class="co"># Bind the new method to the Deep GP object.</span>
deepgp.DeepGP.staged_optimize<span class="op">=</span>staged_optimize</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">m.staged_optimize(messages<span class="op">=</span>(<span class="va">True</span>,<span class="va">True</span>,<span class="va">True</span>))</code></pre></div>
<h3 id="plot-the-prediction">Plot the prediction</h3>
<p>The prediction of the deep GP can be extracted in a similar way to the normal GP, although in this case it is only an approximation to the true distribution, because the true distribution is not Gaussian.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">fig, ax <span class="op">=</span> plt.subplots(figsize<span class="op">=</span>plot.big_wide_figsize)
plot.model_output(m, scale<span class="op">=</span>scale, offset<span class="op">=</span>offset, ax<span class="op">=</span>ax, xlabel<span class="op">=</span><span class="st">'year'</span>, ylabel<span class="op">=</span><span class="st">'pace min/km'</span>,
fontsize<span class="op">=</span><span class="dv">20</span>, portion<span class="op">=</span><span class="fl">0.2</span>)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
mlai.write_figure(figure<span class="op">=</span>fig, filename<span class="op">=</span><span class="st">'../slides/diagrams/deepgp/olympic-marathon-deep-gp.svg'</span>,
transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<h3 id="olympic-marathon-data-deep-gp">Olympic Marathon Data Deep GP</h3>
<object class="svgplot" align data="../slides/diagrams/deepgp/olympic-marathon-deep-gp.svg">
</object>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="kw">def</span> posterior_sample(<span class="va">self</span>, X, <span class="op">**</span>kwargs):
<span class="co">"""Give a sample from the posterior of the deep GP."""</span>
Z <span class="op">=</span> X
<span class="cf">for</span> i, layer <span class="kw">in</span> <span class="bu">enumerate</span>(<span class="bu">reversed</span>(<span class="va">self</span>.layers)):
Z <span class="op">=</span> layer.posterior_samples(Z, size<span class="op">=</span><span class="dv">1</span>, <span class="op">**</span>kwargs)[:, :, <span class="dv">0</span>]
<span class="cf">return</span> Z
deepgp.DeepGP.posterior_sample <span class="op">=</span> posterior_sample</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">fig, ax <span class="op">=</span> plt.subplots(figsize<span class="op">=</span>plot.big_wide_figsize)
plot.model_sample(m, scale<span class="op">=</span>scale, offset<span class="op">=</span>offset, samps<span class="op">=</span><span class="dv">10</span>, ax<span class="op">=</span>ax,
xlabel<span class="op">=</span><span class="st">'year'</span>, ylabel<span class="op">=</span><span class="st">'pace min/km'</span>, portion <span class="op">=</span> <span class="fl">0.225</span>)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
mlai.write_figure(figure<span class="op">=</span>fig, filename<span class="op">=</span><span class="st">'../slides/diagrams/deepgp/olympic-marathon-deep-gp-samples.svg'</span>,
transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)</code></pre></div>
<h3 id="olympic-marathon-data-deep-gp-1" data-transition="None">Olympic Marathon Data Deep GP</h3>
<object class="svgplot" align data="../slides/diagrams/deepgp/olympic-marathon-deep-gp-samples.svg">
</object>
<h3 id="fitted-gp-for-each-layer">Fitted GP for each layer</h3>
<p>Now we explore the GPs the model has used to fit each layer. First of all, we look at the hidden layer.</p>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="kw">def</span> visualize(<span class="va">self</span>, scale<span class="op">=</span><span class="fl">1.0</span>, offset<span class="op">=</span><span class="fl">0.0</span>, xlabel<span class="op">=</span><span class="st">'input'</span>, ylabel<span class="op">=</span><span class="st">'output'</span>,
xlim<span class="op">=</span><span class="va">None</span>, ylim<span class="op">=</span><span class="va">None</span>, fontsize<span class="op">=</span><span class="dv">20</span>, portion<span class="op">=</span><span class="fl">0.2</span>,dataset<span class="op">=</span><span class="va">None</span>,
diagrams<span class="op">=</span><span class="st">'../diagrams'</span>):
<span class="co">"""Visualize the layers in a deep GP with one-d input and output."""</span>
depth <span class="op">=</span> <span class="bu">len</span>(<span class="va">self</span>.layers)
<span class="cf">if</span> dataset <span class="kw">is</span> <span class="va">None</span>:
fname <span class="op">=</span> <span class="st">'deep-gp-layer'</span>
<span class="cf">else</span>:
fname <span class="op">=</span> dataset <span class="op">+</span> <span class="st">'-deep-gp-layer'</span>
filename <span class="op">=</span> os.path.join(diagrams, fname)
last_name <span class="op">=</span> xlabel
last_x <span class="op">=</span> <span class="va">self</span>.X
<span class="cf">for</span> i, layer <span class="kw">in</span> <span class="bu">enumerate</span>(<span class="bu">reversed</span>(<span class="va">self</span>.layers)):
<span class="cf">if</span> i<span class="op">></span><span class="dv">0</span>:
plt.plot(last_x, layer.X.mean, <span class="st">'r.'</span>,markersize<span class="op">=</span><span class="dv">10</span>)
last_x<span class="op">=</span>layer.X.mean
ax<span class="op">=</span>plt.gca()
name <span class="op">=</span> <span class="st">'layer '</span> <span class="op">+</span> <span class="bu">str</span>(i)
plt.xlabel(last_name, fontsize<span class="op">=</span>fontsize)
plt.ylabel(name, fontsize<span class="op">=</span>fontsize)
last_name<span class="op">=</span>name
mlai.write_figure(filename<span class="op">=</span>filename <span class="op">+</span> <span class="st">'-'</span> <span class="op">+</span> <span class="bu">str</span>(i<span class="op">-</span><span class="dv">1</span>) <span class="op">+</span> <span class="st">'.svg'</span>,
transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)
<span class="cf">if</span> i<span class="op">==</span><span class="dv">0</span> <span class="kw">and</span> xlim <span class="kw">is</span> <span class="kw">not</span> <span class="va">None</span>:
xt <span class="op">=</span> plot.pred_range(np.array(xlim), portion<span class="op">=</span><span class="fl">0.0</span>)
<span class="cf">elif</span> i<span class="op">></span><span class="dv">0</span>:
xt <span class="op">=</span> plot.pred_range(np.array(next_lim), portion<span class="op">=</span><span class="fl">0.0</span>)
<span class="cf">else</span>:
xt <span class="op">=</span> plot.pred_range(last_x, portion<span class="op">=</span>portion)
yt_mean, yt_var <span class="op">=</span> layer.predict(xt)
<span class="cf">if</span> layer<span class="op">==</span><span class="va">self</span>.obslayer:
yt_mean <span class="op">=</span> yt_mean<span class="op">*</span>scale <span class="op">+</span> offset
yt_var <span class="op">*=</span> scale<span class="op">*</span>scale
yt_sd <span class="op">=</span> np.sqrt(yt_var)
gpplot(xt,yt_mean,yt_mean<span class="op">-</span><span class="dv">2</span><span class="op">*</span>yt_sd,yt_mean<span class="op">+</span><span class="dv">2</span><span class="op">*</span>yt_sd)
ax <span class="op">=</span> plt.gca()
<span class="cf">if</span> i<span class="op">></span><span class="dv">0</span>:
ax.set_xlim(next_lim)
<span class="cf">elif</span> xlim <span class="kw">is</span> <span class="kw">not</span> <span class="va">None</span>:
ax.set_xlim(xlim)
next_lim <span class="op">=</span> plt.gca().get_ylim()
plt.plot(last_x, <span class="va">self</span>.Y<span class="op">*</span>scale <span class="op">+</span> offset, <span class="st">'r.'</span>,markersize<span class="op">=</span><span class="dv">10</span>)
plt.xlabel(last_name, fontsize<span class="op">=</span>fontsize)
plt.ylabel(ylabel, fontsize<span class="op">=</span>fontsize)
mlai.write_figure(filename<span class="op">=</span>filename <span class="op">+</span> <span class="st">'-'</span> <span class="op">+</span> <span class="bu">str</span>(i) <span class="op">+</span> <span class="st">'.svg'</span>,
transparent<span class="op">=</span><span class="va">True</span>, frameon<span class="op">=</span><span class="va">True</span>)
<span class="cf">if</span> ylim <span class="kw">is</span> <span class="kw">not</span> <span class="va">None</span>:
ax<span class="op">=</span>plt.gca()
ax.set_ylim(ylim)
<span class="co"># Bind the new method to the Deep GP object.</span>
deepgp.DeepGP.visualize<span class="op">=</span>visualize</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python">m.visualize(scale<span class="op">=</span>scale, offset<span class="op">=</span>offset, xlabel<span class="op">=</span><span class="st">'year'</span>,
ylabel<span class="op">=</span><span class="st">'pace min/km'</span>,xlim<span class="op">=</span>xlim, ylim<span class="op">=</span>ylim,
dataset<span class="op">=</span><span class="st">'olympic-marathon'</span>,
diagrams<span class="op">=</span><span class="st">'../slides/diagrams/deepgp'</span>)</code></pre></div>
<div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="kw">def</span> scale_data(x, portion):
scale <span class="op">=</span> (x.<span class="bu">max</span>()<span class="op">-</span>x.<span class="bu">min</span>())<span class="op">/</span>(<span class="dv">1</span><span class="op">-</span><span class="dv">2</span><span class="op">*</span>portion)
offset <span class="op">=</span> x.<span class="bu">min</span>() <span class="op">-</span> portion<span class="op">*</span>scale
<span class="cf">return</span> (x<span class="op">-</span>offset)<span class="op">/</span>scale, scale, offset
<span class="kw">def</span> visualize_pinball(<span class="va">self</span>, ax<span class="op">=</span><span class="va">None</span>, scale<span class="op">=</span><span class="fl">1.0</span>, offset<span class="op">=</span><span class="fl">0.0</span>, xlabel<span class="op">=</span><span class="st">'input'</span>, ylabel<span class="op">=</span><span class="st">'output'</span>,
xlim<span class="op">=</span><span class="va">None</span>, ylim<span class="op">=</span><span class="va">None</span>, fontsize<span class="op">=</span><span class="dv">20</span>, portion<span class="op">=</span><span class="fl">0.2</span>, points<span class="op">=</span><span class="dv">50</span>, vertical<span class="op">=</span><span class="va">True</span>):
<span class="co">"""Visualize the layers in a deep GP with one-d input and output."""</span>
<span class="cf">if</span> ax <span class="kw">is</span> <span class="va">None</span>:
fig, ax <span class="op">=</span> plt.subplots(figsize<span class="op">=</span>plot.big_wide_figsize)
depth <span class="op">=</span> <span class="bu">len</span>(<span class="va">self</span>.layers)
last_name <span class="op">=</span> xlabel
last_x <span class="op">=</span> <span class="va">self</span>.X
<span class="co"># Recover input and output scales from output plot</span>
plot_model_output(<span class="va">self</span>, scale<span class="op">=</span>scale, offset<span class="op">=</span>offset, ax<span class="op">=</span>ax,
xlabel<span class="op">=</span>xlabel, ylabel<span class="op">=</span>ylabel,
fontsize<span class="op">=</span>fontsize, portion<span class="op">=</span>portion)
xlim<span class="op">=</span>ax.get_xlim()
xticks<span class="op">=</span>ax.get_xticks()
xtick_labels<span class="op">=</span>ax.get_xticklabels().copy()
ylim<span class="op">=</span>ax.get_ylim()