---
title: "Mind and Machine Intelligence"
venue: "FinTECHTalents 2018"
abstract: "What is the nature of machine intelligence and how does it differ from human intelligence? In this talk we introduce embodiment factors. They represent the extent to which our intelligence is locked inside us. The locked-in nature of our intelligence makes us fundamentally different from the machine intelligences we are creating around us. Having summarized these differences we consider the Three Ds of machine learning system design: a set of considerations to take into account when building machine intelligences."
author:
- given: Neil D.
family: Lawrence
url: http://inverseprobability.com
institute: Amazon and University of Sheffield
twitter: lawrennd
gscholar: r3SJcvoAAAAJ
orchid:
date: 2018-10-30
published: 2018-10-30
reveal: 2018-10-30-mind-and-machine-intelligence.slides.html
layout: talk
categories:
- notes
---
<div style="display:none">
$${% include talk-notation.tex %}$$
</div>
<div id="modal-frame" class="modal">
<span class="close" onclick="closeMagnify()">×</span>
<div class="modal-figure">
<div class="figure-frame">
<div class="modal-content" id="modal01"></div>
<!--<img class="modal-content" id="object01">-->
</div>
<div class="caption-frame" id="modal-caption"></div>
</div>
</div>
<!-- Front matter -->
<!-- Do not edit this file locally. -->
<!--Back matter-->
<!-- The last names to be defined. Should be defined entirely in terms of macros from above-->
<h2 id="the-diving-bell-and-the-butterfly-edit">The Diving Bell and the Butterfly <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_ai/includes/diving-bell-butterfly.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_ai/includes/diving-bell-butterfly.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h2>
<div class="figure">
<div id="the-diving-bell-and-the-butterfly-2-figure" class="figure-frame">
<div class="centered centered" style="">
<img class="" src="../slides/diagrams/ai/the-diving-bell-and-the-butterfly2.jpg" width="60%" height="auto" align="center" style="background:none; border:none; box-shadow:none; display:block; margin-left:auto; margin-right:auto;vertical-align:middle">
</div>
</div>
<div id="the-diving-bell-and-the-butterfly-2-magnify" class="magnify" onclick="magnifyFigure('the-diving-bell-and-the-butterfly-2')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="the-diving-bell-and-the-butterfly-2-caption" class="caption-frame">
<p>Figure: The Diving Bell and the Butterfly.</p>
</div>
</div>
<p>The Diving Bell and the Butterfly is the autobiography of Jean-Dominique Bauby.</p>
<p>In 1995, when he was editor-in-chief of the French Elle magazine, he suffered a stroke, which destroyed his brainstem. He became almost totally physically paralyzed, but was still mentally active. He acquired what is known as locked-in syndrome.</p>
<p>Incredibly, Bauby wrote his memoir after he became paralyzed.</p>
<p>His left eye was the only muscle he could voluntarily move, and he wrote the entire book by winking it.</p>
<div class="figure">
<div id="diving-bell-letters-figure" class="figure-frame">
<div style="text-align:center;font-size:200%">
E S A R I N T U L <br> O M D P C F B V <br> H G J Q Z Y X K W
</div>
</div>
<div id="diving-bell-letters-magnify" class="magnify" onclick="magnifyFigure('diving-bell-letters')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="diving-bell-letters-caption" class="caption-frame">
<p>Figure: The ordering of the letters that Bauby used for writing his autobiography.</p>
</div>
</div>
<p>How could he do that? Well, first, they set up a mechanism where an assistant would scan across the letters, ordered by their frequency of use, and he would blink at the letter he wanted. In this way, he was able to write each letter.</p>
<p>It took him ten months, at four hours a day, to write the book. Each word took two minutes to write.</p>
<p>Imagine doing all that thinking, but so little speaking, having all those thoughts and so little ability to communicate.</p>
<p>The idea behind this talk is that we are all in that situation. While not as extreme as it was for Bauby, we all have a somewhat locked-in intelligence.</p>
<h2 id="embodiment-factors-edit">Embodiment Factors <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_ai/includes/embodiment-factors-tedx.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_ai/includes/embodiment-factors-tedx.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h2>
<div class="figure">
<div id="embodiment-factors-table-figure" class="figure-frame">
<table>
<tr>
<td>
</td>
<td align="center">
<object class="svgplot " data="../slides/diagrams/computer.svg" width="100%" style=" ">
</object>
</td>
<td align="center">
<object class="svgplot " data="../slides/diagrams/human.svg" width="100%" style=" ">
</object>
</td>
<td align="center">
<div class="centered centered" style="">
<img class="" src="../slides/diagrams/ai/Jean-Dominique_Bauby.jpg" width="150%" height="auto" align="center" style="background:none; border:none; box-shadow:none; display:block; margin-left:auto; margin-right:auto;vertical-align:middle">
</div>
</td>
</tr>
<tr>
<td>
bits/min
</td>
<td align="center">
billions
</td>
<td align="center">
2000
</td>
<td align="center">
6
</td>
</tr>
<tr>
<td>
billion<br>calculations/s
</td>
<td align="center">
~100
</td>
<td align="center">
a billion
</td>
<td align="center">
a billion
</td>
</tr>
<tr>
<td>
embodiment
</td>
<td align="center">
20 minutes
</td>
<td align="center">
5 billion years
</td>
<td align="center">
15 trillion years
</td>
</tr>
</table>
</div>
<div id="embodiment-factors-table-magnify" class="magnify" onclick="magnifyFigure('embodiment-factors-table')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="embodiment-factors-table-caption" class="caption-frame">
<p>Figure: Embodiment factors are the ratio between our ability to compute and our ability to communicate. Jean-Dominique Bauby suffered from locked-in syndrome. The embodiment factors show that relative to the machine we are also locked in. In the table we represent embodiment as the length of time it would take to communicate one second’s worth of computation. For computers it is a matter of minutes, but for a human, whether locked in or not, it is a matter of many billions of years.</p>
</div>
</div>
<p>Let me explain what I mean. Claude Shannon introduced a mathematical concept of information for the purposes of understanding telephone exchanges.</p>
<p>Information has many meanings, but mathematically, Shannon defined a bit of information to be the amount of information you get from tossing a coin.</p>
<p>If I toss a coin, and look at it, I know the answer. You don’t. But if I now tell you the answer I communicate to you 1 bit of information. Shannon defined this as the fundamental unit of information.</p>
<p>If I toss the coin twice, and tell you the result of both tosses, I give you two bits of information. Information is additive.</p>
<p>Shannon also estimated the average information associated with the English language. He estimated that the average information in any word is 12 bits, equivalent to twelve coin tosses.</p>
<p>So every two minutes Bauby was able to communicate 12 bits, or six bits per minute.</p>
<p>This is the information transfer rate he was limited to, the rate at which he could communicate.</p>
<p>Compare this to me, talking now. The average speaker for TEDX speaks around 160 words per minute. That’s 320 times faster than Bauby, or around 2000 bits per minute. 2000 coin tosses per minute.</p>
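<p>The arithmetic behind these rates can be sketched in a few lines of Python. This is only a rough sketch: Shannon’s 12-bits-per-word figure is itself an estimate, and the word rates are averages.</p>

```python
import math

# One fair coin toss carries one bit of information.
coin_bits = -math.log2(0.5)

# Shannon's estimate: roughly 12 bits per English word.
BITS_PER_WORD = 12

# Bauby: one word every two minutes.
bauby_bits_per_min = BITS_PER_WORD / 2

# A typical speaker: around 160 words per minute.
speaker_bits_per_min = BITS_PER_WORD * 160

print(coin_bits)                                   # 1.0 bit per toss
print(bauby_bits_per_min)                          # 6 bits per minute
print(speaker_bits_per_min)                        # roughly 2000 bits per minute
print(speaker_bits_per_min / bauby_bits_per_min)   # speaker is 320 times faster
```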
<p>But, just think how much thought Bauby was putting into every sentence. Imagine how carefully chosen each of his words was. Because he was communication-constrained, he could put more thought into each of his words, into thinking about his audience.</p>
<p>So, his intelligence became locked in. He thinks as fast as any of us, but can communicate far more slowly. Like the tree falling in the woods with no one there to hear it, his intelligence is embedded inside him.</p>
<p>Two thousand coin tosses per minute sounds pretty impressive, but this talk is not just about us, it’s about our computers, and the type of intelligence we are creating within them.</p>
<p>So how does two thousand compare to our digital companions? When computers talk to each other, they do so with billions of coin tosses per minute.</p>
<p>Let’s imagine for a moment, that instead of talking about communication of information, we are actually talking about money. Bauby would have 6 dollars. I would have 2000 dollars, and my computer has billions of dollars.</p>
<p>The internet has interconnected computers and equipped them with extremely high transfer rates.</p>
<p>However, by our very best estimates, computers actually think slower than us.</p>
<p>How can that be, you might ask? Computers calculate much faster than I do. That’s true, but underlying your conscious thoughts there are a lot of calculations going on.</p>
<p>Each thought involves many thousands, millions or billions of calculations. How many exactly, we don’t know yet, because we don’t know how the brain turns calculations into thoughts.</p>
<p>Our best estimates suggest that to simulate your brain a computer would have to be as large as the UK Met Office machine in Exeter. That’s a 250 million pound machine, the fastest in the UK. It can do 16 billion billion calculations per second.</p>
<p>It simulates the weather across the world every day; that’s how much power we think we need to simulate our brains.</p>
<p>So, in terms of our computational power we are extraordinary, but in terms of our ability to explain ourselves, just like Bauby, we are locked in.</p>
<p>For a typical computer, to communicate everything it computes in one second would take only a couple of minutes. For us to do the same would take 15 billion years.</p>
<p>If intelligence is fundamentally about the processing and sharing of information, then this gives us a fundamental constraint on human intelligence, one that dictates its nature.</p>
<p>I call this ratio between the time it takes to compute something and the time it takes to say it the embodiment factor <span class="citation" data-cites="Lawrence:embodiment17">(Lawrence 2017)</span>, because it reflects how embodied our cognition is.</p>
<p>If it takes you two minutes to say the thing you have thought in a second, then you are a computer. If it takes you 15 billion years, then you are a human.</p>
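<p>A back-of-the-envelope sketch of these embodiment factors, using the rough figures from the table above and the loose assumption that each calculation corresponds to about one bit, might look like this. The numbers are orders of magnitude only.</p>

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def embodiment_time(calcs_per_second, bits_per_minute):
    """Seconds needed to communicate one second's worth of computation,
    loosely assuming one bit per calculation."""
    return calcs_per_second / (bits_per_minute / 60)

# Rough figures from the table: a computer does ~100 billion calculations/s
# and communicates ~10 billion bits/minute; a human does ~a billion billion
# calculations/s but communicates only ~2000 bits/minute.
computer_seconds = embodiment_time(1e11, 1e10)
human_seconds = embodiment_time(1e18, 2000)

print(f"computer: {computer_seconds / 60:.0f} minutes")        # a matter of minutes
print(f"human: {human_seconds / SECONDS_PER_YEAR:.1e} years")  # billions of years
```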
<div class="figure">
<div id="lotus-49-figure" class="figure-frame">
<div class="centered centered" style="">
<img class="" src="../slides/diagrams/Lotus_49-2.jpg" width="70%" height="auto" align="center" style="background:none; border:none; box-shadow:none; display:block; margin-left:auto; margin-right:auto;vertical-align:middle">
</div>
</div>
<div id="lotus-49-magnify" class="magnify" onclick="magnifyFigure('lotus-49')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="lotus-49-caption" class="caption-frame">
<p>Figure: The Lotus 49, view from the rear. The Lotus 49 was one of the last Formula One cars before the introduction of aerodynamic aids.</p>
</div>
</div>
<p>So when it comes to our ability to compute, we are extraordinary. Not in our conscious minds, but in the underlying neuron firings that underpin our consciousness, our subconsciousness, and our motor control.</p>
<p>If we think of ourselves as vehicles, then we are massively overpowered. Our ability to generate derived information from raw fuel is extraordinary. Intellectually we have formula one engines.</p>
<p>But in terms of our ability to deploy that computation in actual use, to share the results of what we have inferred, we are very limited. So when you imagine the F1 car that represents a psyche, think of an F1 car with bicycle wheels.</p>
<div class="figure">
<div id="marcel-renault-figure" class="figure-frame">
<div class="centered centered" style="">
<img class="" src="../slides/diagrams/640px-Marcel_Renault_1903.jpg" width="70%" height="auto" align="center" style="background:none; border:none; box-shadow:none; display:block; margin-left:auto; margin-right:auto;vertical-align:middle">
</div>
</div>
<div id="marcel-renault-magnify" class="magnify" onclick="magnifyFigure('marcel-renault')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="marcel-renault-caption" class="caption-frame">
<p>Figure: Marcel Renault races a Renault 40 cv during the Paris-Madrid race, an early Grand Prix, in 1903. Marcel died later in the race after missing a warning flag for a sharp corner at Couhé Vérac, likely due to dust reducing visibility.</p>
</div>
</div>
<p>Just think of the control a driver would have to have to deploy such power through such a narrow channel of traction. That is the beauty and the skill of the human mind.</p>
<p>In contrast, our computers are more like go-karts. Underpowered, but with well-matched tires. They can communicate far more fluidly. They are more efficient, but somehow less extraordinary, less beautiful.</p>
<div class="figure">
<div id="caleb-mcduff-figure" class="figure-frame">
<div class="centered centered" style="">
<img class="" src="../slides/diagrams/Caleb_McDuff_WIX_Silence_Racing_livery.jpg" width="70%" height="auto" align="center" style="background:none; border:none; box-shadow:none; display:block; margin-left:auto; margin-right:auto;vertical-align:middle">
</div>
</div>
<div id="caleb-mcduff-magnify" class="magnify" onclick="magnifyFigure('caleb-mcduff')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="caleb-mcduff-caption" class="caption-frame">
<p>Figure: Caleb McDuff driving for WIX Silence Racing.</p>
</div>
</div>
<h2 id="human-communication-edit">Human Communication <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_ai/includes/conversation.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_ai/includes/conversation.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h2>
<p>For human conversation to work, we require an internal model of who we are speaking to. We model each other, and combine our sense of who they are, who they think we are, and what has been said. This is our approach to dealing with the limited-bandwidth connection we have: empathy and understanding of intent. Mental, dispositional concepts are used to augment our limited communication bandwidth.</p>
<p>Fritz Heider referred to the important point of conversations as being that they are happenings that are “<em>psychologically represented</em> in each of the participants” (his emphasis) <span class="citation" data-cites="Heider:interpersonal58">(Heider 1958)</span>.</p>
<h3 id="bandwidth-constrained-conversations">Bandwidth Constrained Conversations</h3>
<div class="sourceCode" id="cb1"><pre class="sourceCode python"><code class="sourceCode python"><a class="sourceLine" id="cb1-1" data-line-number="1"><span class="im">import</span> pods</a>
<a class="sourceLine" id="cb1-2" data-line-number="2"><span class="im">from</span> ipywidgets <span class="im">import</span> IntSlider</a></code></pre></div>
<div class="sourceCode" id="cb2"><pre class="sourceCode python"><code class="sourceCode python"><a class="sourceLine" id="cb2-1" data-line-number="1">pods.notebook.display_plots(<span class="st">'anne-bob-conversation</span><span class="sc">{sample:0>3}</span><span class="st">.svg'</span>, </a>
<a class="sourceLine" id="cb2-2" data-line-number="2"> <span class="st">'../slides/diagrams'</span>, sample<span class="op">=</span>IntSlider(<span class="dv">0</span>, <span class="dv">0</span>, <span class="dv">7</span>, <span class="dv">1</span>))</a></code></pre></div>
<div class="figure">
<div id="anne-bob-conversation-civil-figure" class="figure-frame">
<object class="svgplot " data="../slides/diagrams/anne-bob-conversation006.svg" width="70%" style=" ">
</object>
</div>
<div id="anne-bob-conversation-civil-magnify" class="magnify" onclick="magnifyFigure('anne-bob-conversation-civil')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="anne-bob-conversation-civil-caption" class="caption-frame">
<p>Figure: Conversation relies on internal models of other individuals.</p>
</div>
</div>
<div class="figure">
<div id="anne-bob-conversation-argument-figure" class="figure-frame">
<object class="svgplot " data="../slides/diagrams/anne-bob-conversation007.svg" width="70%" style=" ">
</object>
</div>
<div id="anne-bob-conversation-argument-magnify" class="magnify" onclick="magnifyFigure('anne-bob-conversation-argument')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="anne-bob-conversation-argument-caption" class="caption-frame">
<p>Figure: Misunderstanding of context and who we are talking to leads to arguments.</p>
</div>
</div>
<p>Embodiment factors imply that, in our communication between humans, what is <em>not</em> said is, perhaps, more important than what is said. To communicate with each other we need to have a model of who each of us are.</p>
<p>To aid this, in society, we are required to perform roles. Whether as a parent, a teacher, an employee or a boss. Each of these roles requires that we conform to certain standards of behaviour to facilitate communication between ourselves.</p>
<p>Control of self is vitally important to these communications.</p>
<p>The high availability of data to humans undermines human-to-human communication channels by providing new routes through which our control of self can be undermined.</p>
<div class="figure">
<div id="hilbert-info-growth-figure" class="figure-frame">
<object class="svgplot " data="../slides/diagrams/data-science/hilbert-info-growth.svg" width="80%" style=" ">
</object>
</div>
<div id="hilbert-info-growth-magnify" class="magnify" onclick="magnifyFigure('hilbert-info-growth')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="hilbert-info-growth-caption" class="caption-frame">
<p>Figure: Global storage capacity between 1986 and 2007 <span class="citation" data-cites="Hilbert:information11">(Hilbert and López 2011)</span>.</p>
</div>
</div>
<h3 id="a-six-word-novel-edit">A Six Word Novel <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_ai/includes/baby-shoes.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_ai/includes/baby-shoes.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h3>
<div class="figure">
<div id="classic-baby-shoes-figure" class="figure-frame">
<div class="centered centered" style="">
<img class="" src="../slides/diagrams/Classic_baby_shoes.jpg" width="60%" height="auto" align="center" style="background:none; border:none; box-shadow:none; display:block; margin-left:auto; margin-right:auto;vertical-align:middle">
</div>
<center>
For sale: baby shoes, never worn
</center>
</div>
<div id="classic-baby-shoes-magnify" class="magnify" onclick="magnifyFigure('classic-baby-shoes')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="classic-baby-shoes-caption" class="caption-frame">
<p>Figure: Consider the six-word novel, apocryphally credited to Ernest Hemingway, “For sale: baby shoes, never worn”. To understand what that means to a human, you need a great deal of additional context. That context is not directly accessible to a machine, which lacks the evolved and contextual understanding of our condition needed to realize both the implication of the advert and what that implication means emotionally to the previous owner.</p>
</div>
</div>
<p>But this is a very different kind of intelligence than ours. A computer cannot understand the depth of Ernest Hemingway’s apocryphal six-word novel, “For sale: baby shoes, never worn”, because it isn’t equipped with the ability to model the complexity of humanity that underlies that statement.</p>
<h2 id="computer-conversations-edit">Computer Conversations <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_ai/includes/conversation-computer.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_ai/includes/conversation-computer.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h2>
<div class="sourceCode" id="cb3"><pre class="sourceCode python"><code class="sourceCode python"><a class="sourceLine" id="cb3-1" data-line-number="1"><span class="im">import</span> pods</a>
<a class="sourceLine" id="cb3-2" data-line-number="2"><span class="im">from</span> ipywidgets <span class="im">import</span> IntSlider</a></code></pre></div>
<div class="sourceCode" id="cb4"><pre class="sourceCode python"><code class="sourceCode python"><a class="sourceLine" id="cb4-1" data-line-number="1">pods.notebook.display_plots(<span class="st">'anne-computer-conversation</span><span class="sc">{sample:0>3}</span><span class="st">.svg'</span>, </a>
<a class="sourceLine" id="cb4-2" data-line-number="2"> <span class="st">'../slides/diagrams'</span>, sample<span class="op">=</span>IntSlider(<span class="dv">0</span>, <span class="dv">0</span>, <span class="dv">7</span>, <span class="dv">1</span>))</a></code></pre></div>
<div class="figure">
<div id="anne-computer-conversation-6-figure" class="figure-frame">
<object class="svgplot " data="../slides/diagrams/anne-computer-conversation006.svg" width="80%" style=" ">
</object>
</div>
<div id="anne-computer-conversation-6-magnify" class="magnify" onclick="magnifyFigure('anne-computer-conversation-6')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="anne-computer-conversation-6-caption" class="caption-frame">
<p>Figure: Conversation relies on internal models of other individuals.</p>
</div>
</div>
<div class="figure">
<div id="anne-computer-conversation-8-figure" class="figure-frame">
<object class="svgplot " data="../slides/diagrams/anne-computer-conversation007.svg" width="80%" style=" ">
</object>
</div>
<div id="anne-computer-conversation-8-magnify" class="magnify" onclick="magnifyFigure('anne-computer-conversation-8')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="anne-computer-conversation-8-caption" class="caption-frame">
<p>Figure: Misunderstanding of context and who we are talking to leads to arguments.</p>
</div>
</div>
<p>Similarly, we find it difficult to comprehend how computers are making decisions, because they do so with more data than we can possibly imagine.</p>
<p>In many respects, this is not a problem; it’s a good thing. We and our computers are good at different things. But when we interact with a computer, when it acts in a different way to us, we need to remember why.</p>
<p>Just as the first step to getting along with other humans is understanding other humans, so it needs to be with getting along with our computers.</p>
<p>Embodiment factors explain why, at the same time, computers are so impressive in simulating our weather, but so poor at predicting our moods. Our complexity is greater than that of our weather, and each of us is tuned to read and respond to one another.</p>
<p>Their intelligence is different. It is based on very large quantities of data that we cannot absorb. Our computers don’t have a complex internal model of who we are. They don’t understand the human condition. They are not tuned to respond to us as we are to each other.</p>
<p>Embodiment factors encapsulate a profound thing about the nature of humans. Our locked in intelligence means that we are striving to communicate, so we put a lot of thought into what we’re communicating with. And if we’re communicating with something complex, we naturally anthropomorphize them.</p>
<p>We give our dogs, our cats and our cars human motivations. We do the same with our computers. We anthropomorphize them. We assume that they have the same objectives as us and the same constraints. They don’t.</p>
<p>This means that, when we worry about artificial intelligence, we worry about the wrong things. We fear computers that behave like more powerful versions of ourselves, computers that we will struggle to outcompete.</p>
<p>In reality, the challenge is that our computers cannot be human enough. They cannot understand us with the depth we understand one another. They drop below our cognitive radar and operate outside our mental models.</p>
<p>The real danger is that computers don’t anthropomorphize. They’ll make decisions in isolation from us without our supervision, because they can’t communicate truly and deeply with us.</p>
<h1 id="evolved-relationship-with-information-edit">Evolved Relationship with Information <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_data-science/includes/evolved-relationship.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_data-science/includes/evolved-relationship.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h1>
<p>The high bandwidth of computers has resulted in a close relationship between the computer and data. Large amounts of information can flow between the two. The degree to which the computer is mediating our relationship with data means that we should consider it an intermediary.</p>
<p>Originally, our low-bandwidth relationship with data was affected by two characteristics. Firstly, by our tendency to over-interpret, driven by our need to extract as much knowledge as possible from our low-bandwidth information channel. Secondly, by our improved understanding of the domain of <em>mathematical</em> statistics and how our cognitive biases can mislead us.</p>
<p>With this new set-up there is the potential to assimilate far more information via the computer, but the computer can present this to us in various ways. If its motives are not aligned with ours then it can misrepresent the information. This needn’t be nefarious; it can simply be a result of the computer pursuing a different objective from us. For example, if the computer is aiming to maximize our interaction time, that may be a different objective from ours, which may be to summarize information in a representative manner in the <em>shortest</em> possible length of time.</p>
<p>For example, for me, it was a common experience to pick up my telephone with the intention of checking when my next appointment was, but to soon find myself distracted by another application on the phone, and end up reading something on the internet. By the time I’d finished reading, I would often have forgotten the reason I picked up my phone in the first place.</p>
<p>There are great benefits to be had from the huge amount of information we can unlock from this evolved relationship between us and data. In biology, large-scale data sharing has been driven by a revolution in genomic, transcriptomic and epigenomic measurement. The improved inferences that can be drawn through summarizing data by computer have fundamentally changed the nature of biological science. Now this phenomenon is also influencing us in our daily lives, as data measured by <em>happenstance</em> is increasingly used to characterize us.</p>
<p>Better mediation of this flow actually requires a better understanding of human-computer interaction. This in turn involves understanding our own intelligence better, what its cognitive biases are and how these might mislead us.</p>
<p>For further thoughts see Guardian article on <a href="https://www.theguardian.com/media-network/2015/jul/23/data-driven-economy-marketing">marketing in the internet era</a> from 2015.</p>
<p>You can also check my blog post on <a href="http://inverseprobability.com/2015/12/04/what-kind-of-ai">System Zero</a>.</p>
<div class="figure">
<div id="trinity-human-data-computer-figure" class="figure-frame">
<object class="svgplot " data="../slides/diagrams/data-science/new-flow-of-information002.svg" width="50%" style=" ">
</object>
</div>
<div id="trinity-human-data-computer-magnify" class="magnify" onclick="magnifyFigure('trinity-human-data-computer')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="trinity-human-data-computer-caption" class="caption-frame">
<p>Figure: The trinity of human, data, and computer, highlighting the modern phenomenon: the communication channel between computer and data now has an extremely high bandwidth, while the channels between human and computer, and between data and human, remain narrow. The new direction of information flow means that information reaches us mediated by the computer.</p>
</div>
</div>
<h2 id="the-centrifugal-governor-edit">The Centrifugal Governor <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_ai/includes/centrifugal-governor.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_ai/includes/centrifugal-governor.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h2>
<div class="figure">
<div id="science-holborn-viaduct-figure" class="figure-frame">
<div class="centered" style="">
<img class="" src="../slides/diagrams/science-holborn-viaduct.jpg" width="50%" height="auto" align="center" style="background:none; border:none; box-shadow:none; display:block; margin-left:auto; margin-right:auto;vertical-align:middle">
</div>
</div>
<div id="science-holborn-viaduct-magnify" class="magnify" onclick="magnifyFigure('science-holborn-viaduct')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="science-holborn-viaduct-caption" class="caption-frame">
<p>Figure: Centrifugal governor as held by “Science” on Holborn Viaduct</p>
</div>
</div>
<h2 id="boulton-and-watts-steam-engine-edit">Boulton and Watt’s Steam Engine <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_ai/includes/watt-steam-engine.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_ai/includes/watt-steam-engine.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h2>
<div class="figure">
<div id="steam-engine-boulton-watt-figure" class="figure-frame">
<div class="centered" style="">
<img class="negate" src="../slides/diagrams/SteamEngine_Boulton&Watt_1784.png" width="70%" height="auto" align="center" style="background:none; border:none; box-shadow:none; display:block; margin-left:auto; margin-right:auto;vertical-align:middle">
</div>
</div>
<div id="steam-engine-boulton-watt-magnify" class="magnify" onclick="magnifyFigure('steam-engine-boulton-watt')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="steam-engine-boulton-watt-caption" class="caption-frame">
<p>Figure: Watt’s Steam Engine which made Steam Power Efficient and Practical.</p>
</div>
</div>
<p>James Watt’s steam engine contained an early machine learning device. In the same way that modern systems are component based, his engine was composed of components. One of which is a speed regulator sometimes known as <em>Watt’s governor</em>. The two balls in the center of the image, when spun fast, rise, and through a linkage mechanism reduce the steam supply to the engine, slowing it down.</p>
<p>The centrifugal governor was made famous by Boulton and Watt when it was deployed in the steam engine. Studying stability in the governor is the main subject of James Clerk Maxwell’s paper on the theoretical analysis of governors <span class="citation" data-cites="Maxwell:governors1867">(Maxwell 1867)</span>. This paper is a founding paper of control theory. In an acknowledgment of its influence, Wiener used the name <a href="https://en.wikipedia.org/wiki/Cybernetics"><em>cybernetics</em></a> to describe the field of control and communication in animals and the machine <span class="citation" data-cites="Wiener:cybernetics48">(Wiener 1948)</span>. Cybernetics comes from the Greek word for helmsman, which is also the root, via Latin, of the word governor.</p>
<p>A governor is one of the simplest artificial intelligence systems. It senses the speed of an engine, and acts to change the position of the valve on the engine to slow it down.</p>
<p>Although it’s a mechanical system a governor can be seen as automating a role that a human would have traditionally played. It is an early example of artificial intelligence.</p>
<p>The centrifugal governor has several parameters: the weight of the balls used, the length of the linkages and the limits on the balls’ movement.</p>
<p>Two principal differences exist between the centrifugal governor and artificial intelligence systems of today.</p>
<ol type="1">
<li>The centrifugal governor is a physical system and it is an integral part of a wider physical system that it regulates (the engine).</li>
<li>The parameters of the governor were set by hand, our modern artificial intelligence systems have their parameters set by <em>data</em>.</li>
</ol>
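<p>The sense-act loop of a governor can be sketched as a simple proportional controller. This is purely illustrative: the constants below (the gain, the toy engine response) are invented for the sketch and are not a model of Watt’s actual engine dynamics.</p>

```python
# Illustrative sketch only: a governor as a sense-act loop,
# written as a proportional controller. All constants here are
# invented for illustration, not Watt's engine dynamics.

def governor_step(speed, set_point, gain=0.01):
    """Sense the engine speed and return a valve opening in [0, 1]."""
    error = speed - set_point
    valve = 1.0 - gain * error  # act: close the valve as speed rises
    return min(max(valve, 0.0), 1.0)

def simulate(set_point=100.0, steps=50):
    """Toy engine whose speed relaxes toward a level set by the valve."""
    speed = 0.0
    for _ in range(steps):
        valve = governor_step(speed, set_point)
        speed += 0.5 * (150.0 * valve - speed)  # engine responds to valve
    return speed

print(round(simulate(), 1))  # settles at 120.0: simple proportional
                             # control leaves a steady-state offset
```

<p>Note that the gain here plays the role that the ball weights and linkage lengths play in the physical device: a parameter set by hand rather than learned from data.</p>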
<div class="figure">
<div id="centrifugal-governor-figure" class="figure-frame">
<div class="centered" style="">
<img class="negate" src="../slides/diagrams/Centrifugal_governor.png" width="70%" height="auto" align="center" style="background:none; border:none; box-shadow:none; display:block; margin-left:auto; margin-right:auto;vertical-align:middle">
</div>
</div>
<div id="centrifugal-governor-magnify" class="magnify" onclick="magnifyFigure('centrifugal-governor')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="centrifugal-governor-caption" class="caption-frame">
<p>Figure: The centrifugal governor, an early example of a decision making system. The parameters of the governor include the lengths of the linkages (which affect how far the throttle opens in response to movement in the balls), the weight of the balls (which affects inertia) and the limits to which the balls can rise.</p>
</div>
</div>
<p>This has the basic components of sense and act that we expect in an intelligent system, and it removes the need for a human operator to manually adjust the valve in the case of overspeed. Overspeed has the potential to destroy an engine, so the governor operates as a safety device.</p>
<p>The first wave of automation did bring about sabotage as a worker’s response. But if machinery was sabotaged, for example, if the linkage between sensor (the spinning balls) and action (the valve closure) was broken, this would be obvious to the engine operator at start-up time. The machine could be repaired before operation.</p>
<h2 id="what-is-machine-learning-edit">What is Machine Learning? <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_ml/includes/what-is-ml-2.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_ml/includes/what-is-ml-2.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h2>
<p>Machine learning allows us to extract knowledge from data to form a prediction.</p>
<p><br /><span class="math display">$$\text{data} + \text{model} \xrightarrow{\text{compute}} \text{prediction}$$</span><br /></p>
<p>A machine learning prediction is made by combining a model with data to form the prediction. The manner in which this is done gives us the machine learning <em>algorithm</em>.</p>
<p>Machine learning models are <em>mathematical models</em> which make weak assumptions about data, e.g. smoothness assumptions. By combining these assumptions with the data we observe, we can interpolate between data points or, occasionally, extrapolate into the future.</p>
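<p>As a minimal sketch of data + model → prediction, consider radial basis function interpolation: the kernel encodes the smoothness assumption, and combining it with observed data lets us predict at unseen points. The particular kernel, lengthscale and data below are arbitrary choices for illustration.</p>

```python
# A minimal sketch of data + model -> prediction: a radial basis
# function model encodes a smoothness assumption, letting us
# interpolate between observed data points. Kernel and lengthscale
# are arbitrary illustrative choices.
import numpy as np

def rbf_fit(x, y, lengthscale=1.0, jitter=1e-6):
    """Solve for weights so the smooth model passes through the data."""
    K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / lengthscale**2)
    return np.linalg.solve(K + jitter * np.eye(len(x)), y)

def rbf_predict(x_new, x, weights, lengthscale=1.0):
    """Combine model (kernel) and data (weights) to form a prediction."""
    K_star = np.exp(-0.5 * (x_new[:, None] - x[None, :])**2 / lengthscale**2)
    return K_star @ weights

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)                          # observations of a smooth function
w = rbf_fit(x, y)
pred = rbf_predict(np.array([1.5]), x, w)
print(pred)  # interpolates smoothly between the observations
```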
<p>Machine learning is a technology which strongly overlaps with the methodology of statistics. From a historical/philosophical viewpoint, machine learning differs from statistics in that the focus in the machine learning community has been primarily on accuracy of prediction, whereas the focus in statistics is typically on the interpretability of a model and/or validating a hypothesis through data collection.</p>
<p>The rapid increase in the availability of compute and data has led to the increased prominence of machine learning. This prominence is surfacing in two different but overlapping domains: data science and artificial intelligence.</p>
<h2 id="from-model-to-decision-edit">From Model to Decision <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_ml/includes/what-is-ml-end-to-end.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_ml/includes/what-is-ml-end-to-end.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h2>
<p>The real challenge, however, is end-to-end decision making. Taking information from the environment and using it to drive decision making to achieve goals.</p>
<h1 id="deep-learning-edit">Deep Learning <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_ml/includes/deep-learning-overview.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_ml/includes/deep-learning-overview.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h1>
<!-- No slide titles in this context -->
<h3 id="deepface-edit">DeepFace <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_ml/includes/deep-face.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_ml/includes/deep-face.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h3>
<div class="figure">
<div id="deep-face-figure" class="figure-frame">
<div class="centered" style="">
<img class="" src="../slides/diagrams/deepface_neg.png" width="100%" height="auto" align="center" style="background:none; border:none; box-shadow:none; display:block; margin-left:auto; margin-right:auto;vertical-align:middle">
</div>
</div>
<div id="deep-face-magnify" class="magnify" onclick="magnifyFigure('deep-face')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="deep-face-caption" class="caption-frame">
<p>Figure: The DeepFace architecture <span class="citation" data-cites="Taigman:deepface14">(Taigman et al. 2014)</span>, visualized through colors to represent the functional mappings at each layer. There are 120 million parameters in the model.</p>
</div>
</div>
<p>The DeepFace architecture <span class="citation" data-cites="Taigman:deepface14">(Taigman et al. 2014)</span> consists of layers that deal with <em>translation</em> and <em>rotational</em> invariances. These layers are followed by three locally-connected layers and two fully-connected layers. Color illustrates feature maps produced at each layer. The neural network includes more than 120 million parameters, where more than 95% come from the local and fully connected layers.</p>
<h3 id="deep-learning-as-pinball-edit">Deep Learning as Pinball <span class="editsection-bracket" style="">[</span><span class="editsection" style=""><a href="https://github.com/lawrennd/snippets/edit/main/_ml/includes/deep-learning-as-pinball.md" target="_blank" onclick="ga('send', 'event', 'Edit Page', 'Edit', 'https://github.com/lawrennd/snippets/edit/main/_ml/includes/deep-learning-as-pinball.md', 13);">edit</a></span><span class="editsection-bracket" style="">]</span></h3>
<div class="figure">
<div id="early-pinball-figure" class="figure-frame">
<div class="centered" style="">
<img class="" src="../slides/diagrams/576px-Early_Pinball.jpg" width="50%" height="auto" align="center" style="background:none; border:none; box-shadow:none; display:block; margin-left:auto; margin-right:auto;vertical-align:middle">
</div>
</div>
<div id="early-pinball-magnify" class="magnify" onclick="magnifyFigure('early-pinball')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="early-pinball-caption" class="caption-frame">
<p>Figure: Deep learning models are composition of simple functions. We can think of a pinball machine as an analogy. Each layer of pins corresponds to one of the layers of functions in the model. Input data is represented by the location of the ball from left to right when it is dropped in from the top. Output class comes from the position of the ball as it leaves the pins at the bottom.</p>
</div>
</div>
<p>Sometimes deep learning models are described as being like the brain, or too complex to understand, but one analogy I find useful for conveying the gist of these models is to think of them as being similar to early pinball machines.</p>
<p>In a deep neural network, we input a number (or numbers), whereas in pinball, we input a ball.</p>
<p>Think of the location of the ball on the left-right axis as a single number. Our simple pinball machine can only take one number at a time. As the ball falls through the machine, each layer of pins can be thought of as a different layer of ‘neurons’. Each layer acts to move the ball from left to right.</p>
<p>In a pinball machine, when the ball gets to the bottom it might fall into a hole defining a score, in a neural network, that is equivalent to the decision: a classification of the input object.</p>
<p>An image has more than one number associated with it, so it is like playing pinball in a <em>hyper-space</em>.</p>
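<p>The pinball picture can be sketched as a composition of simple functions: each layer nudges the ball left or right, and the final position determines the class. The <code>tanh</code> layers and hand-set parameters below are purely illustrative, not a real trained network.</p>

```python
# A toy illustration of the pinball analogy: a deep model as a
# composition of simple layers, each nudging the input left or
# right, with the final position determining the class.
import numpy as np

def layer(x, w, b):
    """One 'row of pins': a simple nonlinear left/right nudge."""
    return np.tanh(w * x + b)

def pinball_network(x, params):
    """Drop the ball (a single number) through each layer in turn."""
    for w, b in params:
        x = layer(x, w, b)
    return x

# Hypothetical hand-set parameters, for illustration only.
params = [(2.0, 0.5), (1.5, -0.3), (3.0, 0.0)]
position = pinball_network(0.2, params)
label = 0 if position < 0 else 1   # which hole the ball falls into
print(position, label)
```

<p>Learning, in this picture, is the process of adjusting <code>params</code> so that balls dropped in at different positions land in the right holes.</p>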
<div class="sourceCode" id="cb5"><pre class="sourceCode python"><code class="sourceCode python"><a class="sourceLine" id="cb5-1" data-line-number="1"><span class="im">import</span> pods</a>
<a class="sourceLine" id="cb5-2" data-line-number="2"><span class="im">from</span> ipywidgets <span class="im">import</span> IntSlider</a></code></pre></div>
<div class="sourceCode" id="cb6"><pre class="sourceCode python"><code class="sourceCode python"><a class="sourceLine" id="cb6-1" data-line-number="1">pods.notebook.display_plots(<span class="st">'pinball</span><span class="sc">{sample:0>3}</span><span class="st">.svg'</span>, </a>
<a class="sourceLine" id="cb6-2" data-line-number="2"> <span class="st">'../slides/diagrams'</span>,</a>
<a class="sourceLine" id="cb6-3" data-line-number="3"> sample<span class="op">=</span>IntSlider(<span class="dv">1</span>, <span class="dv">1</span>, <span class="dv">2</span>, <span class="dv">1</span>))</a></code></pre></div>
<div class="figure">
<div id="pinball-initialization-figure" class="figure-frame">
<object class="svgplot " data="../slides/diagrams/pinball001.svg" width="80%" style=" ">
</object>
</div>
<div id="pinball-initialization-magnify" class="magnify" onclick="magnifyFigure('pinball-initialization')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="pinball-initialization-caption" class="caption-frame">
<p>Figure: At initialization, the pins, which represent the parameters of the function, aren’t in the right place to bring the balls to the correct decisions.</p>
</div>
</div>
<div class="figure">
<div id="pinball-trained-figure" class="figure-frame">
<object class="svgplot " data="../slides/diagrams/pinball002.svg" width="80%" style=" ">
</object>
</div>
<div id="pinball-trained-magnify" class="magnify" onclick="magnifyFigure('pinball-trained')">
<p><img class="img-button" src="{{ "/assets/images/Magnify_Large.svg" | relative_url }}" style="width:1.5ex"></p>
</div>
<div id="pinball-trained-caption" class="caption-frame">
<p>Figure: After learning the pins are now in the right place to bring the balls to the correct decisions.</p>
</div>
</div>
<p>Learning involves moving all the pins to be in the correct position, so that the ball ends up in the right place when it’s fallen through the machine. But moving all these pins in hyperspace can be difficult.</p>
<p>In a hyper-space you have to put a lot of data through the machine to explore the positions of all the pins. Even when you feed many millions of data points through the machine, there are likely to be regions in the hyper-space where no ball has passed. When future test data passes through the machine along a new route, unusual things can happen.</p>
<p><em>Adversarial examples</em> exploit this high dimensional space. If you have access to the pinball machine, you can use gradient methods to find a position for the ball in the hyper space where the image looks like one thing, but will be classified as another.</p>
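<p>The gradient trick can be illustrated on a toy linear classifier (the weights and input below are invented for illustration): a tiny, targeted step in input space flips the predicted class while barely changing the input.</p>

```python
# A sketch of the adversarial-example idea on a toy linear
# classifier: with access to the model's gradients, a small step
# in input space flips the predicted class.
import numpy as np

w = np.array([1.0, -2.0])          # hypothetical trained weights
def predict(x):
    """Classify by the sign of the linear score."""
    return 1 if w @ x > 0 else 0

x = np.array([0.5, 0.2])           # score w @ x = 0.1, so class 1
grad = w                           # gradient of the score w.r.t. the input
x_adv = x - 0.1 * grad / np.linalg.norm(grad)  # small step down the score
print(predict(x), predict(x_adv))  # prints: 1 0
```

<p>The perturbation has norm 0.1, small relative to the input, yet the classification changes; in high dimensions such directions are easy to find by gradient methods.</p>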
<p>Probabilistic methods explore more of the space by considering a range of possible paths for the ball through the machine. This helps to make them more data efficient and gives some robustness to adversarial examples.</p>
<h1 id="references" class="unnumbered">References</h1>
<div id="refs" class="references">
<div id="ref-Heider:interpersonal58">
<p>Heider, Fritz. 1958. <em>The Psychology of Interpersonal Relations</em>. John Wiley.</p>
</div>
<div id="ref-Hilbert:information11">
<p>Hilbert, Martin, and Priscila López. 2011. “The World’s Technological Capacity to Store, Communicate, and Compute Information.” <em>Science</em> 332 (6025): 60–65. <a href="https://doi.org/10.1126/science.1200970" class="uri">https://doi.org/10.1126/science.1200970</a>.</p>
</div>
<div id="ref-Lawrence:embodiment17">
<p>Lawrence, Neil D. 2017. “Living Together: Mind and Machine Intelligence.” arXiv. <a href="https://arxiv.org/abs/1705.07996" class="uri">https://arxiv.org/abs/1705.07996</a>.</p>
</div>
<div id="ref-Maxwell:governors1867">
<p>Maxwell, James Clerk. 1867. “On Governors.” <em>Proceedings of the Royal Society of London</em> 16. The Royal Society: 270–83. <a href="http://www.jstor.org/stable/112510" class="uri">http://www.jstor.org/stable/112510</a>.</p>
</div>
<div id="ref-Taigman:deepface14">
<p>Taigman, Yaniv, Ming Yang, Marc’Aurelio Ranzato, and Lior Wolf. 2014. “DeepFace: Closing the Gap to Human-Level Performance in Face Verification.” In <em>Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</em>. <a href="https://doi.org/10.1109/CVPR.2014.220" class="uri">https://doi.org/10.1109/CVPR.2014.220</a>.</p>
</div>
<div id="ref-Wiener:cybernetics48">
<p>Wiener, Norbert. 1948. <em>Cybernetics: Control and Communication in the Animal and the Machine</em>. Cambridge, MA: MIT Press.</p>
</div>
</div>