
chapter 10 img fixes, img filename cleanup

1 parent 01530b7 commit b91c0a873764146848c584df5f933a48875936f6 @evanemolo committed Jul 25, 2012
@@ -2,18 +2,4 @@
public/noc_html/index.html
public/noc_pdf/index.html
public/noc_pdf/index.pdf
-public/book.asc.xml
-
-public/chapters/01_vectors.asc.BACKUP.5464.asc
-public/chapters/01_vectors.asc.BACKUP.5584.asc
-public/chapters/01_vectors.asc.BASE.5464.asc
-public/chapters/01_vectors.asc.BASE.5584.asc
-public/chapters/01_vectors.asc.LOCAL.5464.asc
-public/chapters/01_vectors.asc.LOCAL.5584.asc
-public/chapters/01_vectors.asc.REMOTE.5464.asc
-public/chapters/01_vectors.asc.REMOTE.5584.asc
-public/chapters/01_vectors.asc.orig
-public/chapters/02_forces.asc.BACKUP.5678.asc
-public/chapters/02_forces.asc.BASE.5678.asc
-public/chapters/02_forces.asc.LOCAL.5678.asc
-public/chapters/02_forces.asc.REMOTE.5678.asc
+public/book.asc.xml
@@ -12,7 +12,7 @@ We’re at the end of our story. This is the last “official” chapter of th
The human brain can be described as a biological neural network—an interconnected web of neurons transmitting elaborate patterns of electrical signals. Dendrites receive input signals and, based on those inputs, fire an output signal via an axon. Or something like that. How the human brain actually works is an elaborate and complex mystery, one that we certainly are not going to attempt to tackle in rigorous detail in this chapter.
[[chapter10_figure1]]
-image::imgs/chapter10/ch10_01.png[Figure 10.1]
+image::imgs/chapter10/ch10_01.png[alt="Figure 10.1"]
The good news is that developing engaging animated systems with code does not require scientific rigor or accuracy, as we’ve learned throughout this book. We can simply be inspired by the idea of brain function.
@@ -30,7 +30,7 @@ It’s probably pretty obvious to you that there are problems that are incredibl
The most common application of neural networks in computing today is to perform one of these “easy-for-a-human, difficult-for-a-machine” tasks, often referred to as pattern recognition. Applications range from optical character recognition (turning printed or handwritten scans into digital text) to facial recognition. We don’t have the time or need to use some of these more elaborate artificial intelligence algorithms here, but if you are interested in researching neural networks, I’d recommend the books _Artificial Intelligence: A Modern Approach_ by Stuart J. Russell and Peter Norvig and _AI for Game Developers_ by David M. Bourg and Glenn Seemann.
[[chapter10_figure2]]
-image::imgs/chapter10/ch10_02.png[alt="Figure 10.2",classname="half-width-right"]
+image::imgs/chapter10/ch10_02.png[classname="half-width-right",alt="Figure 10.2"]
A neural network is a “connectionist” computational system. The computational systems we write are procedural; a program starts at the first line of code, executes it, and goes on to the next, following instructions in a linear fashion. A true neural network does not follow a linear path. Rather, information is processed collectively, in parallel throughout a network of nodes (the nodes, in this case, being neurons).
@@ -648,7 +648,7 @@ Let’s take a simpler example, where the Vehicle simply wants to stay close to
----
[[chapter10_figure10]]
-image::imgs/chapter10/ch10_10.png[alt="Figure 10.10",classname="half-width-right"]
+image::imgs/chapter10/ch10_10.png[classname="half-width-right",alt="Figure 10.10"]
Here we are passing the brain a copy of all the inputs (which it will need for error correction) as well as an observation about its environment: a PVector that points from its current location to where it desires to be. This PVector essentially serves as the error—the longer the PVector, the worse the Vehicle is performing; the shorter, the better (see Figure 10.10).
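The weight update hinted at here can be sketched in plain Java (the book’s examples are in Processing). The class and method names below, and the tiny `Vec2` standing in for `PVector`, are my own illustration of the idea, not the book’s code: each weight is nudged by a learning constant times the error component times the corresponding input component.

```java
// Sketch of error-driven training for a steering "brain," assuming it is a
// perceptron whose inputs are 2D force vectors. Illustrative names throughout.
public class SteeringBrain {
    static class Vec2 {          // minimal stand-in for Processing's PVector
        double x, y;
        Vec2(double x, double y) { this.x = x; this.y = y; }
    }

    double[] weights;
    double c = 0.01;             // learning constant (an assumed value)

    SteeringBrain(int n) {
        weights = new double[n];
        for (int i = 0; i < n; i++) weights[i] = 1.0 / n; // equal influence to start
    }

    // The steering force is the weighted sum of the input forces.
    Vec2 feedforward(Vec2[] forces) {
        Vec2 sum = new Vec2(0, 0);
        for (int i = 0; i < forces.length; i++) {
            sum.x += forces[i].x * weights[i];
            sum.y += forces[i].y * weights[i];
        }
        return sum;
    }

    // The error is the vector from current to desired location; the longer
    // it is, the larger the correction applied to each weight.
    void train(Vec2[] forces, Vec2 error) {
        for (int i = 0; i < weights.length; i++) {
            weights[i] += c * error.x * forces[i].x;
            weights[i] += c * error.y * forces[i].y;
        }
    }
}
```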
@@ -783,28 +783,28 @@ Try different rules for reinforcement learning. What if some targets are desira
Yes, a perceptron can have multiple inputs, but it is still a lonely neuron. The power of neural networks comes in the networking itself. Perceptrons are, sadly, incredibly limited in their abilities. If you read an AI textbook, it will say that a perceptron can only solve linearly separable problems. What’s a linearly separable problem? Let’s take a look at our first example, which determined whether points were on one side of a line or the other.
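To make the limitation concrete, here is a minimal perceptron sketch in plain Java (the book’s examples use Processing; names here are illustrative, not the book’s `Perceptron` class). Its output is the sign of a weighted sum of the inputs, so its decision boundary is literally a straight line — which is exactly why only linearly separable data can be classified.

```java
// A minimal perceptron: sign of a weighted sum, trained by the standard
// perceptron rule. The last input is treated as a constant bias.
public class TinyPerceptron {
    double[] weights;
    double rate = 0.01;          // learning rate (an assumed value)

    TinyPerceptron(int n, long seed) {
        weights = new double[n];
        java.util.Random r = new java.util.Random(seed);
        for (int i = 0; i < n; i++) weights[i] = r.nextDouble() * 2 - 1;
    }

    // Activation: +1 or -1 depending on the sign of the weighted sum
    int feedforward(double[] inputs) {
        double sum = 0;
        for (int i = 0; i < inputs.length; i++) sum += inputs[i] * weights[i];
        return sum >= 0 ? 1 : -1;
    }

    // Nudge each weight by rate * error * input
    void train(double[] inputs, int desired) {
        double error = desired - feedforward(inputs); // 0, +2, or -2
        for (int i = 0; i < weights.length; i++) {
            weights[i] += rate * error * inputs[i];
        }
    }
}
```

Trained on points labeled by which side of a line they fall on, a perceptron like this converges; on the XOR pattern discussed below, it never can.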
[[chapter10_figure11]]
-image::imgs/chapter10/ch10_11.png[Figure 10.11]
+image::imgs/chapter10/ch10_11.png[alt="Figure 10.11"]
On the left of Figure 10.11, we have classic linearly separable data. Graph all of the possibilities; if you can classify the data with a straight line, then it is linearly separable. On the right, however, is non-linearly separable data. You can’t draw a straight line to separate the black dots from the gray ones.
One of the simplest examples of a non-linearly separable problem is XOR, or “exclusive or.” We’re all familiar with AND. For *_A_* AND *_B_* to be true, both *_A_* and *_B_* must be true. With OR, either *_A_* or *_B_* can be true for *_A_* OR *_B_* to evaluate as true. These are both linearly separable problems. Let’s look at the solution space, a “truth table.”
[[chapter10_figure12]]
-image::imgs/chapter10/ch10_12.png[Figure 10.12]
+image::imgs/chapter10/ch10_12.png[alt="Figure 10.12"]
See how you can draw a line to separate the true outputs from the false ones?
*_XOR_* is the equivalent of *_OR_* and *_NOT AND_*. In other words, *_A_* *_XOR_* *_B_* only evaluates to true if exactly one of them is true. If both are false or both are true, then we get false. Take a look at the following truth table.
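That identity is easy to verify in code. A quick plain-Java sketch (the method name is mine):

```java
// XOR spelled out as the text describes it: (A OR B) AND (NOT (A AND B))
public class XorIdentity {
    static boolean xor(boolean a, boolean b) {
        boolean or = a || b;        // true if at least one input is true
        boolean nand = !(a && b);   // true unless both inputs are true
        return or && nand;          // true only when exactly one input is true
    }
}
```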
[[chapter10_figure13]]
-image::imgs/chapter10/ch10_13.png[Figure 10.13]
+image::imgs/chapter10/ch10_13.png[alt="Figure 10.13"]
This is not linearly separable. Try to draw a line to separate the true outputs from the false ones—you can’t!
So perceptrons can’t even solve something as simple as *_XOR_*. But what if we made a network out of two perceptrons? If one perceptron can solve *_OR_* and one perceptron can solve *_NOT AND_*, then two perceptrons combined can solve *_XOR_*.
[[chapter10_figure14]]
-image::imgs/chapter10/ch10_14.png[Figure 10.14]
+image::imgs/chapter10/ch10_14.png[alt="Figure 10.14"]
[notetoself]*[Missing 10.14]*
The above diagram is known as a multi-layered perceptron, a network of many neurons. Some are input neurons and receive the inputs; some are part of what’s called a “hidden” layer (as they are connected to neither the inputs nor the outputs of the network directly); and then there are the output neurons, from which we read the results.
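As a sketch of how such a layered network computes XOR, here is a tiny plain-Java network with hand-picked weights and thresholds (my own choices — no learning involved): one hidden step-unit fires for OR, another for NOT AND, and the output unit ANDs the two together.

```java
// A two-layer network solving XOR with fixed weights. Thresholds are
// illustrative values chosen by hand, not taken from the book.
public class TwoLayerXor {
    // Step activation: fire (1) if the weighted sum reaches the threshold
    static int step(double sum, double threshold) {
        return sum >= threshold ? 1 : 0;
    }

    static int network(int a, int b) {
        int hOr   = step(a + b, 0.5);    // hidden neuron 1 computes OR
        int hNand = step(-a - b, -1.5);  // hidden neuron 2 computes NOT AND
        return step(hOr + hNand, 1.5);   // output neuron ANDs the hidden layer
    }
}
```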
@@ -822,8 +822,8 @@ Instead, here we’ll focus on a code framework for building the visual architec
Our goal will be to create the following simple network diagram:
-[[chapter10_figure15]]
-image::imgs/chapter10/ch10_15.png[Figure 10.15]
+[[chapter10_figure15]]
+image::imgs/chapter10/ch10_15.png[alt="Figure 10.15"]
The primary building block for the diagram is a neuron. A neuron is a simple object, an entity with an (x,y) location.
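A minimal version of that object in plain Java might look like the following (the book uses Processing, where the location would be a `PVector` and `display()` would draw a circle; plain fields stand in here):

```java
// A Neuron for the diagram is just an entity with an (x, y) location.
public class Neuron {
    double x, y;

    Neuron(double x, double y) {
        this.x = x;
        this.y = y;
    }
    // In the Processing sketch, a display() method would draw
    // a circle at (x, y).
}
```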
@@ -1159,7 +1159,7 @@ Along with the connection’s line, we can then draw a circle at that location:
This resembles the following:
[[chapter10_figure16]]
-image::imgs/chapter10/ch10_16.png[Figure 10.16]
+image::imgs/chapter10/ch10_16.png[alt="Figure 10.16"]
[notetoself]*[Missing this illustration, also need to label A and B Neurons]*
Ok, so that’s how we might move something along the connection. But how do we know when to do so? We start this process the moment the Connection object receives the “feedforward” signal. We can keep track of this process by employing a simple boolean to know whether the connection is sending or not. Before, we had:
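One way to sketch that flag in plain Java (field and method names here are illustrative, not the book’s exact `Connection` code): the `feedforward()` call flips `sending` on, and each frame the signal moves a fixed fraction of the way from neuron A to neuron B.

```java
// A Connection that, on feedforward, animates a value from A toward B.
public class Connection {
    double ax, ay;           // location of neuron A
    double bx, by;           // location of neuron B
    boolean sending = false; // the boolean flag from the text
    double progress = 0;     // 0 = at A, 1 = at B
    double value = 0;        // the signal traveling along the connection

    Connection(double ax, double ay, double bx, double by) {
        this.ax = ax; this.ay = ay; this.bx = bx; this.by = by;
    }

    void feedforward(double v) {
        value = v;
        sending = true;
        progress = 0;
    }

    // Called once per frame; returns true the moment the signal reaches B.
    // The quarter-step speed is a made-up value for illustration.
    boolean update() {
        if (!sending) return false;
        progress += 0.25;
        if (progress >= 1) {
            sending = false;  // here we would deliver `value` to neuron B
            return true;
        }
        return false;
    }

    // Where to draw the traveling circle, by linear interpolation
    double currentX() { return ax + (bx - ax) * progress; }
    double currentY() { return ay + (by - ay) * progress; }
}
```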