From 69a1706de059ba39cdebbb753693e1ad09a07b7e Mon Sep 17 00:00:00 2001
From: Daniel Shiffman
Date: Wed, 20 Apr 2016 11:13:37 -0400
Subject: [PATCH] remove &nbsp; from figcaption #63

---
 chapters/01_vectors.html     | 20 +++++------
 chapters/02_forces.html      | 14 ++++----
 chapters/03_oscillation.html | 30 ++++++++--------
 chapters/04_particles.html   |  6 ++--
 chapters/05_physicslib.html  | 30 ++++++++--------
 chapters/06_steering.html    | 70 ++++++++++++++++++------------------
 chapters/07_ca.html          | 32 ++++++++---------
 chapters/08_fractals.html    | 40 ++++++++++-----------
 chapters/09_ga.html          | 26 +++++++-------
 chapters/10_nn.html          | 30 ++++++++--------
 regex-notes.txt              |  2 +-
 11 files changed, 150 insertions(+), 150 deletions(-)

diff --git a/chapters/01_vectors.html b/chapters/01_vectors.html
index 900ddcc9..02dfa999 100644
--- a/chapters/01_vectors.html
+++ b/chapters/01_vectors.html
@@ -147,7 +147,7 @@

1.2 Vectors for Processing Programmers

Here are some vectors and possible translations:

-Figure 1.2&nbsp;
+Figure 1.2
@@ -170,7 +170,7 @@

1.2 Vectors for Processing Programmers

You’ve probably done this before when programming motion. For every frame of animation (i.e. a single cycle through Processing’s draw() loop), you instruct each object on the screen to move a certain number of pixels horizontally and a certain number of pixels vertically.

-Figure 1.3&nbsp;
+Figure 1.3

For every frame:

@@ -184,7 +184,7 @@

1.2 Vectors for Processing Programmers

Nevertheless, another way to describe a location is the path taken from the origin to reach that location. Hence, a location can be the vector representing the difference between location and origin.

-Figure 1.4&nbsp;
+Figure 1.4

Let’s examine the underlying data for both location and velocity. In the bouncing ball example, we had the following:

@@ -277,13 +277,13 @@

1.3 Vector Addition

Let’s say I have the following two vectors:

-Figure 1.5&nbsp;
+Figure 1.5

Each vector has two components, an x and a y. To add two vectors together, we simply add both xs and both ys.

-Figure 1.6&nbsp;
+Figure 1.6

In other words:

@@ -621,7 +621,7 @@

Vector multiplication

}
-Figure 1.9&nbsp;
+Figure 1.9
@@ -725,7 +725,7 @@

1.6 Normalizing Vectors

Calculating the magnitude of a vector is only the beginning. The magnitude function opens the door to many possibilities, the first of which is normalization. Normalizing refers to the process of making something “standard” or, well, “normal.” In the case of vectors, let’s assume for the moment that a standard vector has a length of 1. To normalize a vector, therefore, is to take a vector of any length and, keeping it pointing in the same direction, change its length to 1, turning it into what is called a unit vector.
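The normalization described above can be sketched outside of Processing. This plain-Java helper (an illustration, not Processing's built-in PVector.normalize()) divides each component by the magnitude:

```java
public class Normalize {
    // Divide each component by the magnitude to get a unit vector
    // (length 1) pointing in the same direction.
    static double[] normalize(double x, double y) {
        double mag = Math.sqrt(x * x + y * y);
        if (mag == 0) return new double[] {0, 0}; // zero vector: nothing to normalize
        return new double[] {x / mag, y / mag};
    }

    public static void main(String[] args) {
        double[] u = normalize(3, 4);           // magnitude is 5, so u = (0.6, 0.8)
        System.out.println(u[0] + ", " + u[1]);
    }
}
```

A (3, 4) vector has magnitude 5, so its unit vector is (0.6, 0.8).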

-Figure 1.12&nbsp;
+Figure 1.12

Since it describes a vector’s direction without regard to its length, it’s useful to have the unit vector readily accessible. We’ll see this come in handy once we start to work with forces in Chapter 2.

@@ -739,7 +739,7 @@

1.6 Normalizing Vectors

-Figure 1.13&nbsp;
+Figure 1.13

In the PVector class, we therefore write our normalization function as follows:

@@ -1298,7 +1298,7 @@

1.10 Interactivity with Acceleration

-Figure 1.14&nbsp;
+Figure 1.14

To finish out this chapter, let’s try something a bit more complex and a great deal more useful. We’ll dynamically calculate an object’s acceleration according to a rule stated in Algorithm #3 — the object accelerates towards the mouse.

@@ -1306,7 +1306,7 @@

1.10 Interactivity with Acceleration

Anytime we want to calculate a vector based on a rule or a formula, we need to compute two things: magnitude and direction. Let’s start with direction. We know the acceleration vector should point from the object’s location towards the mouse location. Let’s say the object is located at the point (x,y) and the mouse at (mouseX,mouseY).

-Figure 1.15&nbsp;
+Figure 1.15

In Figure 1.15, we see that we can get a vector (dx,dy) by subtracting the object’s location from the mouse’s location.
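That subtraction, followed by normalizing and scaling, is the whole of Algorithm #3. A minimal plain-Java sketch (simple arrays standing in for Processing's PVector; the strength parameter is an illustrative choice):

```java
public class SeekDirection {
    // Acceleration toward (mouseX, mouseY): subtract the object's location
    // from the target to get (dx, dy), normalize so only the direction
    // remains, then scale by a chosen magnitude.
    static double[] accelerationToward(double x, double y,
                                       double mouseX, double mouseY,
                                       double strength) {
        double dx = mouseX - x;
        double dy = mouseY - y;
        double mag = Math.sqrt(dx * dx + dy * dy);
        if (mag == 0) return new double[] {0, 0};  // already at the target
        return new double[] {dx / mag * strength, dy / mag * strength};
    }

    public static void main(String[] args) {
        double[] a = accelerationToward(0, 0, 30, 40, 0.5);
        System.out.println(a[0] + ", " + a[1]); // points toward (30, 40), length 0.5
    }
}
```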

diff --git a/chapters/02_forces.html b/chapters/02_forces.html
index 6c48ac27..884d5ccc 100644
--- a/chapters/02_forces.html
+++ b/chapters/02_forces.html
@@ -84,7 +84,7 @@

Newton’s Third Law

And if you are wearing roller skates when you push on that truck?

-Figure 2.2&nbsp;
+Figure 2.2

You’ll accelerate away from the truck, sliding along the road while the truck stays put. Why do you slide but not the truck? For one, the truck has a much larger mass (which we’ll get into with Newton’s second law). There are other forces at work too, namely the friction of the truck’s tires and your roller skates against the road.

@@ -645,7 +645,7 @@

2.7 Friction

Here’s the formula for friction:

-Figure 2.3&nbsp;
+Figure 2.3
@@ -749,7 +749,7 @@
Exercise 2.4

2.8 Air and Fluid Resistance

-Figure 2.4&nbsp;
+Figure 2.4
@@ -958,7 +958,7 @@

2.9 Gravitational Attraction

-Figure 2.6&nbsp;
+Figure 2.6

Probably the most famous force of all is gravity. We humans on earth think of gravity as an apple hitting Isaac Newton on the head. Gravity means that stuff falls down. But this is only our experience of gravity. In truth, just as the earth pulls the apple towards it due to a gravitational force, the apple pulls the earth as well. The thing is, the earth is just so freaking big that it overwhelms all the other gravity interactions. Every object with mass exerts a gravitational force on every other object. And there is a formula for calculating the strengths of these forces, as depicted in Figure 2.6.

@@ -1006,7 +1006,7 @@

2.9 Gravitational Attraction

Given these assumptions, we want to compute PVector force, the force of gravity. We’ll do it in two parts. First, we’ll compute the direction of the force (the unit vector r̂ in the formula above). Second, we’ll calculate the strength of the force according to the masses and distance.

-Figure 2.7&nbsp;
+Figure 2.7

Remember in Chapter 1, when we figured out how to have an object accelerate towards the mouse? (See Figure 2.7.)

@@ -1031,7 +1031,7 @@

2.9 Gravitational Attraction

dir.mult(m);
-Figure 2.8&nbsp;
+Figure 2.8

The only problem is that we don’t know the distance. G, mass1, and mass2 were all givens, but we’ll need to actually compute distance before the above code will work. Didn’t we just make a vector that points all the way from one location to another? Wouldn’t the length of that vector be the distance between two objects?
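Exactly so: the magnitude of that difference vector is the distance. Putting both pieces together, a standalone Java sketch of the attraction calculation (arrays instead of PVectors; G = 1 is an illustrative choice, not a physical constant):

```java
public class Gravity {
    static final double G = 1.0; // illustrative value; real sketches tune this

    // F = (G * m1 * m2) / d^2, directed from body 1 toward body 2.
    // Assumes the two bodies are not at the same location (d > 0).
    static double[] attract(double x1, double y1, double m1,
                            double x2, double y2, double m2) {
        double dx = x2 - x1, dy = y2 - y1;         // vector between the bodies...
        double d = Math.sqrt(dx * dx + dy * dy);   // ...whose length IS the distance
        double strength = (G * m1 * m2) / (d * d); // magnitude from the formula
        return new double[] {dx / d * strength, dy / d * strength};
    }

    public static void main(String[] args) {
        double[] f = attract(0, 0, 10, 5, 0, 2); // masses 10 and 2, 5 units apart
        System.out.println(f[0] + ", " + f[1]);  // pulls body 1 in the +x direction
    }
}
```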

@@ -1058,7 +1058,7 @@

2.9 Gravitational Attraction

Now that we’ve worked out the math and the code for calculating an attractive force (emulating gravity), we need to turn our attention to applying this technique in the context of an actual Processing sketch. In Example 2.1, you may recall how we created a simple Mover object—a class with PVectors for location, velocity, and acceleration, as well as an applyForce() function. Let’s take this exact class and put it in a sketch with:

-Figure 2.9&nbsp;
+Figure 2.9
-Figure 5.11&nbsp;
+Figure 5.11
@@ -1791,7 +1791,7 @@
Exercise 5.8
bd.type = BodyType.KINEMATIC;
-Figure 5.12&nbsp;
+Figure 5.12

Kinematic bodies can be controlled by the user by setting their velocity directly. For example, let’s say you want an object to follow a target (like your mouse). You could create a vector that points from a body’s location to a target.

@@ -2096,7 +2096,7 @@

5.14 A Brief Interlude—Integration Methods

The above methodology is known as Euler integration (named for the mathematician Leonhard Euler, pronounced “Oiler”) or the Euler method. It’s essentially the simplest form of integration and very easy to implement in our code (see the two lines above!) However, it is not necessarily the most efficient form, nor is it close to being the most accurate. Why is Euler inaccurate? Let’s think about it this way. When you drive a car down the road pressing the gas pedal with your foot and accelerating, does the car sit in one location at time equals one second, then disappear and suddenly reappear in a new location at time equals two seconds, and do the same thing for three seconds, and four, and five? No, of course not. The car moves continuously down the road. But what’s happening in our Processing sketch? A circle is at one location at frame 0, another at frame 1, another at frame 2. Sure, at thirty frames per second, we’re seeing the illusion of motion. But we only calculate a new location every N units of time, whereas the real world is perfectly continuous. This results in some inaccuracies, as shown in the diagram below:

-Figure 5.13&nbsp;
+Figure 5.13

The “real world” is the curve; Euler simulation is the series of line segments.
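The inaccuracy is easy to demonstrate numerically. Under constant acceleration the exact answer is x = ½at², while Euler's two update lines land on the chords of that curve; this standalone sketch (step size and duration are illustrative choices) shows the gap:

```java
public class EulerDrift {
    // One Euler step per iteration: location += velocity, then
    // velocity += acceleration — the two lines from the text.
    static double eulerPosition(double a, double dt, int steps) {
        double x = 0, v = 0;
        for (int i = 0; i < steps; i++) {
            x += v * dt;
            v += a * dt;
        }
        return x;
    }

    public static void main(String[] args) {
        double euler = eulerPosition(1.0, 0.1, 10); // simulate t = 0 .. 1
        double exact = 0.5 * 1.0 * 1.0 * 1.0;       // x = (1/2) a t^2 = 0.5
        // Euler's line segments cut under the curve, so it lands short (0.45).
        System.out.println("euler=" + euler + " exact=" + exact);
    }
}
```

Shrinking dt shrinks the error, which is why physics libraries care so much about step size and integration method.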

@@ -2645,7 +2645,7 @@

5.18 Connected Systems, Part I: String

The above example, two particles connected with a single spring, is the core building block for what toxiclibs’ physics is particularly well suited for: soft body simulations. For example, a string can be simulated by connecting a line of particles with springs. A blanket can be simulated by connecting a grid of particles with springs. And a cute, cuddly, squishy cartoon character can be simulated by a custom layout of particles connected with springs.

-Figure 5.14&nbsp;
+Figure 5.14

Let’s begin by simulating a “soft pendulum”—a bob hanging from a string, instead of a rigid arm like we had in Chapter 3, Example 10. Let’s use the "string" in Figure 5.14 above as our model.

@@ -2658,7 +2658,7 @@

5.18 Connected Systems, Part I: String

Now, let’s say we want to have 20 particles, all spaced 10 pixels apart.

-Figure 5.15&nbsp;
+Figure 5.15
@@ -2682,7 +2682,7 @@ 

5.18 Connected Systems, Part I: String

Now for the fun part: It’s time to connect all the particles. Particle 1 will be connected to particle 0, particle 2 to particle 1, 3 to 2, 4 to 3, etc.

-Figure 5.16&nbsp;
+Figure 5.16

In other words, particle i needs to be connected to particle i-1 (except for when i equals zero).
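The index bookkeeping is the whole trick. In the book this is done with toxiclibs particles and springs; the sketch below strips that away and records each "spring" as just a pair of indices, so the loop logic is visible on its own:

```java
import java.util.ArrayList;
import java.util.List;

public class StringOfParticles {
    // Connect particle i to particle i-1 for every i >= 1.
    // Each connection is stored as a hypothetical {i, i-1} index pair
    // standing in for a real spring object.
    static List<int[]> connect(int numParticles) {
        List<int[]> springs = new ArrayList<>();
        for (int i = 1; i < numParticles; i++) { // skip i == 0: no left neighbor
            springs.add(new int[] {i, i - 1});
        }
        return springs;
    }

    public static void main(String[] args) {
        List<int[]> springs = connect(20); // 20 particles -> 19 springs
        System.out.println(springs.size());
    }
}
```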

diff --git a/chapters/06_steering.html b/chapters/06_steering.html
index 1ecd411a..35a63033 100644
--- a/chapters/06_steering.html
+++ b/chapters/06_steering.html
@@ -98,7 +98,7 @@

6.3 The Steering Force

We can entertain ourselves by discussing the theoretical principles behind autonomous agents and steering as much as we like, but we can’t get anywhere without first understanding the concept of a steering force. Consider the following scenario. A vehicle moving with velocity desires to seek a target.

-Figure 6.1&nbsp;
+Figure 6.1

Its goal and subsequent action is to seek the target in Figure 6.1. If you think back to Chapter 2, you might begin by making the target an attractor and apply a gravitational force that pulls the vehicle to the target. This would be a perfectly reasonable solution, but conceptually it’s not what we’re looking for here. We don’t want to simply calculate a force that pushes the vehicle towards its target; rather, we are asking the vehicle to make an intelligent decision to steer towards the target based on its perception of its state and environment (i.e. how fast and in what direction is it currently moving). The vehicle should look at how it desires to move (a vector pointing to the target), compare that goal with how quickly it is currently moving (its velocity), and apply a force accordingly.

@@ -115,7 +115,7 @@

6.3 The Steering Force

In the above formula, velocity is no problem. After all, we’ve got a variable for that. However, we don’t have the desired velocity; this is something we have to calculate. Let’s take a look at Figure 6.2. If we’ve defined the vehicle’s goal as “seeking the target,” then its desired velocity is a vector that points from its current location to the target location.

-Figure 6.2&nbsp;
+Figure 6.2

Assuming a PVector target, we then have:

@@ -145,7 +145,7 @@

6.3 The Steering Force

desired.mult(maxspeed);
-Figure 6.3&nbsp;
+Figure 6.3

Putting this all together, we can write a function called seek() that receives a PVector target and calculates a steering force towards that target.
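A condensed plain-Java sketch of the seek() the text describes (the book's version works on PVectors and applies the force; here arrays stand in, and maxspeed/maxforce are illustrative values):

```java
public class Seek {
    static double maxspeed = 4, maxforce = 0.1;

    // Reynolds: steering = desired - velocity.
    // desired points from location to target at maximum speed;
    // the resulting steering force is limited to maxforce.
    static double[] seek(double[] loc, double[] vel, double[] target) {
        double dx = target[0] - loc[0], dy = target[1] - loc[1];
        double d = Math.sqrt(dx * dx + dy * dy);
        if (d == 0) return new double[] {0, 0};          // already at the target
        double[] desired = {dx / d * maxspeed, dy / d * maxspeed};
        double[] steer = {desired[0] - vel[0], desired[1] - vel[1]};
        double m = Math.sqrt(steer[0] * steer[0] + steer[1] * steer[1]);
        if (m > maxforce) {                              // limit(maxforce)
            steer[0] = steer[0] / m * maxforce;
            steer[1] = steer[1] / m * maxforce;
        }
        return steer;
    }

    public static void main(String[] args) {
        double[] s = seek(new double[] {0, 0}, new double[] {0, 0},
                          new double[] {100, 0});
        System.out.println(s[0] + ", " + s[1]); // pushes along +x, capped at 0.1
    }
}
```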

@@ -170,7 +170,7 @@

6.3 The Steering Force

So why does this all work so well? Let’s see what the steering force looks like relative to the vehicle and target locations.

-Figure 6.4&nbsp;
+Figure 6.4

Again, notice how this is not at all the same force as gravitational attraction. Remember one of our principles of autonomous agents: An autonomous agent has a limited ability to perceive its environment. Here is that ability, subtly embedded into Reynolds’s steering formula. If the vehicle weren’t moving at all (zero velocity), desired minus velocity would be equal to desired. But this is not the case. The vehicle is aware of its own velocity and its steering force compensates accordingly. This creates a more active simulation, as the way in which the vehicle moves towards the targets depends on the way it is moving in the first place.

@@ -207,7 +207,7 @@

6.3 The Steering Force

Limiting the steering force brings up an important point. We must always remember that it’s not actually our goal to get the vehicle to the target as fast as possible. If that were the case, we would just say “location equals target” and there the vehicle would be. Our goal, as Reynolds puts it, is to move the vehicle in a “lifelike and improvisational manner.” We’re trying to make it appear as if the vehicle is steering its way to the target, and so it’s up to us to play with the forces and variables of the system to simulate a given behavior. For example, a large maximum steering force would result in a very different path than a small one. One is not inherently better or worse than the other; it depends on your desired effect. (And of course, these values need not be fixed and could change based on other conditions. Perhaps a vehicle has health: the higher the health, the better it can steer.)

-Figure 6.5&nbsp;
+Figure 6.5

Here is the full Vehicle class, incorporating the rest of the elements from the Chapter 2 Mover object.

@@ -318,7 +318,7 @@

6.4 Arriving Behavior

The vehicle is so gosh darn excited about getting to the target that it doesn’t bother to make any intelligent decisions about its speed relative to the target’s proximity. Whether it’s far away or very close, it always wants to go as fast as possible.

-Figure 6.6&nbsp;
+Figure 6.6

In some cases, this is the desired behavior (if a missile is flying at a target, it should always travel at maximum speed.) However, in many other cases (a car pulling into a parking spot, a bee landing on a flower), the vehicle’s thought process needs to consider its speed relative to the distance from its target. For example:

@@ -331,7 +331,7 @@

6.4 Arriving Behavior

Frame 6: I’m there. I want to stop!

-Figure 6.7&nbsp;
+Figure 6.7

How can we implement this “arriving” behavior in code? Let’s return to our seek() function and find the line of code where we set the magnitude of the desired velocity.

@@ -344,13 +344,13 @@

6.4 Arriving Behavior

In Example 6.1, the magnitude of the desired vector is always “maximum” speed.

-Figure 6.8&nbsp;
+Figure 6.8

What if we instead said the desired velocity is equal to half the distance?

-Figure 6.9&nbsp;
+Figure 6.9
@@ -366,7 +366,7 @@ 

6.4 Arriving Behavior

Reynolds describes a more sophisticated approach. Let’s imagine a circle around the target with a given radius. If the vehicle is within that circle, it slows down—at the edge of the circle, its desired speed is maximum speed, and at the target itself, its desired speed is 0.

-Figure 6.10&nbsp;
+Figure 6.10

In other words, if the distance from the target is less than r, the desired speed is between 0 and maximum speed mapped according to that distance.
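That mapping is a straight linear scale, sketched here as a standalone helper (r and maxspeed values are illustrative):

```java
public class Arrive {
    // Map distance within the slowing circle of radius r onto [0, maxspeed]:
    // at the edge of the circle desired speed is maxspeed, at the target 0.
    static double desiredSpeed(double distance, double r, double maxspeed) {
        if (distance >= r) return maxspeed; // outside the circle: full speed
        return distance / r * maxspeed;     // inside: scale linearly with distance
    }

    public static void main(String[] args) {
        System.out.println(desiredSpeed(100, 100, 4)); // at the edge: 4.0
        System.out.println(desiredSpeed(50, 100, 4));  // halfway in: 2.0
        System.out.println(desiredSpeed(0, 100, 4));   // at the target: 0.0
    }
}
```

In a Processing sketch this is exactly what map(d, 0, r, 0, maxspeed) does for d inside the circle.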

@@ -413,7 +413,7 @@

6.4 Arriving Behavior

The steering force, therefore, is essentially a manifestation of the current velocity’s error: "I’m supposed to be going this fast in this direction, but I’m actually going this fast in another direction. My error is the difference between where I want to go and where I am currently going." Taking that error and applying it as a steering force results in more dynamic, lifelike simulations. With gravitational attraction, you would never have a force pointing away from the target, no matter how close. But with arriving via steering, if you are moving too fast towards the target, the error would actually tell you to slow down!

-Figure 6.11&nbsp;
+Figure 6.11
@@ -431,7 +431,7 @@

6.5 Your Own Desires: Desired Velocity

“Wandering is a type of random steering which has some long term order: the steering direction on one frame is related to the steering direction on the next frame. This produces more interesting motion than, for example, simply generating a random steering direction each frame.” —Craig Reynolds
-Figure 6.12&nbsp;
+Figure 6.12
@@ -459,7 +459,7 @@
Exercise 6.4

If a vehicle comes within a distance d of a wall, it desires to move at maximum speed in the opposite direction of the wall.

-Figure 6.13&nbsp;
+Figure 6.13

If we define the walls of the space as the edges of a Processing window and the distance d as 25, the code is rather simple.

@@ -496,7 +496,7 @@

6.6 Flow Fields

Now back to the task at hand. Let’s examine a couple more of Reynolds’s steering behaviors. First, flow field following. What is a flow field? Think of your Processing window as a grid. In each cell of the grid lives an arrow pointing in some direction—you know, a vector. As a vehicle moves around the screen, it asks, “Hey, what arrow is beneath me? That’s my desired velocity!”

-Figure 6.14&nbsp;
+Figure 6.14

Reynolds’s flow field following example has the vehicle predicting its future location and following the vector at that spot, but for simplicity’s sake, we’ll have the vehicle simply look to the vector at its current location.

@@ -535,7 +535,7 @@

6.6 Flow Fields

Now that we’ve set up the flow field’s data structures, it’s time to compute the vectors in the flow field itself. How do we do that? However we feel like it! Perhaps we want to have every vector in the flow field pointing to the right.

-Figure 6.15&nbsp;
+Figure 6.15
@@ -552,7 +552,7 @@ 

6.6 Flow Fields

Or perhaps we want the vectors to point in random directions.

-Figure 6.16&nbsp;
+Figure 6.16
@@ -569,7 +569,7 @@ 

6.6 Flow Fields

-Figure 6.17&nbsp;
+Figure 6.17
@@ -715,7 +715,7 @@ 

6.7 The Dot Product

Remember all the basic vector math we covered in Chapter 1? Add, subtract, multiply, and divide?

-Figure 6.18&nbsp;
+Figure 6.18

Notice how in the above diagram, vector multiplication involves multiplying a vector by a scalar value. This makes sense; when we want a vector to be twice as large (but facing the same direction), we multiply it by 2. When we want it to be half the size, we multiply it by 0.5.

@@ -775,7 +775,7 @@

6.7 The Dot Product

A · B = ‖A‖ × ‖B‖ × cos(θ)

-Figure 6.19&nbsp;
+Figure 6.19

Now, let’s start with the following problem. We have the vectors A and B:

@@ -854,7 +854,7 @@

6.8 Path Following

Before we work out the individual pieces, let’s take a look at the overall algorithm for path following, as defined by Reynolds.

-Figure 6.20&nbsp;
+Figure 6.20
@@ -910,7 +910,7 @@

6.8 Path Following

Now, let’s assume we have a vehicle (as depicted below) outside of the path’s radius, moving with a velocity.

-Figure 6.23&nbsp;
+Figure 6.23

The first thing we want to do is predict, assuming a constant velocity, where that vehicle will be in the future.

@@ -936,7 +936,7 @@

6.8 Path Following

So, how do we find the distance between a point and a line? This concept is key. The distance between a point and a line is defined as the length of the normal between that point and line. The normal is a vector that extends from that point and is perpendicular to the line.

-Figure 6.24&nbsp;
+Figure 6.24

Let’s figure out what we do know. We know we have a vector (call it A) that extends from the path’s starting point to the vehicle’s predicted location.

@@ -952,7 +952,7 @@

6.8 Path Following

Now, with basic trigonometry, we know that the distance from the path’s start to the normal point is: |A| * cos(theta).

-Figure 6.25&nbsp;
+Figure 6.25

If we knew theta, we could easily define that normal point as follows:

@@ -1018,7 +1018,7 @@

6.8 Path Following

This process is commonly known as “scalar projection.” |A| cos(θ) is the scalar projection of A onto B.
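The whole chain — dot product, scalar projection, normal point — fits in a few lines. A standalone Java sketch (arrays instead of PVectors, and assuming A and B don't start at the same point):

```java
public class ScalarProjection {
    // Normal point of P onto the line through A and B:
    // project the vector A->P onto the unit vector along A->B,
    // then walk that scalar distance from A along the line.
    static double[] getNormalPoint(double[] p, double[] a, double[] b) {
        double apx = p[0] - a[0], apy = p[1] - a[1]; // A -> P
        double abx = b[0] - a[0], aby = b[1] - a[1]; // A -> B
        double abLen = Math.sqrt(abx * abx + aby * aby);
        abx /= abLen; aby /= abLen;                  // normalize A -> B
        double d = apx * abx + apy * aby;            // dot product = |AP| cos(theta)
        return new double[] {a[0] + abx * d, a[1] + aby * d};
    }

    public static void main(String[] args) {
        double[] n = getNormalPoint(new double[] {3, 5},
                                    new double[] {0, 0},
                                    new double[] {10, 0});
        System.out.println(n[0] + ", " + n[1]); // (3, 0): directly below the point
    }
}
```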

-Figure 6.26&nbsp;
+Figure 6.26
@@ -1026,7 +1026,7 @@

6.8 Path Following

Once we have the normal point along the path, we have to decide whether the vehicle should steer towards the path and how. Reynolds’s algorithm states that the vehicle should only steer towards the path if it strays beyond the path (i.e., if the distance between the normal point and the predicted future location is greater than the path radius).

-Figure 6.27&nbsp;
+Figure 6.27
@@ -1057,7 +1057,7 @@ 

6.8 Path Following

Since we know the vector that defines the path (we’re calling it “B”), we can implement Reynolds’s “point ahead on the path” without too much trouble.

-Figure 6.28&nbsp;
+Figure 6.28
@@ -1117,7 +1117,7 @@ 

6.8 Path Following

Now, you may notice above that instead of using all that dot product/scalar projection code to find the normal point, we instead call a function: getNormalPoint(). In cases like this, it’s useful to break out the code that performs a specific task (finding a normal point) into a function so that it can be used generically in any case where it is required. The function takes three PVectors: the first defines a point in Cartesian space, and the second and third define a line segment.

-Figure 6.29&nbsp;
+Figure 6.29
@@ -1148,13 +1148,13 @@ 

6.9 Path Following with Multiple Segments

-Figure 6.30&nbsp;
+Figure 6.30

We’ve built a great example so far, yes, but it’s pretty darn limiting. After all, what if we want our path to be something that looks more like:

-Figure 6.31&nbsp;
+Figure 6.31

While it’s true that we could make this example work for a curved path, we’re much less likely to end up needing a cool compress on our forehead if we stick with line segments. In the end, we can always employ the same technique we discovered with Box2D—we can draw whatever fancy curved path we want and approximate it behind the scenes with simple geometric forms.

@@ -1168,7 +1168,7 @@

6.9 Path Following with Multiple Segments

To find the target, we need to find the normal to the line segment. But now that we have a series of line segments, we have a series of normal points (see above)! Which one do we choose? The solution we’ll employ is to pick the normal point that is (a) closest and (b) on the path itself.

-Figure 6.32&nbsp;
+Figure 6.32

If we have a point and an infinitely long line, we’ll always have a normal. But, as in the path-following example, if we have a point and a line segment, we won’t necessarily find a normal that is on the line segment itself. So if this happens for any of the segments, we can disqualify those normals. Once we are left with normals that are on the path itself (only two in the above diagram), we simply pick the one that is closest to our vehicle’s location.

@@ -1417,7 +1417,7 @@

6.11 Group Behaviors (or: Let’s not run into each other)

}
-Figure 6.33&nbsp;
+Figure 6.33
@@ -1425,7 +1425,7 @@

6.11 Group Behaviors (or: Let’s not run into each other)

Of course, this is just the beginning. The real work happens inside the separate() function itself. Let’s figure out how we want to define separation. Reynolds states: “Steer to avoid crowding.” In other words, if a given vehicle is too close to you, steer away from that vehicle. Sound familiar? Remember the seek behavior where a vehicle steers towards a target? Reverse that force and we have the flee behavior.

-Figure 6.34&nbsp;
+Figure 6.34

But what if more than one vehicle is too close? In this case, we’ll define separation as the average of all the vectors pointing away from any close vehicles.

@@ -1704,7 +1704,7 @@

6.13 Flocking

-Figure 6.35&nbsp;
+Figure 6.35

Just as we did with our separate and seek example, we’ll want our Boid objects to have a single function that manages all the above behaviors. We’ll call this function flock().

@@ -1767,7 +1767,7 @@

6.13 Flocking

In our alignment function, we’re taking the average velocity of all the boids, whereas we should really only be looking at the boids within a certain distance. That distance threshold is up to you, of course. You could design boids that can see only twenty pixels away or boids that can see a hundred pixels away.

-Figure 6.36&nbsp;
+Figure 6.36

Much like we did with separation (only calculating a force for others within a certain distance), we’ll want to do the same with alignment (and cohesion).

@@ -1967,7 +1967,7 @@

6.14 Algorithmic Efficiency (or: Why does my $@(*%! run so slowly?)

What if we could divide the screen into a grid? We would take all 2,000 boids and assign each boid to a cell within that grid. We would then be able to look at each boid and compare it to its neighbors within that cell at any given moment. Imagine a 10 x 10 grid. In a system of 2,000 elements, on average, approximately 20 elements would be found in each cell (20 x 10 x 10 = 2,000). Each cell would then require 20 x 20 = 400 cycles. With 100 cells, we’d have 100 x 400 = 40,000 cycles, a massive savings over 4,000,000.
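The core of this "bin-lattice" optimization is just mapping a position to a cell index. A minimal sketch (window and grid dimensions are illustrative choices):

```java
public class BinLattice {
    // Map a screen position to a grid cell so neighbor checks can be
    // limited to nearby cells instead of the full population.
    static int[] cellOf(double x, double y, double width, double height,
                        int cols, int rows) {
        int col = Math.min((int) (x / width * cols), cols - 1);  // clamp the
        int row = Math.min((int) (y / height * rows), rows - 1); // right/bottom edge
        return new int[] {col, row};
    }

    public static void main(String[] args) {
        // A 640x360 window carved into a 10x10 grid:
        int[] cell = cellOf(320, 180, 640, 360, 10, 10);
        System.out.println(cell[0] + ", " + cell[1]); // the center lands in cell (5, 5)
    }
}
```

Each frame, every boid is dropped into its cell, and separation/alignment/cohesion only compare boids sharing a cell (or its immediate neighbors).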

-Figure 6.37&nbsp;
+Figure 6.37
diff --git a/chapters/07_ca.html b/chapters/07_ca.html
index bd7e305d..a7174528 100644
--- a/chapters/07_ca.html
+++ b/chapters/07_ca.html
@@ -35,7 +35,7 @@

7.1 What Is a Cellular Automaton?

-Figure 7.1&nbsp;
+Figure 7.1
@@ -59,13 +59,13 @@

7.2 Elementary Cellular Automata

1) Grid. The simplest grid would be one-dimensional: a line of cells.

-Figure 7.2&nbsp;
+Figure 7.2

2) States. The simplest set of states (beyond having only one state) would be two states: 0 or 1.

-Figure 7.3&nbsp;
+Figure 7.3
@@ -87,7 +87,7 @@

7.2 Elementary Cellular Automata

We haven’t yet discussed, however, what is perhaps the most important detail of how cellular automata work—time. We’re not really talking about real-world time here, but about the CA living over a period of time, which could also be called a generation and, in our case, will likely refer to the frame count of an animation. The figures above show us the CA at time equals 0 or generation 0. The questions we have to ask ourselves are: How do we compute the states for all cells at generation 1? And generation 2? And so on and so forth.

-Figure 7.6&nbsp;
+Figure 7.6

Let’s say we have an individual cell in the CA, and let’s call it CELL. The formula for calculating CELL’s state at any given time t is as follows:

@@ -97,7 +97,7 @@

7.2 Elementary Cellular Automata

In other words, a cell’s new state is a function of all the states in the cell’s neighborhood at the previous moment in time (or during the previous generation). We calculate a new state value by looking at all the previous neighbor states.

-Figure 7.7&nbsp;
+Figure 7.7

Now, in the world of cellular automata, there are many ways we could compute a cell’s state from a group of cells. Consider blurring an image. (Guess what? Image processing works with CA-like rules.) A pixel’s new state (i.e. its color) is the average of all of its neighbors’ colors. We could also say that a cell’s new state is the sum of all of its neighbors’ states. With Wolfram’s elementary CA, however, we can actually do something a bit simpler and seemingly absurd: We can look at all the possible configurations of a cell and its neighbor and define the state outcome for every possible configuration. It seems ridiculous—wouldn’t there be way too many possibilities for this to be practical? Let’s give it a try.

@@ -105,25 +105,25 @@

7.2 Elementary Cellular Automata

We have three cells, each with a state of 0 or 1. How many possible ways can we configure the states? If you love binary, you’ll notice that three cells define a 3-bit number, and how high can you count with 3 bits? Up to 8. Let’s have a look.

-Figure 7.8&nbsp;
+Figure 7.8

Once we have defined all the possible neighborhoods, we need to define an outcome (new state value: 0 or 1) for each neighborhood configuration.

-Figure 7.9&nbsp;
+Figure 7.9

The standard Wolfram model is to start generation 0 with all cells having a state of 0 except for the middle cell, which should have a state of 1.

-Figure 7.10&nbsp;
+Figure 7.10

Referring to the ruleset above, let’s see how a given cell (we’ll pick the center one) would change from generation 0 to generation 1.

-Figure 7.11&nbsp;
+Figure 7.11

Try applying the same logic to all of the cells above and fill in the empty cells.
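The same logic is easy to automate: treat each neighborhood (left, cell, right) as a 3-bit number and use it as an index into the ruleset. A standalone sketch (Rule 90 is the example choice here; any of the 256 rules works):

```java
public class ElementaryCA {
    // Compute the next generation from the current one. The neighborhood
    // (left, cell, right) forms a 3-bit index 0..7 into the ruleset array.
    static int[] step(int[] cells, int[] ruleset) {
        int[] next = new int[cells.length];
        for (int i = 1; i < cells.length - 1; i++) { // edge cells stay 0 here
            int index = cells[i - 1] * 4 + cells[i] * 2 + cells[i + 1];
            next[i] = ruleset[index];
        }
        return next;
    }

    public static void main(String[] args) {
        // ruleset[n] is the outcome for neighborhood n, from 000 (n=0) up to 111 (n=7).
        // This particular array encodes Wolfram's Rule 90 (binary 01011010).
        int[] rule90 = {0, 1, 0, 1, 1, 0, 1, 0};
        int[] gen0 = {0, 0, 0, 1, 0, 0, 0};  // single 1 in the middle cell
        int[] gen1 = step(gen0, rule90);
        StringBuilder sb = new StringBuilder();
        for (int c : gen1) sb.append(c);
        System.out.println(sb); // prints 0010100
    }
}
```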

@@ -182,7 +182,7 @@

7.3 How to Program an Elementary CA

This line of thinking, however, is not the road we will first travel. Later in this chapter, we will discuss why an object-oriented approach could prove valuable in developing a CA simulation, but to begin, we can work with a more elementary data structure. After all, what is an elementary CA but a list of 0s and 1s? Certainly, we could describe the following CA generation using an array:

-Figure 7.17&nbsp;
+Figure 7.17
@@ -623,7 +623,7 @@ 

7.6 The Game of Life

Let’s look at how the Game of Life works. It won’t take up too much time or space, since we’ve covered the basics of CA already.

-Figure 7.22&nbsp;
+Figure 7.22

First, instead of a line of cells, we now have a two-dimensional matrix of cells. As with the elementary CA, the possible states are 0 or 1. Only in this case, since we’re talking about “life,” 0 means dead and 1 means alive.

@@ -667,7 +667,7 @@

7.6 The Game of Life

Let’s look at a few examples.

-Figure 7.23&nbsp;
+Figure 7.23
@@ -677,19 +677,19 @@

7.6 The Game of Life

One of the exciting aspects of the Game of Life is that there are initial patterns that yield intriguing results. For example, some remain static and never change.

-Figure 7.24&nbsp;
+Figure 7.24

There are patterns that oscillate back and forth between two states.

-Figure 7.25&nbsp;
+Figure 7.25

And there are also patterns that from generation to generation move about the grid. (It’s important to note that the cells themselves aren’t actually moving, although we see the appearance of motion in the result as the cells turn on and off.)

-Figure 7.26&nbsp;
+Figure 7.26

If you are interested in these patterns, there are several good “out of the box” Game of Life demonstrations online that allow you to configure the CA’s initial state and watch it run at varying speeds. Two examples you might want to examine are:

@@ -743,7 +743,7 @@

7.7 Programming the Game of Life

}
-Figure 7.27&nbsp;
+Figure 7.27

OK. Before we can sort out how to actually calculate the new state, we need to know how we can reference each cell’s neighbors. In the case of the 1D CA, this was simple: if a cell index was i, its neighbors were i-1 and i+1. Here each cell doesn’t have a single index, but rather a column and row index: x,y. As shown in Figure 7.27, we can see that its neighbors are: (x-1,y-1), (x,y-1), (x+1,y-1), (x-1,y), (x+1,y), (x-1,y+1), (x,y+1), and (x+1,y+1).
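Those eight offsets collapse into a pair of nested loops. A standalone sketch of the neighbor count (assuming the cell is not on the board's edge, so all eight neighbors exist):

```java
public class LifeNeighbors {
    // Sum the states of the eight cells surrounding (x, y),
    // skipping the cell itself.
    static int neighbors(int[][] board, int x, int y) {
        int sum = 0;
        for (int i = -1; i <= 1; i++) {
            for (int j = -1; j <= 1; j++) {
                if (i == 0 && j == 0) continue; // don't count yourself
                sum += board[x + i][y + j];
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int[][] board = {
            {0, 0, 0},
            {1, 1, 1},  // a line of three live cells through the middle
            {0, 0, 0},
        };
        System.out.println(neighbors(board, 1, 1)); // the middle cell sees 2 live neighbors
    }
}
```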

diff --git a/chapters/08_fractals.html b/chapters/08_fractals.html
index 96607e65..c5bd7d26 100644
--- a/chapters/08_fractals.html
+++ b/chapters/08_fractals.html
@@ -17,7 +17,7 @@

Chapter 8. Fractals

Once upon a time, I took a course in high school called “Geometry.” Perhaps you did too. You learned about shapes in one dimension, two dimensions, and maybe even three. What is the circumference of a circle? The area of a rectangle? The distance between a point and a line? Come to think of it, we’ve been studying geometry all along in this book, using vectors to describe the motion of bodies in Cartesian space. This sort of geometry is generally referred to as Euclidean geometry, after the Greek mathematician Euclid.

Figure 8.1
-<figcaption>Figure 8.1&nbsp;</figcaption>
+<figcaption>Figure 8.1</figcaption>

For us nature coders, we have to ask the question: Can we describe our world with Euclidean geometry? The LCD screen I’m staring at right now sure looks like a rectangle. And the plum I ate this morning is circular. But what if I were to look further, and consider the trees that line the street, the leaves that hang off those trees, the lightning from last night’s thunderstorm, the cauliflower I ate for dinner, the blood vessels in my body, and the mountains and coastlines that cover land beyond New York City? Most of the stuff you find in nature cannot be described by the idealized geometrical forms of Euclidean geometry. So if we want to start building computational designs with patterns beyond the simple shapes ellipse(), rect(), and line(), it’s time for us to learn about the concepts behind and techniques for simulating the geometry of nature: fractals.

@@ -36,13 +36,13 @@

8.1 What Is a Fractal?

Let’s illustrate this definition with two simple examples. First, let’s think about a tree branching structure (for which we’ll write the code later):

Figure 8.3
-<figcaption>Figure 8.3&nbsp;</figcaption>
+<figcaption>Figure 8.3</figcaption>

Notice how the tree in Figure 8.3 has a single root with two branches connected at its end. Each one of those branches has two branches at its end and those branches have two branches and so on and so forth. What if we were to pluck one branch from the tree and examine it on its own?

Figure 8.4
-<figcaption>Figure 8.4&nbsp;</figcaption>
+<figcaption>Figure 8.4</figcaption>
@@ -66,7 +66,7 @@

8.1 What Is a Fractal?

In these graphs, the x-axis is time and the y-axis is the stock’s value. It’s not an accident that I omitted the labels, however. Graphs of stock market data are examples of fractals because they look the same at any scale. Are these graphs of the stock over one year? One day? One hour? There’s no way for you to know without a label. (Incidentally, graph A shows six months’ worth of data and graph B zooms into a tiny part of graph A, showing six hours.)

Figure 8.7
-<figcaption>Figure 8.7&nbsp;</figcaption>
+<figcaption>Figure 8.7</figcaption>

This is an example of a stochastic fractal, meaning that it is built out of probabilities and randomness. Unlike the deterministic tree-branching structure, it is statistically self-similar. As we go through the examples in this chapter, we will look at both deterministic and stochastic techniques for generating fractal patterns.

@@ -165,7 +165,7 @@

8.2 Recursion

It may look crazy, but it works. Here are the steps that happen when factorial(4) is called.

Figure 8.9
-<figcaption>Figure 8.9&nbsp;</figcaption>
+<figcaption>Figure 8.9</figcaption>
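The steps illustrated above can be written as a minimal recursive function. This sketch follows the chapter's definition of factorial; the class name Recursion is just a container for illustration.

```java
// n! defined recursively: factorial(4) unwinds as 4 * 3 * 2 * 1.
class Recursion {
  static int factorial(int n) {
    if (n <= 1) return 1;        // base case: stops the recursion
    return n * factorial(n - 1); // recursive case: n! = n * (n-1)!
  }
}
```

Calling factorial(4) pushes four calls onto the stack before the base case is reached, then multiplies the results on the way back up, yielding 24.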

We can apply the same principle to graphics with interesting results, as we will see in many examples throughout this chapter. Take a look at this recursive function.

@@ -270,11 +270,11 @@

8.3 The Cantor Set with a Recursive Function

we’d get the following:

Figure 8.10
-<figcaption>Figure 8.10&nbsp;</figcaption>
+<figcaption>Figure 8.10</figcaption>
Figure 8.11
-<figcaption>Figure 8.11&nbsp;</figcaption>
+<figcaption>Figure 8.11</figcaption>

Now, the Cantor rule tells us to erase the middle third of that line, which leaves us with two lines, one from the beginning of the line to the one-third mark, and one from the two-thirds mark to the end of the line.

@@ -293,7 +293,7 @@

8.3 The Cantor Set with a Recursive Function

}
Figure 8.12
-<figcaption>Figure 8.12&nbsp;</figcaption>
+<figcaption>Figure 8.12</figcaption>

While this is a fine start, such a manual approach of calling line() for each line is not what we want. It will get unwieldy very quickly, as we’d need four, then eight, then sixteen calls to line(). Yes, a for loop is our usual way around such a problem, but give that a try and you’ll see that working out the math for each iteration quickly proves inordinately complicated. Here is where recursion comes and rescues us.
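The recursive version of the Cantor rule is short. In the sketch below, instead of calling Processing's line(), each segment is collected into a list so the structure is visible and checkable; the names cantor and segments are assumptions, and the stopping condition (stop when a segment is shorter than one unit) is one reasonable choice.

```java
import java.util.ArrayList;

// Sketch of the recursive Cantor rule: record a segment, then recurse on its
// first and last thirds, implicitly "erasing" the middle third.
class Cantor {
  static ArrayList<float[]> segments = new ArrayList<float[]>();

  static void cantor(float x, float len) {
    if (len < 1) return;                    // stop once segments get tiny
    segments.add(new float[] { x, x + len });
    cantor(x, len / 3);                     // left third
    cantor(x + 2 * len / 3, len / 3);       // right third
  }
}
```

Starting with cantor(0, 9) produces one segment of length 9, two of length 3, and four of length 1 before the recursion bottoms out.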

@@ -365,13 +365,13 @@

8.4 The Koch Curve and the ArrayList Technique

To demonstrate this technique, let’s look at another famous fractal pattern, discovered in 1904 by Swedish mathematician Helge von Koch. Here are the rules. (Note that it starts the same way as the Cantor set, with a single line.)

Figure 8.13
-<figcaption>Figure 8.13&nbsp;</figcaption>
+<figcaption>Figure 8.13</figcaption>

The result looks like:

Figure 8.14
-<figcaption>Figure 8.14&nbsp;</figcaption>
+<figcaption>Figure 8.14</figcaption>
@@ -460,7 +460,7 @@

The “Monster” Curve

Remember the Game of Life cellular automata? In that simulation, we always kept track of two generations: current and next. When we were finished computing the next generation, next became current and we moved on to computing the new next generation. 
We are going to apply a similar technique here. We have an ArrayList that keeps track of the current set of KochLine objects (at the start of the program, there is only one). We will need a second ArrayList (let’s call it “next”) where we will place all the new KochLine objects that are generated from applying the Koch rules. For every KochLine object in the current ArrayList, four new KochLine objects are added to the next ArrayList. When we’re done, the next ArrayList becomes the current one.

Figure 8.15
-<figcaption>Figure 8.15&nbsp;</figcaption>
+<figcaption>Figure 8.15</figcaption>

Here’s how the code will look:

@@ -489,7 +489,7 @@

The “Monster” Curve

By calling generate() over and over again (for example, each time the mouse is pressed), we recursively apply the Koch curve rules to the existing set of KochLine objects. 
Of course, the above omits the real “work” here, which is figuring out those rules. How do we break one line segment into four as described by the rules? While this can be accomplished with some simple arithmetic and trigonometry, since our KochLine object uses PVector, this is a nice opportunity for us to practice our vector math. Let’s establish how many points we need to compute for each KochLine object.

Figure 8.16
-<figcaption>Figure 8.16&nbsp;</figcaption>
+<figcaption>Figure 8.16</figcaption>

As you can see from the above figure, we need five points (a, b, c, d, and e) to generate the new KochLine objects and make the new line segments (ab, cb, cd, and de).

@@ -545,7 +545,7 @@

The “Monster” Curve

Now let’s move on to points B and D. B is one-third of the way along the line segment and D is two-thirds. Here we can make a PVector that points from start to end and shrink it to one-third the length for B and two-thirds the length for D to find these points.

Figure 8.17
-<figcaption>Figure 8.17&nbsp;</figcaption>
+<figcaption>Figure 8.17</figcaption>
@@ -572,7 +572,7 @@ 

The “Monster” Curve

The last point, C, is the most difficult one to find. However, if you recall that the angles of an equilateral triangle are all sixty degrees, this makes it a little bit easier. If we know how to find point B with a PVector one-third the length of the line, what if we were to rotate that same PVector sixty degrees and move along that vector from point B? We’d be at point C!

Figure 8.18
-<figcaption>Figure 8.18&nbsp;</figcaption>
+<figcaption>Figure 8.18</figcaption>
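Putting the last few steps together: B and D come from shrinking the start-to-end vector to one-third and two-thirds, and C comes from rotating the one-third vector sixty degrees and walking from B. The sketch below uses plain doubles in place of PVector so it stands alone; the method name kochPoints and the flat return array are assumptions for illustration. (The angle is negative because the y-axis points down in screen coordinates, so the bump rises "upward.")

```java
// Sketch: compute the five Koch points a, b, c, d, e for one segment.
// Returns {ax, ay, bx, by, cx, cy, dx, dy, ex, ey}.
class Koch {
  static double[] kochPoints(double ax, double ay, double ex, double ey) {
    double vx = (ex - ax) / 3.0;               // one-third of start-to-end
    double vy = (ey - ay) / 3.0;
    double bx = ax + vx,     by = ay + vy;     // one-third mark
    double dx = ax + 2 * vx, dy = ay + 2 * vy; // two-thirds mark
    double theta = Math.toRadians(-60);        // rotate 60 degrees "upward"
    double rx = vx * Math.cos(theta) - vy * Math.sin(theta);
    double ry = vx * Math.sin(theta) + vy * Math.cos(theta);
    double cx = bx + rx, cy = by + ry;         // tip of the equilateral bump
    return new double[] { ax, ay, bx, by, cx, cy, dx, dy, ex, ey };
  }
}
```

For a horizontal segment from (0,0) to (3,0), this puts B at (1,0), D at (2,0), and C at the apex (1.5, -√3/2).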
@@ -663,7 +663,7 @@ 

8.5 Trees

The fractals we have examined in this chapter so far are deterministic, meaning they have no randomness and will always produce the identical outcome each time they are run. They are excellent demonstrations of classic fractals and the programming techniques behind drawing them, but are too precise to feel natural. In this next part of the chapter, I want to examine some techniques behind generating a stochastic (or non-deterministic) fractal. The example we’ll use is a branching tree. Let’s first walk through the steps to create a deterministic version. Here are our production rules:

Figure 8.19
-<figcaption>Figure 8.19&nbsp;</figcaption>
+<figcaption>Figure 8.19</figcaption>

Again, we have a nice fractal with a recursive definition: A branch is a line with two branches connected to it.

@@ -682,7 +682,7 @@

8.5 Trees

translate(width/2,height);
Figure 8.20
-<figcaption>Figure 8.20&nbsp;</figcaption>
+<figcaption>Figure 8.20</figcaption>

…followed by drawing a line upwards (Figure 8.20):

@@ -693,7 +693,7 @@

8.5 Trees

Once we’ve finished the root, we just need to translate to the end and rotate in order to draw the next branch. (Eventually, we’re going to need to package up what we’re doing right now into a recursive function, but let’s sort out the steps first.)

Figure 8.21
-<figcaption>Figure 8.21&nbsp;</figcaption>
+<figcaption>Figure 8.21</figcaption>

Remember, when we rotate in Processing, we are always rotating around the point of origin, so here the point of origin must always be translated to the end of our current branch.

@@ -706,11 +706,11 @@

8.5 Trees

Now that we have a branch going to the right, we need one going to the left. We can use pushMatrix() to save the transformation state before we rotate, letting us call popMatrix() to restore that state and draw the branch to the left. Let’s look at all the code together.

Figure 8.22
-<figcaption>Figure 8.22&nbsp;</figcaption>
+<figcaption>Figure 8.22</figcaption>
Figure 8.23
-<figcaption>Figure 8.23&nbsp;</figcaption>
+<figcaption>Figure 8.23</figcaption>
@@ -1086,7 +1086,7 @@ 

8.6 L-systems

Look familiar? This is the Cantor set generated with an L-system.

Figure 8.25
-<figcaption>Figure 8.25&nbsp;</figcaption>
+<figcaption>Figure 8.25</figcaption>

The following alphabet is often used with L-systems: “FG+-[]”, meaning:

diff --git a/chapters/09_ga.html b/chapters/09_ga.html
index 1283dc5e..ff9c132d 100644
--- a/chapters/09_ga.html
+++ b/chapters/09_ga.html
@@ -59,7 +59,7 @@

9.2 Why Use Genetic Algorithms?

To help illustrate the traditional genetic algorithm, we are going to start with monkeys. No, not our evolutionary ancestors. We’re going to start with some fictional monkeys that bang away on keyboards with the goal of typing out the complete works of Shakespeare.

Figure 9.1
-<figcaption>Figure 9.1&nbsp;</figcaption>
+<figcaption>Figure 9.1</figcaption>

The “infinite monkey theorem” is stated as follows: A monkey hitting keys randomly on a typewriter will eventually type the complete works of Shakespeare (given an infinite amount of time). The problem with this theory is that the probability of said monkey actually typing Shakespeare is so low that even if that monkey started at the Big Bang, it’s unbelievably unlikely we’d even have Hamlet at this point.

@@ -410,7 +410,7 @@

9.5 The Genetic Algorithm, Part II: Selection

Now it’s time for the wheel of fortune.

Figure 9.2
-<figcaption>Figure 9.2&nbsp;</figcaption>
+<figcaption>Figure 9.2</figcaption>

Spin the wheel and you’ll notice that Element B has the highest chance of being selected, followed by A, then E, then D, and finally C. This probability-based selection according to fitness is an excellent approach. One, it guarantees that the highest-scoring elements will be most likely to reproduce. Two, it does not entirely eliminate any variation from the population. Unlike with the elitist method, even the lowest-scoring element (in this case C) has a chance to pass its information down to the next generation. It’s quite possible (and often the case) that even low-scoring elements have a tiny nugget of genetic code that is truly useful and should not entirely be eliminated from the population. For example, in the case of evolving “to be or not to be”, we might have the following elements.

@@ -441,7 +441,7 @@

9.6 The Genetic Algorithm, Part III: Reproduction

It’s now up to us to make a child phrase from these two. Perhaps the most obvious way (let’s call this the 50/50 method) would be to take the first two characters from A and the second two from B, leaving us with:

Figure 9.3
-<figcaption>Figure 9.3&nbsp;</figcaption>
+<figcaption>Figure 9.3</figcaption>

A variation of this technique is to pick a random midpoint. In other words, we don’t have to pick exactly half of the code from each parent. We could sometimes end up with FLAY, and sometimes with FORY. This is preferable to the 50/50 approach, since we increase the variety of possibilities for the next generation.
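Random-midpoint crossover is only a few lines. This sketch operates on strings; the method name crossover is an assumption, and the parents FORK and PLAY in the usage below are the illustrative pair from the text.

```java
import java.util.Random;

// Sketch: characters up to a random midpoint come from parent A,
// the rest from parent B.
class Crossover {
  static Random rng = new Random();

  static String crossover(String a, String b) {
    int midpoint = rng.nextInt(a.length()); // random split point
    return a.substring(0, midpoint) + b.substring(midpoint);
  }
}
```

With parents "FORK" and "PLAY", a midpoint of 1 yields FLAY and a midpoint of 3 yields FORY; over many children, the next generation samples the whole range of split points.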

@@ -465,7 +465,7 @@

9.6 The Genetic Algorithm, Part III: Reproduction

Once the child DNA has been created via crossover, we apply one final process before adding the child to the next generation—mutation. Mutation is an optional step, as there are some cases in which it is unnecessary. However, it exists because of the Darwinian principle of variation. We created an initial population randomly, making sure that we start with a variety of elements. However, there can only be so much variety when seeding the first generation, and mutation allows us to introduce additional variety throughout the evolutionary process itself.

Figure 9.6
-<figcaption>Figure 9.6&nbsp;</figcaption>
+<figcaption>Figure 9.6</figcaption>
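Mutation is typically applied per character, with a small mutation rate. This is a sketch rather than the chapter's exact code: the name mutate, the A–Z gene alphabet, and passing the rate as a parameter are assumptions.

```java
import java.util.Random;

// Sketch: each character has a mutationRate chance of being replaced
// with a random letter, injecting fresh variety into the population.
class Mutation {
  static Random rng = new Random();

  static String mutate(String dna, double mutationRate) {
    char[] genes = dna.toCharArray();
    for (int i = 0; i < genes.length; i++) {
      if (rng.nextDouble() < mutationRate) {
        genes[i] = (char) ('A' + rng.nextInt(26)); // swap in a random gene
      }
    }
    return new String(genes);
  }
}
```

A mutation rate around 1% is a common starting point: high enough to keep variety flowing, low enough not to destroy the progress selection has made.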
@@ -625,7 +625,7 @@

Step 2: Selection

It might be fun to do something ridiculous and actually program a simulation of a spinning wheel as depicted above. But this is quite unnecessary.

Figure 9.7
-<figcaption>Figure 9.7&nbsp;</figcaption>
+<figcaption>Figure 9.7</figcaption>

Instead we can pick from the five options (ABCDE) according to their probabilities by filling an ArrayList with multiple instances of each parent. In other words, let’s say you had a bucket of wooden letters—30 As, 40 Bs, 5 Cs, 15 Ds, and 10 Es.
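The bucket-of-letters idea translates directly into code: add each parent to a list a number of times proportional to its fitness, then pick from the list uniformly at random. The names matingPool, addToPool, and pick are assumptions for this sketch.

```java
import java.util.ArrayList;
import java.util.Random;

// Sketch of wheel-of-fortune selection via a mating pool:
// higher fitness -> more copies -> higher chance of being picked.
class WheelOfFortune {
  static ArrayList<Character> matingPool = new ArrayList<Character>();

  static void addToPool(char parent, int copies) {
    for (int i = 0; i < copies; i++) matingPool.add(parent);
  }

  static char pick(Random rng) {
    return matingPool.get(rng.nextInt(matingPool.size()));
  }
}
```

Filling the pool with 30 As, 40 Bs, 5 Cs, 15 Ds, and 10 Es gives B a 40% chance per pick and still leaves C a 5% chance, exactly like the spinning wheel.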

@@ -1130,13 +1130,13 @@

Key #2: The fitness function

To put it another way, let’s graph the fitness function.

Figure 9.8
-<figcaption>Figure 9.8&nbsp;</figcaption>
+<figcaption>Figure 9.8</figcaption>

This is a linear graph; as the number of characters goes up, so does the fitness score. However, what if the fitness increased exponentially as the number of correct characters increased? Our graph could then look something like:

Figure 9.9
-<figcaption>Figure 9.9&nbsp;</figcaption>
+<figcaption>Figure 9.9</figcaption>

The more correct characters, the even greater the fitness. We can achieve this type of result in a number of different ways. For example, we could say:
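One concrete option, sketched below, is to double the fitness for each additional correct character; the text's own formula may differ, so treat this as an illustration of the shape of the curve rather than the chapter's definition.

```java
// Sketch: linear vs. exponential fitness for a given number of
// correct characters.
class Fitness {
  static double linear(int correct) {
    return correct;
  }
  static double exponential(int correct) {
    return Math.pow(2, correct); // each extra correct character doubles fitness
  }
}
```

Under the linear scheme, going from 4 to 5 correct characters adds one fitness point; under the exponential scheme, it doubles the score, so near-perfect phrases dominate the mating pool much more strongly.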

@@ -1367,11 +1367,11 @@

9.10 Evolving Forces: Smart Rockets

A population of rockets launches from the bottom of the screen with the goal of hitting a target at the top of the screen (with obstacles blocking a straight line path).

Figure 9.10
-<figcaption>Figure 9.10&nbsp;</figcaption>
+<figcaption>Figure 9.10</figcaption>
Figure 9.11
-<figcaption>Figure 9.11&nbsp;</figcaption>
+<figcaption>Figure 9.11</figcaption>

Each rocket is equipped with five thrusters of variable strength and direction. The thrusters don’t fire all at once and continuously; rather, they fire one at a time in a custom sequence.

@@ -1574,13 +1574,13 @@

9.10 Evolving Forces: Smart Rockets

PVector v = new PVector(random(-1,1),random(-1,1));
Figure 9.12
-<figcaption>Figure 9.12&nbsp;</figcaption>
+<figcaption>Figure 9.12</figcaption>

This is perfectly fine and will likely do the trick. However, if we were to draw every single possible vector we might pick, the result would fill a square (see Figure 9.12). In this case, it probably doesn’t matter, but there is a slight bias to diagonals here given that a PVector from the center of a square to a corner is longer than a purely vertical or horizontal one.

Figure 9.13
-<figcaption>Figure 9.13&nbsp;</figcaption>
+<figcaption>Figure 9.13</figcaption>

What would be better here is to pick a random angle and make a PVector of length one from that angle, giving us a circle (see Figure 9.13). This could be easily done with a quick polar to Cartesian conversion, but a quicker path to the result is just to use PVector's random2D().
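The polar-to-Cartesian route is worth seeing once, since it is essentially what random2D() does for you. This sketch uses plain doubles instead of PVector; the name random2D here mirrors Processing's method but is our own standalone version.

```java
import java.util.Random;

// Sketch: pick a random angle, take its cosine and sine.
// Every result has length one, so no direction is favored.
class RandomUnit {
  static double[] random2D(Random rng) {
    double angle = rng.nextDouble() * 2 * Math.PI;
    return new double[] { Math.cos(angle), Math.sin(angle) };
  }
}
```

Because the magnitude is always exactly one, sampling many of these vectors traces out a circle rather than filling a square, eliminating the diagonal bias described above.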

@@ -1933,7 +1933,7 @@

9.12 Interactive Selection

Think of all the rating systems you’ve ever used. Could you evolve the perfect movie by scoring all films according to your Netflix ratings? The perfect singer according to American Idol voting?

Figure 9.14
-<figcaption>Figure 9.14&nbsp;</figcaption>
+<figcaption>Figure 9.14</figcaption>

To illustrate this technique, we’re going to build a population of simple faces. Each face will have a set of properties: head size, head color, eye location, eye size, mouth color, mouth location, mouth width, and mouth height.

@@ -2188,7 +2188,7 @@

Genotype and Phenotype

The ability for a bloop to find food is tied to two variables—size and speed. Bigger bloops will find food more easily simply because their size will allow them to intersect with food locations more often. And faster bloops will find more food because they can cover more ground in a shorter period of time.

Figure 9.15
-<figcaption>Figure 9.15&nbsp;</figcaption>
+<figcaption>Figure 9.15</figcaption>

Since size and speed are inversely related (large bloops are slow, small bloops are fast), we only need a genotype with a single number.
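A single gene between 0 and 1 can drive both traits by mapping it to each range in opposite directions, in the spirit of Processing's map(); the specific output ranges below (size 0–50, speed 15–0) are assumptions for illustration.

```java
// Sketch: one gene, two inversely related phenotype traits.
class Bloop {
  static double map(double v, double a, double b, double c, double d) {
    return c + (v - a) * (d - c) / (b - a); // linear re-mapping, like map()
  }
  static double size(double gene)  { return map(gene, 0, 1, 0, 50); }  // big gene -> big bloop
  static double speed(double gene) { return map(gene, 0, 1, 15, 0); }  // big gene -> slow bloop
}
```

A gene of 1 yields a large, slow bloop; a gene of 0 yields a tiny, fast one; everything in between trades size for speed linearly.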

diff --git a/chapters/10_nn.html b/chapters/10_nn.html
index fa51752a..8e5c929c 100644
--- a/chapters/10_nn.html
+++ b/chapters/10_nn.html
@@ -14,7 +14,7 @@

Chapter 10. Neural Networks

The human brain can be described as a biological neural network—an interconnected web of neurons transmitting elaborate patterns of electrical signals. Dendrites receive input signals and, based on those inputs, fire an output signal via an axon. Or something like that. How the human brain actually works is an elaborate and complex mystery, one that we certainly are not going to attempt to tackle in rigorous detail in this chapter.

Figure 10.1
-<figcaption>Figure 10.1&nbsp;</figcaption>
+<figcaption>Figure 10.1</figcaption>

The good news is that developing engaging animated systems with code does not require scientific rigor or accuracy, as we’ve learned throughout this book. We can simply be inspired by the idea of brain function.

@@ -39,7 +39,7 @@

10.1 Artificial Neural Networks: Introduction and Application

The most common application of neural networks in computing today is to perform one of these “easy-for-a-human, difficult-for-a-machine” tasks, often referred to as pattern recognition. Applications range from optical character recognition (turning printed or handwritten scans into digital text) to facial recognition. We don’t have the time or need to use some of these more elaborate artificial intelligence algorithms here, but if you are interested in researching neural networks, I’d recommend the books Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig and AI for Game Developers by David M. Bourg and Glenn Seemann.

Figure 10.2
-<figcaption>Figure 10.2&nbsp;</figcaption>
+<figcaption>Figure 10.2</figcaption>
@@ -217,7 +217,7 @@

10.3 Simple Pattern Recognition Using a Perceptron

Now that we understand the computational process of a perceptron, we can look at an example of one in action. We stated that neural networks are often used for pattern recognition applications, such as facial recognition. Even simple perceptrons can demonstrate the basics of classification, as in the following example.

Figure 10.4
-<figcaption>Figure 10.4&nbsp;</figcaption>
+<figcaption>Figure 10.4</figcaption>

Consider a line in two-dimensional space. Points in that space can be classified as living on either one side of the line or the other. While this is a somewhat silly example (since there is clearly no need for a neural network; we can determine on which side a point lies with some simple algebra), it shows how a perceptron can be trained to recognize points on one side versus another.

@@ -227,7 +227,7 @@

10.3 Simple Pattern Recognition Using a Perceptron

The perceptron itself can be diagrammed as follows:

Figure 10.5
-<figcaption>Figure 10.5&nbsp;</figcaption>
+<figcaption>Figure 10.5</figcaption>

We can see how there are two inputs (x and y), a weight for each input (weightx and weighty), as well as a processing neuron that generates the output.

@@ -237,7 +237,7 @@

10.3 Simple Pattern Recognition Using a Perceptron

To avoid this dilemma, our perceptron will require a third input, typically referred to as a bias input. A bias input always has the value of 1 and is also weighted. Here is our perceptron with the addition of the bias:

Figure 10.6
-<figcaption>Figure 10.6&nbsp;</figcaption>
+<figcaption>Figure 10.6</figcaption>
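The feedforward step with a bias can be sketched in a few lines. The weights below are fixed, hand-picked values for illustration (training would adjust them), and the sign activation simply reports which side of the line the weighted sum falls on.

```java
// Sketch of a perceptron's feedforward step: weight each input
// (the bias input is always 1), sum, and apply a sign activation.
class Perceptron {
  double[] weights; // {weight for x, weight for y, weight for bias}

  Perceptron(double... w) { weights = w; }

  int feedforward(double x, double y) {
    double sum = x * weights[0] + y * weights[1] + 1 * weights[2];
    return sum > 0 ? 1 : -1; // which side of the line?
  }
}
```

Notice that at the point (0,0) the x and y terms vanish, so the bias weight alone decides the output, which is exactly the dilemma the bias input was introduced to solve.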

Let’s go back to the point (0,0). Here are our inputs:

@@ -286,7 +286,7 @@

10.4 Coding the Perceptron

Presumably, we could now create a Perceptron object and ask it to make a guess for any given point.

Figure 10.7
-<figcaption>Figure 10.7&nbsp;</figcaption>
+<figcaption>Figure 10.7</figcaption>
@@ -546,7 +546,7 @@ 

10.4 Coding the Perceptron

If the y value we are examining is above the line, it will be less than yline.

Figure 10.8
-<figcaption>Figure 10.8&nbsp;</figcaption>
+<figcaption>Figure 10.8</figcaption>
@@ -667,7 +667,7 @@ 

10.5 A Steering Perceptron

Here’s our scenario. Let’s say we have a Processing sketch with an ArrayList of targets and a single vehicle.

Figure 10.9
-<figcaption>Figure 10.9&nbsp;</figcaption>
+<figcaption>Figure 10.9</figcaption>

Let’s say that the vehicle seeks all of the targets. According to the principles of Chapter 6, we would next write a function that calculates a steering force towards each target, applying each force one at a time to the object’s acceleration. Assuming the targets are an ArrayList of PVector objects, it would look something like:

@@ -783,7 +783,7 @@

10.5 A Steering Perceptron

brain.train(forces,error);
Figure 10.10
-<figcaption>Figure 10.10&nbsp;</figcaption>
+<figcaption>Figure 10.10</figcaption>

Here we are passing the brain a copy of all the inputs (which it will need for error correction) as well as an observation about its environment: a PVector that points from its current location to where it desires to be. This PVector essentially serves as the error—the longer the PVector, the worse the vehicle is performing; the shorter, the better.

@@ -937,7 +937,7 @@

10.6 It’s a “Network,” Remember?

Yes, a perceptron can have multiple inputs, but it is still a lonely neuron. The power of neural networks comes in the networking itself. Perceptrons are, sadly, incredibly limited in their abilities. If you read an AI textbook, it will say that a perceptron can only solve linearly separable problems. What’s a linearly separable problem? Let’s take a look at our first example, which determined whether points were on one side of a line or the other.

Figure 10.11
-<figcaption>Figure 10.11&nbsp;</figcaption>
+<figcaption>Figure 10.11</figcaption>

On the left of Figure 10.11, we have classic linearly separable data. Graph all of the possibilities; if you can classify the data with a straight line, then it is linearly separable. On the right, however, is non-linearly separable data. You can’t draw a straight line to separate the black dots from the gray ones.

@@ -947,7 +947,7 @@

10.6 It’s a “Network,” Remember?

One of the simplest examples of a non-linearly separable problem is XOR, or “exclusive or.” We’re all familiar with AND. For A AND B to be true, both A and B must be true. With OR, either A or B can be true for A OR B to evaluate as true. These are both linearly separable problems. Let’s look at the solution space, a “truth table.”

Figure 10.12
-<figcaption>Figure 10.12&nbsp;</figcaption>
+<figcaption>Figure 10.12</figcaption>

See how you can draw a line to separate the true outputs from the false ones?

@@ -955,7 +955,7 @@

10.6 It’s a “Network,” Remember?

XOR is the equivalent of OR and NOT AND. In other words, A XOR B only evaluates to true if one of them is true. If both are false or both are true, then we get false. Take a look at the following truth table.

Figure 10.13
-<figcaption>Figure 10.13&nbsp;</figcaption>
+<figcaption>Figure 10.13</figcaption>

This is not linearly separable. Try to draw a straight line to separate the true outputs from the false ones—you can’t!

@@ -965,7 +965,7 @@

10.6 It’s a “Network,” Remember?

So perceptrons can’t even solve something as simple as XOR. But what if we made a network out of two perceptrons? If one perceptron can solve OR and one perceptron can solve NOT AND, then two perceptrons combined can solve XOR.
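That composition can be demonstrated with hand-picked weights: one step unit computes OR, one computes NOT AND, and a third ANDs their outputs together. This is a sketch of the idea, not a trained network; all the weights and thresholds below are chosen by hand.

```java
// Sketch: XOR from linearly separable pieces.
// Each "neuron" is a weighted sum followed by a step activation.
class Xor {
  static int step(double sum) { return sum > 0 ? 1 : 0; }

  static int or(int a, int b)   { return step(a + b - 0.5); } // true if either input fires
  static int nand(int a, int b) { return step(1.5 - a - b); } // true unless both fire
  static int xor(int a, int b)  { return step(or(a, b) + nand(a, b) - 1.5); } // AND of the two
}
```

Each individual unit solves a linearly separable problem; only by layering them does the network as a whole solve XOR, which is the essential point of multi-layered perceptrons.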

Figure 10.14
-<figcaption>Figure 10.14&nbsp;</figcaption>
+<figcaption>Figure 10.14</figcaption>

The above diagram is known as a multi-layered perceptron, a network of many neurons. Some are input neurons and receive the inputs, some are part of what’s called a “hidden” layer (as they are connected to neither the inputs nor the outputs of the network directly), and then there are the output neurons, from which we read the results.

@@ -989,7 +989,7 @@

10.7 Neural Network Diagrams

Our goal will be to create the following simple network diagram:

Figure 10.15
-<figcaption>Figure 10.15&nbsp;</figcaption>
+<figcaption>Figure 10.15</figcaption>

The primary building block for this diagram is a neuron. For the purpose of this example, the Neuron class describes an entity with an (x,y) location.

@@ -1304,7 +1304,7 @@

10.8 Animating Feed Forward

This resembles the following:

Figure 10.16
-<figcaption>Figure 10.16&nbsp;</figcaption>
+<figcaption>Figure 10.16</figcaption>

OK, so that’s how we might move something along the connection. But how do we know when to do so? We start this process the moment the Connection object receives the “feedforward” signal. We can keep track of this process by employing a simple boolean to know whether the connection is sending or not. Before, we had:

diff --git a/regex-notes.txt b/regex-notes.txt
index 93fdb893..78c5a8ed 100644
--- a/regex-notes.txt
+++ b/regex-notes.txt
@@ -20,5 +20,5 @@
 figcaption fixes
 (.*?)&nbsp;
 $1$1
-Figure (\d+.\d+)&nbsp;
+Figure (\d+\.\d+)&nbsp;
 Figure $1
\ No newline at end of file