From: Daniel Shiffman

Here are some vectors and possible translations:

You’ve probably done this before when programming motion. For every frame of animation (i.e. a single cycle through Processing’s draw() loop), you instruct each object on the screen to move a certain number of pixels horizontally and a certain number of pixels vertically. For every frame:

Nevertheless, another way to describe a location is the path taken from the origin to reach that location. Hence, a location can be the vector representing the difference between location and origin.

Let’s examine the underlying data for both location and velocity. In the bouncing ball example, we had the following:

Let’s say I have the following two vectors:

Each vector has two components, an x and a y. To add two vectors together, we simply add both x’s and both y’s. In other words:

Calculating the magnitude of a vector is only the beginning. The magnitude function opens the door to many possibilities, the first of which is normalization. Normalizing refers to the process of making something “standard” or, well, “normal.” In the case of vectors, let’s assume for the moment that a standard vector has a length of 1. To normalize a vector, therefore, is to take a vector of any length and, keeping it pointing in the same direction, change its length to 1, turning it into what is called a unit vector. Since it describes a vector’s direction without regard to its length, it’s useful to have the unit vector readily accessible. We’ll see this come in handy once we start to work with forces in Chapter 2. In the PVector class, we therefore write our normalization function as follows:

To finish out this chapter, let’s try something a bit more complex and a great deal more useful. We’ll dynamically calculate an object’s acceleration according to a rule stated in Algorithm #3 — the object accelerates towards the mouse.
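Before continuing, the vector operations described above (adding components, computing magnitude, normalizing to length 1) can be sketched in a few lines. This is a plain-Python stand-in for illustration, not Processing's actual PVector implementation:

```python
import math

class Vec2:
    """A tiny stand-in for Processing's PVector (illustrative only)."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def add(self, other):
        # Vector addition: add both x's and both y's.
        self.x += other.x
        self.y += other.y

    def mag(self):
        # Magnitude via the Pythagorean theorem.
        return math.sqrt(self.x * self.x + self.y * self.y)

    def normalize(self):
        # Divide each component by the magnitude, yielding a unit vector.
        m = self.mag()
        if m != 0:            # guard against dividing by zero
            self.x /= m
            self.y /= m

v = Vec2(3, 4)
v.add(Vec2(1, -1))            # v is now (4, 3), magnitude 5
v.normalize()                 # v keeps its direction, magnitude becomes 1
```

Note that normalize() leaves the direction untouched: only the length changes.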
Anytime we want to calculate a vector based on a rule or a formula, we need to compute two things: magnitude and direction. Let’s start with direction. We know the acceleration vector should point from the object’s location towards the mouse location. Let’s say the object is located at the point (x,y) and the mouse at (mouseX,mouseY). In Figure 1.15, we see that we can get a vector (dx,dy) by subtracting the object’s location from the mouse’s location.

And if you are wearing roller skates when you push on that truck? You’ll accelerate away from the truck, sliding along the road while the truck stays put. Why do you slide but not the truck? For one, the truck has a much larger mass (which we’ll get into with Newton’s second law). There are other forces at work too, namely the friction of the truck’s tires and your roller skates against the road. Here’s the formula for friction:

Probably the most famous force of all is gravity. We humans on earth think of gravity as an apple hitting Isaac Newton on the head. Gravity means that stuff falls down. But this is only our experience of gravity. In truth, just as the earth pulls the apple towards it due to a gravitational force, the apple pulls the earth as well. The thing is, the earth is just so freaking big that it overwhelms all the other gravity interactions. Every object with mass exerts a gravitational force on every other object. And there is a formula for calculating the strengths of these forces, as depicted in Figure 2.6.

Given these assumptions, we want to compute PVector force, the force of gravity. We’ll do it in two parts. First, we’ll compute the direction of the force (the unit vector r̂ in the formula above). Second, we’ll calculate the strength of the force according to the masses and distance. Remember in Chapter 1, when we figured out how to have an object accelerate towards the mouse? (See Figure 2.7.) The only problem is that we don’t know the distance.
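The two-part computation described above (direction as a unit vector, strength from the masses and distance) can be sketched in plain Python. The values for G, the masses, and the two locations below are made up for illustration:

```python
import math

# Hypothetical values; in a sketch, G, mass1, and mass2 would be variables.
G = 1.0
mass1, mass2 = 10.0, 20.0
x1, y1 = 0.0, 0.0      # body 1 (the attractor)
x2, y2 = 30.0, 40.0    # body 2, which feels the force

# Direction: a vector pointing from body 2 toward body 1...
dx, dy = x1 - x2, y1 - y2
d = math.sqrt(dx * dx + dy * dy)   # distance = magnitude of that vector

# ...normalized to length 1.
ux, uy = dx / d, dy / d

# Strength: G * m1 * m2 / d^2
strength = G * mass1 * mass2 / (d * d)

# The force vector is the unit direction scaled by the strength.
fx, fy = ux * strength, uy * strength
```

Notice that the distance falls out of the same subtraction that gives us the direction, which is exactly the observation the text makes next.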
G, mass1, and mass2 were all givens, but we’ll need to actually compute distance before the above code will work. Didn’t we just make a vector that points all the way from one location to another? Wouldn’t the length of that vector be the distance between two objects?

Now that we’ve worked out the math and the code for calculating an attractive force (emulating gravity), we need to turn our attention to applying this technique in the context of an actual Processing sketch. In Example 2.1, you may recall how we created a simple Mover object—a class with PVectors for location, velocity, and acceleration, as well as an applyForce() function.

The first order of business is to cover radians and degrees. You’re probably familiar with the concept of an angle in degrees. A full rotation goes from 0 to 360 degrees. 90 degrees (a right angle) is 1/4th of 360, shown below as two perpendicular lines. It’s fairly intuitive for us to think of angles in terms of degrees. For example, the square in Figure 3.2 is rotated 45 degrees around its center. Processing, however, requires angles to be specified in radians. A radian is a unit of measurement for angles defined by the ratio of the length of the arc of a circle to the radius of that circle. One radian is the angle at which that ratio equals one (see Figure 3.1). 180 degrees = PI radians, 360 degrees = 2*PI radians, 90 degrees = PI/2 radians, etc.

I think it may be time. We’ve looked at angles, we’ve spun an object. It’s time for: sohcahtoa. Yes, sohcahtoa. This seemingly nonsensical word is actually the foundation for a lot of computer graphics work. A basic understanding of trigonometry is essential if you want to calculate an angle, figure out the distance between points, or work with circles, arcs, or lines. And sohcahtoa is a mnemonic device (albeit a somewhat absurd one) for what the trigonometric functions sine, cosine, and tangent mean. Take a look at Figure 3.4 again. There’s no need to memorize it, but make sure you feel comfortable with it. Draw it again yourself.
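The degree–radian relationships above amount to one multiplication each way. Processing provides radians() and degrees() for this; the helpers below are illustrative Python stand-ins:

```python
import math

def to_radians(deg):
    # radians = degrees * PI / 180
    return deg * math.pi / 180

def to_degrees(rad):
    # degrees = radians * 180 / PI
    return rad * 180 / math.pi

half_turn = to_radians(180)       # PI radians
right_angle = to_radians(90)      # PI/2 radians
```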
Now let’s draw it a slightly different way (Figure 3.5).

You might notice that almost all of the shapes we’ve been drawing so far are circles. This is convenient for a number of reasons, one of which is that we don’t have to consider the question of rotation. Rotate a circle and, well, it looks exactly the same. However, there comes a time in all motion programmers’ lives when they want to draw something on the screen that points in the direction of movement. Perhaps you are drawing an ant, or a car, or a spaceship. And when we say "point in the direction of movement," what we are really saying is “rotate according to the velocity vector.” Velocity is a vector, with an x and a y component, but to rotate in Processing we need an angle, in radians. Let’s draw our trigonometry diagram one more time, with an object’s velocity vector (Figure 3.6).

Now the above code is pretty darn close, and almost works. We still have a big problem, though. Let’s consider the two velocity vectors depicted below. Though superficially similar, the two vectors point in quite different directions—opposite directions, in fact! However, if we were to apply our formula to solve for the angle to each vector…

Do you miss Newton’s laws of motion? I know I sure do. Well, lucky for you, it’s time to bring it all back home. After all, it’s been nice learning about triangles and tangents and waves, but really, the core of this book is about simulating the physics of moving bodies. Let’s take a look at how trigonometry can help us with this pursuit.

A pendulum is a bob suspended from a pivot. Obviously a real-world pendulum would live in a 3D space, but we’re going to look at a simpler scenario, a pendulum in a 2D space—a Processing window (see Figure 3.10). Let’s zoom in on the right triangle from the pendulum diagram.

…as well as a function display() to draw the pendulum in the window.
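Backing up to the two opposite-pointing velocity vectors above: the problem is worth checking numerically. A plain atan(y/x) gives the same angle for both, because the signs of the components cancel in the division; atan2() receives the components separately and resolves the ambiguity (Processing exposes the same atan2() function). A Python sketch:

```python
import math

# Two velocity vectors pointing in exactly opposite directions:
v1 = (4.0, 3.0)
v2 = (-4.0, -3.0)

# Naive atan(y/x) cannot tell them apart: -3 / -4 == 3 / 4.
a1 = math.atan(v1[1] / v1[0])
a2 = math.atan(v2[1] / v2[0])    # identical to a1, even though v2 is opposite

# atan2 takes y and x separately, so the component signs survive.
b1 = math.atan2(v1[1], v1[0])
b2 = math.atan2(v2[1], v2[0])    # differs from b1 by PI radians (180 degrees)
```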
This begs the question: “Um, where do we draw the pendulum?” We know the angle and the arm length, but how do we know the x,y (Cartesian!) coordinates for both the pendulum’s pivot point (let’s call it origin) and bob location (let’s call it location)? This may be getting a little tiring, but the answer, yet again, is trigonometry.

In section 3.6, we looked at modeling simple harmonic motion by mapping the sine wave to a pixel range. Exercise 3.6 asked you to use this technique to create a simulation of a bob hanging from a spring. While using the sin() function is a quick-and-dirty, one-line-of-code way of getting something up and running, it won’t do if what we really want is to have a bob hanging from a spring in a two-dimensional space that responds to other forces in the environment (wind, gravity, etc.). To accomplish a simulation like this (one that is identical to the pendulum example, only now the arm is a springy connection), we need to model the forces of a spring using PVector.

Now remember, force is a vector, so we need to calculate both magnitude and direction. Let’s look at one more diagram of the spring and label all the givens we might have in a Processing sketch. Let’s establish the following three variables as shown in Figure 3.16.

Now that we’ve sorted out the elements necessary for the magnitude of the force (-1 * k * x), we need to figure out the direction, a unit vector pointing in the direction of the force. The good news is that we already have this vector. Right? Just a moment ago we thought to ourselves: “How can we calculate that distance? How about the magnitude of a vector that points from the anchor to the bob?” Well, that is the direction of the force!

In Figure 3.17, we can see that if we stretch the spring beyond its rest length, there should be a force pulling it back towards the anchor. And if it shrinks below its rest length, the force should push it away from the anchor.
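Putting the pieces above together: the magnitude is -1 * k * x (where x is the stretch relative to rest length), and the direction comes from normalizing the anchor-to-bob vector. A minimal Python sketch, with hypothetical values for the anchor, bob, rest length, and k:

```python
import math

# Hypothetical givens, mirroring the three variables of Figure 3.16.
anchor = (0.0, 0.0)
bob = (0.0, 120.0)        # bob hangs 120 pixels below the anchor
rest_length = 100.0
k = 0.1

# Vector pointing from the anchor to the bob.
dx, dy = bob[0] - anchor[0], bob[1] - anchor[1]
current_length = math.sqrt(dx * dx + dy * dy)

# x in Hooke's law: extension beyond (or compression below) rest length.
stretch = current_length - rest_length

# Normalize for the direction, then scale by -1 * k * stretch.
ux, uy = dx / current_length, dy / current_length
fx, fy = -k * stretch * ux, -k * stretch * uy
# Stretched spring (stretch > 0): the force points back toward the anchor.
```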
This reversal of direction is accounted for in the formula with the -1. And so all we need to do is normalize the PVector we used for the distance calculation! Let’s take a look at the code and rename that PVector variable as “force.”

One option would be to write out all of the spring force code in the main draw() loop. But thinking ahead to when you might have multiple bobs and multiple spring connections, it makes a good deal of sense to write an additional class, a Spring class. As shown in Figure 3.18, the Bob class keeps track of the movements of the bob; the Spring class keeps track of the spring’s anchor and its rest length and calculates the spring force on the bob.

While removing elements from the ArrayList during a loop doesn’t cause the program to crash (as it does with adding), the problem is almost more insidious in that it leaves no evidence. To discover the problem we must first establish an important fact. When an object is removed from the ArrayList, all elements are shifted one spot to the left. Note the diagram below where particle C (index 2) is removed. Particles A and B keep the same index, while particles D and E shift from 3 and 4 to 2 and 3, respectively. Let’s pretend we are looping through the ArrayList with an index i.

Inheritance makes this all possible. With inheritance, classes can inherit properties (variables) and functionality (methods) from other classes. A Dog class is a child (subclass) of an Animal class. Children will automatically inherit all variables and functions from the parent (superclass), but can also include functions and variables not found in the parent. Like a phylogenetic "tree of life," inheritance follows a tree structure. Dogs inherit from canines, which inherit from mammals, which inherit from animals, etc.

Both of these images were generated from identical algorithms. The only difference is that a white circle is drawn in image A for each particle and a “fuzzy” blob is drawn for each in B.
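The index-shifting problem described above is easy to reproduce and to avoid. One common fix, sketched here in Python (the book's examples are Processing/Java, but the idea is the same), is to loop backwards, so that a removal only shifts elements that have already been visited:

```python
particles = ["A", "B", "C", "D", "E"]

# Looping forward while removing shifts later elements one spot to the left,
# so the element right after a removed one is silently skipped.
# Looping BACKWARD avoids this: removals only move already-visited elements.
for i in range(len(particles) - 1, -1, -1):
    if particles[i] in ("B", "C"):   # pretend these particles are "dead"
        particles.pop(i)

# particles is left holding only the survivors, none skipped.
```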
If we’re thinking about shapes like rectangles or circles, question #1 isn’t too tough. You’ve likely encountered this before. For example, we know two circles are intersecting if the distance between them is less than the sum of their radii.

OK. Now that we know how to determine if two circles are colliding, how do we calculate their velocities after the collision? This is where we’re going to stop our discussion. Why, you ask? It’s not that understanding the math behind collisions isn’t important or valuable. (In fact, I’m including additional examples on the website related to collisions without a physics library.) The reason for stopping is that life is short (let this also be a reason for you to consider going outside and frolicking instead of programming altogether). We can’t expect to master every detail of physics simulation. And while we could continue this discussion for circles, it’s only going to lead us to wanting to work with rectangles. And strangely shaped polygons. And curved surfaces. And swinging pendulums colliding with springy springs. And and and and and.

Notice how in Box2D (0,0) is in the center and up is the positive direction along the y-axis! Box2D’s coordinate system is just like that lovely old-fashioned Cartesian one with (0,0) in the center and up pointing in a positive direction. Processing, on the other hand, uses a traditional computer graphics coordinate system where (0,0) is in the top left corner and down is the positive direction along the y-axis. This is why if we want objects to fall down with gravity, we need to give Box2D a gravity force with a negative y-value.

Now, Box2D will keep a list of all the bodies that exist in the world. This can be accessed by calling the World object’s getBodyList() function. Nevertheless, what I’m going to demonstrate here is a technique for keeping your own body lists. Yes, this may be a bit redundant and we perhaps sacrifice a bit of efficiency.
But we more than make up for that with ease of use. This methodology will allow us to program like we’re used to in Processing, and we can easily keep track of which bodies are which and render them appropriately. Let’s consider the structure of the following Processing sketch:

This looks like any ol’ Processing sketch. We have a main tab called “Boxes” and a “Boundary” and a “Box” tab. Let’s think about the Box tab for a moment. The Box tab is where we will write a simple class to describe a Box object, a rectangular body in our world.

Once we have the location and angle, it’s easy to display the object using translate() and rotate(). Note, however, that the Box2D coordinate system considers rotation in the opposite direction from Processing, so we need to multiply the angle by -1.

Now that we’ve seen how easy it is to make simple geometric forms in Box2D, let’s imagine that you want to have a more complex form, such as a little alien stick figure. When building your own polygon in Box2D, you must remember two important details.

The above looks pretty good, but sadly, if we run it, we’ll get the following result:

When you attach a shape to a body, by default, the center of the shape will be located at the center of the body. But in our case, if we take the center of the rectangle to be the center of the body, we want the center of the circle to be offset along the y-axis from the body’s center.

Box2D joints allow you to connect one body to another, enabling more advanced simulations of swinging pendulums, elastic bridges, squishy characters, wheels spinning on an axle, etc. There are many different kinds of Box2D joints. In this chapter we’re going to look at three: distance joints, revolute joints, and “mouse” joints.

Kinematic bodies can be controlled by the user by setting their velocity directly. For example, let’s say you want an object to follow a target (like your mouse). You could create a vector that points from a body’s location to a target.
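The follow-a-target idea in the last paragraph above takes only a few lines: build the vector to the target, normalize it, scale it to the speed you want, and assign it as the body's velocity. A Python sketch with hypothetical values (in a real sketch you would hand the result to the Box2D body's velocity setter):

```python
import math

def velocity_toward(body, target, speed):
    # Vector from the body to the target.
    dx, dy = target[0] - body[0], target[1] - body[1]
    d = math.hypot(dx, dy)
    if d == 0:
        return (0.0, 0.0)          # already at the target; stand still
    # Normalize, then scale to the desired speed.
    return (dx / d * speed, dy / d * speed)

v = velocity_toward((0.0, 0.0), (3.0, 4.0), 10.0)   # points at the target
```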
The above methodology is known as Euler integration (named for the mathematician Leonhard Euler, pronounced “Oiler”) or the Euler method. It’s essentially the simplest form of integration and very easy to implement in our code (see the two lines above!). However, it is not necessarily the most efficient form, nor is it close to being the most accurate.

Why is Euler inaccurate? Let’s think about it this way. When you drive a car down the road pressing the gas pedal with your foot and accelerating, does the car sit in one location at time equals one second, then disappear and suddenly reappear in a new location at time equals two seconds, and do the same thing for three seconds, and four, and five? No, of course not. The car moves continuously down the road. But what’s happening in our Processing sketch? A circle is at one location at frame 0, another at frame 1, another at frame 2. Sure, at thirty frames per second, we’re seeing the illusion of motion. But we only calculate a new location every N units of time, whereas the real world is perfectly continuous. This results in some inaccuracies, as shown in the diagram below: the “real world” is the curve; the Euler simulation is the series of line segments.

The above example, two particles connected with a single spring, is the core building block for what toxiclibs’ physics is particularly well suited for: soft body simulations. For example, a string can be simulated by connecting a line of particles with springs. A blanket can be simulated by connecting a grid of particles with springs. And a cute, cuddly, squishy cartoon character can be simulated by a custom layout of particles connected with springs.

Let’s begin by simulating a “soft pendulum”—a bob hanging from a string, instead of a rigid arm like we had in Chapter 3, Example 10. Let’s use the "string" in Figure 5.14 above as our model. Now, let’s say we want to have 20 particles, all spaced 10 pixels apart.
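That layout, plus the connection rule described next (each particle tied to the one before it), can be sketched in Python with tuples standing in for the particle objects and index pairs standing in for the springs a physics library would create:

```python
# Lay out 20 particles in a vertical line, spaced 10 pixels apart.
num_particles = 20
spacing = 10
particles = [(0, i * spacing) for i in range(num_particles)]

# Connect particle i to particle i-1 (nothing to connect for i == 0).
springs = []
for i in range(1, num_particles):
    springs.append((i - 1, i))
```

With 20 particles there are 19 connections, one between each consecutive pair.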
Now for the fun part: It’s time to connect all the particles. Particle 1 will be connected to particle 0, particle 2 to particle 1, 3 to 2, 4 to 3, etc. In other words, particle i needs to be connected to particle i-1 (except for when i equals zero).

We can entertain ourselves by discussing the theoretical principles behind autonomous agents and steering as much as we like, but we can’t get anywhere without first understanding the concept of a steering force. Consider the following scenario. A vehicle moving with velocity desires to seek a target. Its goal and subsequent action is to seek the target in Figure 6.1.

If you think back to Chapter 2, you might begin by making the target an attractor and applying a gravitational force that pulls the vehicle to the target. This would be a perfectly reasonable solution, but conceptually it’s not what we’re looking for here. We don’t want to simply calculate a force that pushes the vehicle towards its target; rather, we are asking the vehicle to make an intelligent decision to steer towards the target based on its perception of its state and environment (i.e. how fast and in what direction is it currently moving). The vehicle should look at how it desires to move (a vector pointing to the target), compare that goal with how quickly it is currently moving (its velocity), and apply a force accordingly.

In the above formula, velocity is no problem. After all, we’ve got a variable for that. However, we don’t have the desired velocity; this is something we have to calculate. Let’s take a look at Figure 6.2. If we’ve defined the vehicle’s goal as “seeking the target,” then its desired velocity is a vector that points from its current location to the target location. Assuming a PVector target, we then have:

Putting this all together, we can write a function called seek() that receives a PVector target and calculates a steering force towards that target. So why does this all work so well?
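Here is a sketch of that seek() logic in Python, with tuples in place of PVectors and hypothetical maxspeed and maxforce values; the Processing version would use PVector's sub(), normalize(), mult(), and limit():

```python
import math

def limit(v, max_len):
    # Clamp a vector's magnitude to max_len, preserving its direction.
    m = math.hypot(v[0], v[1])
    if m > max_len:
        return (v[0] / m * max_len, v[1] / m * max_len)
    return v

def seek(location, velocity, target, maxspeed, maxforce):
    # Desired velocity: from location toward target, at maximum speed.
    dx, dy = target[0] - location[0], target[1] - location[1]
    d = math.hypot(dx, dy)
    desired = (dx / d * maxspeed, dy / d * maxspeed)
    # Steering force = desired minus velocity, limited to maxforce.
    steer = (desired[0] - velocity[0], desired[1] - velocity[1])
    return limit(steer, maxforce)

# A stationary vehicle gets the full (limited) push toward the target;
# one already moving at its desired velocity gets no push at all.
push = seek((0.0, 0.0), (0.0, 0.0), (10.0, 0.0), 4.0, 0.5)
coast = seek((0.0, 0.0), (4.0, 0.0), (10.0, 0.0), 4.0, 0.5)
```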
Let’s see what the steering force looks like relative to the vehicle and target locations. Again, notice how this is not at all the same force as gravitational attraction. Remember one of our principles of autonomous agents: An autonomous agent has a limited ability to perceive its environment. Here is that ability, subtly embedded into Reynolds’s steering formula. If the vehicle weren’t moving at all (zero velocity), desired minus velocity would be equal to desired. But this is not the case. The vehicle is aware of its own velocity and its steering force compensates accordingly. This creates a more active simulation, as the way in which the vehicle moves towards the targets depends on the way it is moving in the first place.

Limiting the steering force brings up an important point. We must always remember that it’s not actually our goal to get the vehicle to the target as fast as possible. If that were the case, we would just say “location equals target” and there the vehicle would be. Our goal, as Reynolds puts it, is to move the vehicle in a “lifelike and improvisational manner.” We’re trying to make it appear as if the vehicle is steering its way to the target, and so it’s up to us to play with the forces and variables of the system to simulate a given behavior. For example, a large maximum steering force would result in a very different path than a small one. One is not inherently better or worse than the other; it depends on your desired effect. (And of course, these values need not be fixed and could change based on other conditions. Perhaps a vehicle has health: the higher the health, the better it can steer.)

Here is the full Vehicle class, incorporating the rest of the elements from the Chapter 2 Mover object.

The vehicle is so gosh darn excited about getting to the target that it doesn’t bother to make any intelligent decisions about its speed relative to the target’s proximity.
Whether it’s far away or very close, it always wants to go as fast as possible. In some cases, this is the desired behavior (if a missile is flying at a target, it should always travel at maximum speed). However, in many other cases (a car pulling into a parking spot, a bee landing on a flower), the vehicle’s thought process needs to consider its speed relative to the distance from its target. For example:
6.4 Arriving Behavior
Frame 6: I’m there. I want to stop!
How can we implement this “arriving” behavior in code? Let’s return to our seek() function and find the line of code where we set the magnitude of the desired velocity.
In Example 6.1, the magnitude of the desired vector is always “maximum” speed.
What if we instead said the desired velocity is equal to half the distance?
Reynolds describes a more sophisticated approach. Let’s imagine a circle around the target with a given radius. If the vehicle is within that circle, it slows down—at the edge of the circle, its desired speed is maximum speed, and at the target itself, its desired speed is 0.
In other words, if the distance from the target is less than r, the desired speed is between 0 and maximum speed mapped according to that distance.
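That mapping is a single line of arithmetic. A Python sketch, with r and maxspeed standing in for whatever the sketch uses:

```python
def arrive_speed(distance, r, maxspeed):
    # Inside the slowing circle, map distance [0, r] to speed [0, maxspeed];
    # outside it, the desired speed is simply maximum speed.
    if distance < r:
        return distance / r * maxspeed
    return maxspeed
```

At the edge of the circle the vehicle still wants full speed, halfway in it wants half speed, and at the target itself it wants to stop.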
The steering force, therefore, is essentially a manifestation of the current velocity’s error: "I’m supposed to be going this fast in this direction, but I’m actually going this fast in another direction. My error is the difference between where I want to go and where I am currently going." Taking that error and applying it as a steering force results in more dynamic, lifelike simulations. With gravitational attraction, you would never have a force pointing away from the target, no matter how close. But with arriving via steering, if you are moving too fast towards the target, the error would actually tell you to slow down!
6.5 Your Own Desires: Desired Velocity
“Wandering is a type of random steering which has some long term order: the steering direction on one frame is related to the steering direction on the next frame. This produces more interesting motion than, for example, simply generating a random steering direction each frame.” —Craig Reynolds

Exercise 6.4
If a vehicle comes within a distance d of a wall, it desires to move at maximum speed in the opposite direction of the wall.
If we define the walls of the space as the edges of a Processing window and the distance d as 25, the code is rather simple.
6.6 Flow Fields
Now back to the task at hand. Let’s examine a couple more of Reynolds’s steering behaviors. First, flow field following. What is a flow field? Think of your Processing window as a grid. In each cell of the grid lives an arrow pointing in some direction—you know, a vector. As a vehicle moves around the screen, it asks, “Hey, what arrow is beneath me? That’s my desired velocity!”
Reynolds’s flow field following example has the vehicle predicting its future location and following the vector at that spot, but for simplicity’s sake, we’ll have the vehicle simply look to the vector at its current location.
Now that we’ve set up the flow field’s data structures, it’s time to compute the vectors in the flow field itself. How do we do that? However we feel like it! Perhaps we want to have every vector in the flow field pointing to the right.
Or perhaps we want the vectors to point in random directions.
6.7 The Dot Product
Remember all the basic vector math we covered in Chapter 1? Add, subtract, multiply, and divide?
Notice how in the above diagram, vector multiplication involves multiplying a vector by a scalar value. This makes sense; when we want a vector to be twice as large (but facing the same direction), we multiply it by 2. When we want it to be half the size, we multiply it by 0.5.
A · B = |A| × |B| × cos(θ)
Now, let’s start with the following problem. We have the vectors A and B:
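Rearranging the identity above gives θ = acos(A · B / (|A| |B|)), which recovers the angle between any two vectors. A Python sketch:

```python
import math

def angle_between(a, b):
    # A . B = |A| * |B| * cos(theta)  =>  theta = acos(A . B / (|A| * |B|))
    dot = a[0] * b[0] + a[1] * b[1]
    return math.acos(dot / (math.hypot(a[0], a[1]) * math.hypot(b[0], b[1])))

right_angle = angle_between((1.0, 0.0), (0.0, 1.0))   # perpendicular vectors
```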
6.8 Path Following
Before we work out the individual pieces, let’s take a look at the overall algorithm for path following, as defined by Reynolds.
Now, let’s assume we have a vehicle (as depicted below) outside of the path’s radius, moving with a velocity.
The first thing we want to do is predict, assuming a constant velocity, where that vehicle will be in the future.
So, how do we find the distance between a point and a line? This concept is key. The distance between a point and a line is defined as the length of the normal between that point and line. The normal is a vector that extends from that point and is perpendicular to the line.
Let’s figure out what we do know. We know we have a vector (call it A) that extends from the path’s starting point to the vehicle’s predicted location.
Now, with basic trigonometry, we know that the distance from the path’s start to the normal point is: |A| * cos(theta).
If we knew theta, we could easily define that normal point as follows:
This process is commonly known as “scalar projection.” |A| cos(θ) is the scalar projection of A onto B.
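The scalar projection gives us the normal point directly: start at the beginning of the line, and travel |A| cos(θ) along its unit vector. A Python sketch of a getNormalPoint()-style helper, with tuples standing in for PVectors:

```python
import math

def get_normal_point(p, a, b):
    # Vector from a (the line's start) to the point p...
    ap = (p[0] - a[0], p[1] - a[1])
    # ...and a unit vector along the line from a to b.
    ab = (b[0] - a[0], b[1] - a[1])
    ab_len = math.hypot(ab[0], ab[1])
    ab_unit = (ab[0] / ab_len, ab[1] / ab_len)
    # Scalar projection |A| cos(theta) is just ap . ab_unit.
    sp = ap[0] * ab_unit[0] + ap[1] * ab_unit[1]
    # Normal point: the line's start plus the unit vector scaled by sp.
    return (a[0] + ab_unit[0] * sp, a[1] + ab_unit[1] * sp)

# The normal from (3, 4) onto the horizontal line y = 0 drops to (3, 0).
np_point = get_normal_point((3.0, 4.0), (0.0, 0.0), (10.0, 0.0))
```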
Once we have the normal point along the path, we have to decide whether the vehicle should steer towards the path and how. Reynolds’s algorithm states that the vehicle should only steer towards the path if it strays beyond the path (i.e., if the distance between the normal point and the predicted future location is greater than the path radius).
Since we know the vector that defines the path (we’re calling it “B”), we can implement Reynolds’s “point ahead on the path” without too much trouble.
Now, you may notice above that instead of using all that dot product/scalar projection code to find the normal point, we instead call a function: getNormalPoint(). In cases like this, it’s useful to break out the code that performs a specific task (finding a normal point) into a function that can be used generically in any case where it is required. The function takes three PVectors: the first defines a point in Cartesian space, and the second and third arguments define a line segment.

6.9 Path Following with Multiple Segments
We’ve built a great example so far, yes, but it’s pretty darn limiting. After all, what if we want our path to be something that looks more like:
While it’s true that we could make this example work for a curved path, we’re much less likely to end up needing a cool compress on our forehead if we stick with line segments. In the end, we can always employ the same technique we discovered with Box2D—we can draw whatever fancy curved path we want and approximate it behind the scenes with simple geometric forms.
To find the target, we need to find the normal to the line segment. But now that we have a series of line segments, we have a series of normal points (see above)! Which one do we choose? The solution we’ll employ is to pick the normal point that is (a) closest and (b) on the path itself.
If we have a point and an infinitely long line, we’ll always have a normal. But, as in the path-following example, if we have a point and a line segment, we won’t necessarily find a normal that is on the line segment itself. So if this happens for any of the segments, we can disqualify those normals. Once we are left with normals that are on the path itself (only two in the above diagram), we simply pick the one that is closest to our vehicle’s location.
6.11 Group Behaviors (or: Let’s not run into each other)
Of course, this is just the beginning. The real work happens inside the separate() function itself. Let’s figure out how we want to define separation. Reynolds states: “Steer to avoid crowding.” In other words, if a given vehicle is too close to you, steer away from that vehicle. Sound familiar? Remember the seek behavior where a vehicle steers towards a target? Reverse that force and we have the flee behavior.
But what if more than one vehicle is too close? In this case, we’ll define separation as the average of all the vectors pointing away from any close vehicles.
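A Python sketch of that averaging, with locations as tuples and a hypothetical "too close" threshold:

```python
import math

def separate(location, others, too_close):
    # Average all the normalized vectors pointing AWAY from any neighbor
    # that is closer than too_close.
    sum_x, sum_y, count = 0.0, 0.0, 0
    for o in others:
        dx, dy = location[0] - o[0], location[1] - o[1]  # away from neighbor
        d = math.hypot(dx, dy)
        if 0 < d < too_close:
            sum_x += dx / d         # normalized "flee" direction
            sum_y += dy / d
            count += 1
    if count == 0:
        return (0.0, 0.0)           # nobody close: no desire to separate
    return (sum_x / count, sum_y / count)

# Two close neighbors (right and above) push the vehicle down and to the left.
away = separate((0.0, 0.0), [(5.0, 0.0), (0.0, 5.0), (100.0, 100.0)], 10.0)
```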
@@ -1704,7 +1704,7 @@6.13 Flocking
Just as we did with our separate and seek example, we’ll want our Boid objects to have a single function that manages all the above behaviors. We’ll call this function flock().
@@ -1767,7 +1767,7 @@6.13 Flocking
In our alignment function, we’re taking the average velocity of all the boids, whereas we should really only be looking at the boids within a certain distance. That distance threshold is up to you, of course. You could design boids that can see only twenty pixels away or boids that can see a hundred pixels away.
Much like we did with separation (only calculating a force for others within a certain distance), we’ll want to do the same with alignment (and cohesion).
@@ -1967,7 +1967,7 @@6.14 Algorithmic Efficiency (or: Why does my $@(*%! run so slowly?)
What if we could divide the screen into a grid? We would take all 2,000 boids and assign each boid to a cell within that grid. We would then be able to look at each boid and compare it to its neighbors within that cell at any given moment. Imagine a 10 x 10 grid. In a system of 2,000 elements, on average, approximately 20 elements would be found in each cell (20 x 10 x 10 = 2,000). Each cell would then require 20 x 20 = 400 cycles. With 100 cells, we’d have 100 x 400 = 40,000 cycles, a massive savings over 4,000,000.
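The cell-assignment step can be sketched in plain Java (no Processing API needed); the class and method names here are hypothetical, and the grid is assumed to cover the whole screen evenly:

```java
import java.util.ArrayList;
import java.util.List;

public class BinLattice {
    // Assign each 2D point (a boid's location) to a grid cell. Neighbor
    // checks then only happen among points that share a cell, instead of
    // every point checking every other point.
    public static List<Integer>[] bin(float[][] pts, float w, float h, int cols, int rows) {
        @SuppressWarnings("unchecked")
        List<Integer>[] cells = new List[cols * rows];
        for (int i = 0; i < cells.length; i++) cells[i] = new ArrayList<>();
        for (int i = 0; i < pts.length; i++) {
            int cx = Math.min(cols - 1, (int) (pts[i][0] / w * cols));
            int cy = Math.min(rows - 1, (int) (pts[i][1] / h * rows));
            cells[cy * cols + cx].add(i);   // point i lives in cell (cx, cy)
        }
        return cells;
    }
}
```

Once every boid's index is binned this way, the separation loop only visits the indices sharing a cell (a fuller version would also visit the eight neighboring cells).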
diff --git a/chapters/07_ca.html b/chapters/07_ca.html index bd7e305d..a7174528 100644 --- a/chapters/07_ca.html +++ b/chapters/07_ca.html @@ -35,7 +35,7 @@7.1 What Is a Cellular Automaton?
@@ -59,13 +59,13 @@7.2 Elementary Cellular Automata
1) Grid. The simplest grid would be one-dimensional: a line of cells.
2) States. The simplest set of states (beyond having only one state) would be two states: 0 or 1.
@@ -87,7 +87,7 @@7.2 Elementary Cellular Automata
We haven’t yet discussed, however, what is perhaps the most important detail of how cellular automata work—time. We’re not really talking about real-world time here, but about the CA living over a period of time, which could also be called a generation and, in our case, will likely refer to the frame count of an animation. The figures above show us the CA at time equals 0 or generation 0. The questions we have to ask ourselves are: How do we compute the states for all cells at generation 1? And generation 2? And so on and so forth.
Let’s say we have an individual cell in the CA, and let’s call it CELL. The formula for calculating CELL’s state at any given time t is as follows:
@@ -97,7 +97,7 @@7.2 Elementary Cellular Automata
In other words, a cell’s new state is a function of all the states in the cell’s neighborhood at the previous moment in time (or during the previous generation). We calculate a new state value by looking at all the previous neighbor states.
Now, in the world of cellular automata, there are many ways we could compute a cell’s state from a group of cells. Consider blurring an image. (Guess what? Image processing works with CA-like rules.) A pixel’s new state (i.e. its color) is the average of all of its neighbors’ colors. We could also say that a cell’s new state is the sum of all of its neighbors’ states. With Wolfram’s elementary CA, however, we can actually do something a bit simpler and seemingly absurd: We can look at all the possible configurations of a cell and its neighbors and define the state outcome for every possible configuration. It seems ridiculous—wouldn’t there be way too many possibilities for this to be practical? Let’s give it a try.
@@ -105,25 +105,25 @@7.2 Elementary Cellular Automata
We have three cells, each with a state of 0 or 1. How many possible ways can we configure the states? If you love binary, you’ll notice that three cells define a 3-bit number, and 3 bits give us eight possible configurations (000 up through 111). Let’s have a look.
Once we have defined all the possible neighborhoods, we need to define an outcome (new state value: 0 or 1) for each neighborhood configuration.
The standard Wolfram model is to start generation 0 with all cells having a state of 0 except for the middle cell, which should have a state of 1.
Referring to the ruleset above, let’s see how a given cell (we’ll pick the center one) would change from generation 0 to generation 1.
Try applying the same logic to all of the cells above and fill in the empty cells.
@@ -182,7 +182,7 @@7.3 How to Program an Elementary CA
This line of thinking, however, is not the road we will first travel. Later in this chapter, we will discuss why an object-oriented approach could prove valuable in developing a CA simulation, but to begin, we can work with a more elementary data structure. After all, what is an elementary CA but a list of 0s and 1s? Certainly, we could describe the following CA generation using an array:
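As a sketch of that idea, here is one generation step for an elementary CA stored as an array of 0s and 1s. This is plain Java rather than a Processing sketch, the edges wrap around (one of several reasonable edge choices), and the ruleset array is indexed by the 3-bit value of the neighborhood:

```java
public class ElementaryCA {
    // One generation step: each cell's new state is looked up from the
    // ruleset by the 3-bit value of (left, self, right). Edges wrap.
    public static int[] generate(int[] cells, int[] ruleset) {
        int[] next = new int[cells.length];
        for (int i = 0; i < cells.length; i++) {
            int left  = cells[(i + cells.length - 1) % cells.length];
            int me    = cells[i];
            int right = cells[(i + 1) % cells.length];
            next[i] = ruleset[left * 4 + me * 2 + right];  // 3-bit index
        }
        return next;
    }
}
```

With the ruleset {0,1,0,1,1,0,1,0} (Wolfram’s rule 90), the single-1 starting row [0,0,1,0,0] becomes [0,1,0,1,0].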
@@ -623,7 +623,7 @@7.6 The Game of Life
Let’s look at how the Game of Life works. It won’t take up too much time or space, since we’ve covered the basics of CA already.
First, instead of a line of cells, we now have a two-dimensional matrix of cells. As with the elementary CA, the possible states are 0 or 1. Only in this case, since we’re talking about “life,” 0 means dead and 1 means alive.
@@ -667,7 +667,7 @@7.6 The Game of Life
Let’s look at a few examples.
@@ -677,19 +677,19 @@7.6 The Game of Life
One of the exciting aspects of the Game of Life is that there are initial patterns that yield intriguing results. For example, some remain static and never change.
There are patterns that oscillate back and forth between two states.
And there are also patterns that from generation to generation move about the grid. (It’s important to note that the cells themselves aren’t actually moving, although we see the appearance of motion in the result as the cells turn on and off.)
If you are interested in these patterns, there are several good “out of the box” Game of Life demonstrations online that allow you to configure the CA’s initial state and watch it run at varying speeds. Two examples you might want to examine are:
@@ -743,7 +743,7 @@7.7 Programming the Game of Life
}

OK. Before we can sort out how to actually calculate the new state, we need to know how we can reference each cell’s neighbor. In the case of the 1D CA, this was simple: if a cell index was i, its neighbors were i-1 and i+1. Here each cell doesn’t have a single index, but rather a column and row index: x,y. As shown in Figure 7.27, we can see that its neighbors are: (x-1,y-1), (x,y-1), (x+1,y-1), (x-1,y), (x+1,y), (x-1,y+1), (x,y+1), and (x+1,y+1).
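A sketch of the neighbor count that the new-state rule needs. Wrapping at the edges is an assumption here (clamping or skipping the borders are equally common choices), and the class name is made up:

```java
public class Life {
    // Count live neighbors of cell (x, y) in a board of 0s and 1s,
    // wrapping around at the edges.
    public static int neighbors(int[][] board, int x, int y) {
        int cols = board.length, rows = board[0].length;
        int sum = 0;
        for (int i = -1; i <= 1; i++) {
            for (int j = -1; j <= 1; j++) {
                if (i == 0 && j == 0) continue;  // skip the cell itself
                sum += board[(x + i + cols) % cols][(y + j + rows) % rows];
            }
        }
        return sum;
    }
}
```

The Game of Life rules then branch on this count: a live cell with fewer than two or more than three live neighbors dies, and a dead cell with exactly three comes alive.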
diff --git a/chapters/08_fractals.html b/chapters/08_fractals.html index 96607e65..c5bd7d26 100644 --- a/chapters/08_fractals.html +++ b/chapters/08_fractals.html @@ -17,7 +17,7 @@Chapter 8. Fractals
Once upon a time, I took a course in high school called “Geometry.” Perhaps you did too. You learned about shapes in one dimension, two dimensions, and maybe even three. What is the circumference of a circle? The area of a rectangle? The distance between a point and a line? Come to think of it, we’ve been studying geometry all along in this book, using vectors to describe the motion of bodies in Cartesian space. This sort of geometry is generally referred to as Euclidean geometry, after the Greek mathematician Euclid.
For us nature coders, we have to ask the question: Can we describe our world with Euclidean geometry? The LCD screen I’m staring at right now sure looks like a rectangle. And the plum I ate this morning is circular. But what if I were to look further, and consider the trees that line the street, the leaves that hang off those trees, the lightning from last night’s thunderstorm, the cauliflower I ate for dinner, the blood vessels in my body, and the mountains and coastlines that cover land beyond New York City? Most of the stuff you find in nature cannot be described by the idealized geometrical forms of Euclidean geometry. So if we want to start building computational designs with patterns beyond the simple shapes ellipse(), rect(), and line(), it’s time for us to learn about the concepts behind and techniques for simulating the geometry of nature: fractals.
@@ -36,13 +36,13 @@8.1 What Is a Fractal?
Let’s illustrate this definition with two simple examples. First, let’s think about a tree branching structure (for which we’ll write the code later):
Notice how the tree in Figure 8.3 has a single root with two branches connected at its end. Each one of those branches has two branches at its end and those branches have two branches and so on and so forth. What if we were to pluck one branch from the tree and examine it on its own?
@@ -66,7 +66,7 @@8.1 What Is a Fractal?
In these graphs, the x-axis is time and the y-axis is the stock’s value. It’s not an accident that I omitted the labels, however. Graphs of stock market data are examples of fractals because they look the same at any scale. Are these graphs of the stock over one year? One day? One hour? There’s no way for you to know without a label. (Incidentally, graph A shows six months’ worth of data and graph B zooms into a tiny part of graph A, showing six hours.)
This is an example of a stochastic fractal, meaning that it is built out of probabilities and randomness. Unlike the deterministic tree-branching structure, it is statistically self-similar. As we go through the examples in this chapter, we will look at both deterministic and stochastic techniques for generating fractal patterns.
@@ -165,7 +165,7 @@8.2 Recursion
It may look crazy, but it works. Here are the steps that happen when factorial(4) is called.
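Those steps come from a function defined in terms of itself. A minimal Java version, with the base case that stops the recursion:

```java
public class Recursion {
    // n! = n * (n-1)! with 1! = 1 as the base case.
    public static long factorial(int n) {
        if (n <= 1) return 1;         // base case stops the recursion
        return n * factorial(n - 1);  // recursive call on a smaller problem
    }
}
```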
We can apply the same principle to graphics with interesting results, as we will see in many examples throughout this chapter. Take a look at this recursive function.
@@ -270,11 +270,11 @@8.3 The Cantor Set with a Recursive Function
we’d get the following:
Now, the Cantor rule tells us to erase the middle third of that line, which leaves us with two lines, one from the beginning of the line to the one-third mark, and one from the two-thirds mark to the end of the line.
@@ -293,7 +293,7 @@8.3 The Cantor Set with a Recursive Function
}

While this is a fine start, such a manual approach of calling line() for each line is not what we want. It will get unwieldy very quickly, as we’d need four, then eight, then sixteen calls to line(). Yes, a for loop is our usual way around such a problem, but give that a try and you’ll see that working out the math for each iteration quickly proves inordinately complicated. Here is where recursion comes to the rescue.
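To see the recursive shape without any drawing code, here is a sketch that records the Cantor segments into a list instead of calling line(). The minimum-length cutoff of 1 stands in for the point where a sketch would stop drawing, and the names are hypothetical:

```java
import java.util.List;

public class Cantor {
    // Record the segment at this level, then recurse on its left and
    // right thirds, erasing the middle third by simply never visiting it.
    public static void cantor(float x, float len, List<float[]> out) {
        if (len < 1) return;                    // base case: too small to draw
        out.add(new float[]{x, x + len});       // the segment at this level
        cantor(x, len / 3, out);                // left third
        cantor(x + 2 * len / 3, len / 3, out);  // right third
    }
}
```

Starting with a segment of length 9 yields seven segments in total: one at the top level, two thirds below it, and four below those.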
@@ -365,13 +365,13 @@8.4 The Koch Curve and the ArrayList Technique
To demonstrate this technique, let’s look at another famous fractal pattern, discovered in 1904 by Swedish mathematician Helge von Koch. Here are the rules. (Note that it starts the same way as the Cantor set, with a single line.)
The result looks like:
@@ -460,7 +460,7 @@The “Monster” Curve
Remember the Game of Life cellular automata? In that simulation, we always kept track of two generations: current and next. When we were finished computing the next generation, next became current and we moved on to computing the new next generation. We are going to apply a similar technique here. We have an ArrayList that keeps track of the current set of KochLine objects (at the start of the program, there is only one). We will need a second ArrayList (let’s call it “next”) where we will place all the new KochLine objects that are generated from applying the Koch rules. For every KochLine object in the current ArrayList, four new KochLine objects are added to the next ArrayList. When we’re done, the next ArrayList becomes the current one.
Here’s how the code will look:
@@ -489,7 +489,7 @@The “Monster” Curve
By calling generate() over and over again (for example, each time the mouse is pressed), we recursively apply the Koch curve rules to the existing set of KochLine objects. Of course, the above omits the real “work” here, which is figuring out those rules. How do we break one line segment into four as described by the rules? While this can be accomplished with some simple arithmetic and trigonometry, since our KochLine object uses PVector, this is a nice opportunity for us to practice our vector math. Let’s establish how many points we need to compute for each KochLine object.
As you can see from the above figure, we need five points (a, b, c, d, and e) to generate the new KochLine objects and make the new line segments (ab, bc, cd, and de).
@@ -545,7 +545,7 @@The “Monster” Curve
Now let’s move on to points B and D. B is one-third of the way along the line segment and D is two-thirds. Here we can make a PVector that points from start to end and shrink it to one-third the length for B and two-thirds the length for D to find these points.
@@ -572,7 +572,7 @@The “Monster” Curve
The last point, C, is the most difficult one to find. However, if you recall that the angles of an equilateral triangle are all sixty degrees, this makes it a little bit easier. If we know how to find point B with a PVector one-third the length of the line, what if we were to rotate that same PVector sixty degrees and move along that vector from point B? We’d be at point C!
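Putting the B, D, and C calculations together, with a bare-bones {x, y} array standing in for PVector (a sketch, not the book’s actual code): the -60 degree rotation assumes screen coordinates with y pointing down, so the triangle’s peak lands above the line.

```java
public class Koch {
    // Given the endpoints of a segment, compute the five Koch points.
    public static float[][] kochPoints(float[] start, float[] end) {
        float dx = (end[0] - start[0]) / 3, dy = (end[1] - start[1]) / 3;
        float[] a = start.clone();
        float[] b = {start[0] + dx, start[1] + dy};          // one-third mark
        float[] d = {start[0] + 2 * dx, start[1] + 2 * dy};  // two-thirds mark
        // c: rotate the one-third vector by -60 degrees and walk from b
        double theta = Math.toRadians(-60);
        float cx = (float) (dx * Math.cos(theta) - dy * Math.sin(theta));
        float cy = (float) (dx * Math.sin(theta) + dy * Math.cos(theta));
        float[] c = {b[0] + cx, b[1] + cy};
        float[] e = end.clone();
        return new float[][]{a, b, c, d, e};
    }
}
```

For a horizontal segment from (0,0) to (300,0), b lands at (100,0), d at (200,0), and c at roughly (150,-86.6), the apex of the equilateral triangle.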
@@ -663,7 +663,7 @@8.5 Trees
The fractals we have examined in this chapter so far are deterministic, meaning they have no randomness and will always produce the identical outcome each time they are run. They are excellent demonstrations of classic fractals and the programming techniques behind drawing them, but are too precise to feel natural. In this next part of the chapter, I want to examine some techniques behind generating a stochastic (or non-deterministic) fractal. The example we’ll use is a branching tree. Let’s first walk through the steps to create a deterministic version. Here are our production rules:
Again, we have a nice fractal with a recursive definition: A branch is a line with two branches connected to it.
@@ -682,7 +682,7 @@8.5 Trees
translate(width/2,height);

…followed by drawing a line upwards (Figure 8.20):
@@ -693,7 +693,7 @@8.5 Trees
Once we’ve finished the root, we just need to translate to the end and rotate in order to draw the next branch. (Eventually, we’re going to need to package up what we’re doing right now into a recursive function, but let’s sort out the steps first.)
Remember, when we rotate in Processing, we are always rotating around the point of origin, so here the point of origin must always be translated to the end of our current branch.
@@ -706,11 +706,11 @@8.5 Trees
Now that we have a branch going to the right, we need one going to the left. We can use pushMatrix() to save the transformation state before we rotate, letting us call popMatrix() to restore that state and draw the branch to the left. Let’s look at all the code together.
@@ -1086,7 +1086,7 @@8.6 L-systems
Look familiar? This is the Cantor set generated with an L-system.
The following alphabet is often used with L-systems: “FG+-[]”, meaning:
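In that commonly used alphabet, F typically means draw forward, G move forward without drawing, + and - turn right and left, and [ and ] save and restore the turtle’s state. The rewriting itself is independent of those meanings; here is a minimal sketch of one generation of string replacement (hypothetical names, using the Cantor rules F → FGF and G → GGG as the example):

```java
import java.util.Map;

public class LSystem {
    // One generation: replace every character that has a rule; characters
    // without a rule (like + - [ ]) are copied through unchanged.
    public static String generate(String current, Map<Character, String> rules) {
        StringBuilder next = new StringBuilder();
        for (char c : current.toCharArray()) {
            next.append(rules.getOrDefault(c, String.valueOf(c)));
        }
        return next.toString();
    }
}
```

Starting from the axiom "F", one generation gives "FGF" and a second gives "FGFGGGFGF": draw, skip, draw.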
diff --git a/chapters/09_ga.html b/chapters/09_ga.html index 1283dc5e..ff9c132d 100644 --- a/chapters/09_ga.html +++ b/chapters/09_ga.html @@ -59,7 +59,7 @@9.2 Why Use Genetic Algorithms?
To help illustrate the traditional genetic algorithm, we are going to start with monkeys. No, not our evolutionary ancestors. We’re going to start with some fictional monkeys that bang away on keyboards with the goal of typing out the complete works of Shakespeare.
The “infinite monkey theorem” is stated as follows: A monkey hitting keys randomly on a typewriter will eventually type the complete works of Shakespeare (given an infinite amount of time). The problem with this theorem is that the probability of said monkey actually typing Shakespeare is so low that even if that monkey started at the Big Bang, it’s unbelievably unlikely we’d even have Hamlet at this point.
@@ -410,7 +410,7 @@9.5 The Genetic Algorithm, Part II: Selection
Now it’s time for the wheel of fortune.
Spin the wheel and you’ll notice that Element B has the highest chance of being selected, followed by A, then E, then D, and finally C. This probability-based selection according to fitness is an excellent approach. One, it guarantees that the highest-scoring elements will be most likely to reproduce. Two, it does not entirely eliminate any variation from the population. Unlike with the elitist method, even the lowest-scoring element (in this case C) has a chance to pass its information down to the next generation. It’s quite possible (and often the case) that even low-scoring elements have a tiny nugget of genetic code that is truly useful and should not entirely be eliminated from the population. For example, in the case of evolving “to be or not to be”, we might have the following elements.
@@ -441,7 +441,7 @@9.6 The Genetic Algorithm, Part III: Reproduction
It’s now up to us to make a child phrase from these two. Perhaps the most obvious way (let’s call this the 50/50 method) would be to take the first two characters from A and the second two from B, leaving us with:
A variation of this technique is to pick a random midpoint. In other words, we don’t have to pick exactly half of the code from each parent. We could sometimes end up with FLAY, and sometimes with FORY. This is preferable to the 50/50 approach, since we increase the variety of possibilities for the next generation.
@@ -465,7 +465,7 @@9.6 The Genetic Algorithm, Part III: Reproduction
Once the child DNA has been created via crossover, we apply one final process before adding the child to the next generation—mutation. Mutation is an optional step, as there are some cases in which it is unnecessary. However, it exists because of the Darwinian principle of variation. We created an initial population randomly, making sure that we start with a variety of elements. However, there can only be so much variety when seeding the first generation, and mutation allows us to introduce additional variety throughout the evolutionary process itself.
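A sketch of midpoint crossover plus mutation, operating on char arrays; the class name, the lowercase-letter gene pool, and the Random usage are all illustrative assumptions:

```java
import java.util.Random;

public class Reproduction {
    static Random rng = new Random();

    // Crossover: child takes parent A up to a random midpoint, parent B after.
    public static char[] crossover(char[] a, char[] b) {
        char[] child = new char[a.length];
        int mid = rng.nextInt(a.length);
        for (int i = 0; i < a.length; i++) {
            child[i] = (i < mid) ? a[i] : b[i];
        }
        return child;
    }

    // Mutation: with a small probability, replace a gene with a random letter.
    public static void mutate(char[] genes, float rate) {
        for (int i = 0; i < genes.length; i++) {
            if (rng.nextFloat() < rate) {
                genes[i] = (char) ('a' + rng.nextInt(26));
            }
        }
    }
}
```

A mutation rate of around 1% is a typical starting point; at 0% the population can only recombine what it started with.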
@@ -625,7 +625,7 @@Step 2: Selection
It might be fun to do something ridiculous and actually program a simulation of a spinning wheel as depicted above. But this is quite unnecessary.
Instead we can pick from the five options (ABCDE) according to their probabilities by filling an ArrayList with multiple instances of each parent. In other words, let’s say you had a bucket of wooden letters—30 As, 40 Bs, 5 Cs, 15 Ds, and 10 Es.
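The bucket-of-letters idea can be sketched directly. Counts play the role of fitness scores, and picking uniformly from the filled list reproduces the wheel’s probabilities (names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.Random;

public class MatingPool {
    // Fill a pool with each parent repeated in proportion to its fitness.
    public static ArrayList<Character> pool(char[] parents, int[] counts) {
        ArrayList<Character> p = new ArrayList<>();
        for (int i = 0; i < parents.length; i++) {
            for (int n = 0; n < counts[i]; n++) p.add(parents[i]);
        }
        return p;
    }

    // A uniform pick from the pool is a fitness-weighted selection.
    public static char pick(ArrayList<Character> p, Random rng) {
        return p.get(rng.nextInt(p.size()));
    }
}
```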
@@ -1130,13 +1130,13 @@Key #2: The fitness function
To put it another way, let’s graph the fitness function.
This is a linear graph; as the number of characters goes up, so does the fitness score. However, what if the fitness increased exponentially as the number of correct characters increased? Our graph could then look something like:
The more correct characters, the even greater the fitness. We can achieve this type of result in a number of different ways. For example, we could say:
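One such rule, as a sketch: double the fitness for every additional correct character. The base 2 is an arbitrary illustrative choice; any base greater than 1 gives the same exponential shape.

```java
public class Fitness {
    // Each additional correct character doubles the score, so later gains
    // are rewarded far more than a linear count would reward them.
    public static double exponential(int correctChars) {
        return Math.pow(2, correctChars);
    }
}
```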
@@ -1367,11 +1367,11 @@9.10 Evolving Forces: Smart Rockets
A population of rockets launches from the bottom of the screen with the goal of hitting a target at the top of the screen (with obstacles blocking a straight line path).
Each rocket is equipped with five thrusters of variable strength and direction. The thrusters don’t fire all at once and continuously; rather, they fire one at a time in a custom sequence.
@@ -1574,13 +1574,13 @@9.10 Evolving Forces: Smart Rockets
PVector v = new PVector(random(-1,1),random(-1,1));

This is perfectly fine and will likely do the trick. However, if we were to draw every single possible vector we might pick, the result would fill a square (see Figure 9.12). In this case, it probably doesn’t matter, but there is a slight bias to diagonals here given that a PVector from the center of a square to a corner is longer than a purely vertical or horizontal one.
What would be better here is to pick a random angle and make a PVector of length one from that angle, giving us a circle (see Figure 9.13). This could be easily done with a quick polar to Cartesian conversion, but a quicker path to the result is just to use PVector's random2D().
@@ -1933,7 +1933,7 @@9.12 Interactive Selection
Think of all the rating systems you’ve ever used. Could you evolve the perfect movie by scoring all films according to your Netflix ratings? The perfect singer according to American Idol voting?
To illustrate this technique, we’re going to build a population of simple faces. Each face will have a set of properties: head size, head color, eye location, eye size, mouth color, mouth location, mouth width, and mouth height.
@@ -2188,7 +2188,7 @@Genotype and Phenotype
The ability for a bloop to find food is tied to two variables—size and speed. Bigger bloops will find food more easily simply because their size will allow them to intersect with food locations more often. And faster bloops will find more food because they can cover more ground in a shorter period of time.
Since size and speed are inversely related (large bloops are slow, small bloops are fast), we only need a genotype with a single number.
diff --git a/chapters/10_nn.html b/chapters/10_nn.html index fa51752a..8e5c929c 100644 --- a/chapters/10_nn.html +++ b/chapters/10_nn.html @@ -14,7 +14,7 @@Chapter 10. Neural Networks
The human brain can be described as a biological neural network—an interconnected web of neurons transmitting elaborate patterns of electrical signals. Dendrites receive input signals and, based on those inputs, fire an output signal via an axon. Or something like that. How the human brain actually works is an elaborate and complex mystery, one that we certainly are not going to attempt to tackle in rigorous detail in this chapter.
The good news is that developing engaging animated systems with code does not require scientific rigor or accuracy, as we’ve learned throughout this book. We can simply be inspired by the idea of brain function.
@@ -39,7 +39,7 @@10.1 Artificial Neural Networks: Introduction and Application
The most common application of neural networks in computing today is to perform one of these “easy-for-a-human, difficult-for-a-machine” tasks, often referred to as pattern recognition. Applications range from optical character recognition (turning printed or handwritten scans into digital text) to facial recognition. We don’t have the time or need to use some of these more elaborate artificial intelligence algorithms here, but if you are interested in researching neural networks, I’d recommend the books Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig and AI for Game Developers by David M. Bourg and Glenn Seemann.
@@ -217,7 +217,7 @@10.3 Simple Pattern Recognition Using a Perceptron
Now that we understand the computational process of a perceptron, we can look at an example of one in action. We stated that neural networks are often used for pattern recognition applications, such as facial recognition. Even simple perceptrons can demonstrate the basics of classification, as in the following example.
Consider a line in two-dimensional space. Points in that space can be classified as living on either one side of the line or the other. While this is a somewhat silly example (since there is clearly no need for a neural network; we can determine on which side a point lies with some simple algebra), it shows how a perceptron can be trained to recognize points on one side versus another.
@@ -227,7 +227,7 @@10.3 Simple Pattern Recognition Using a Perceptron
The perceptron itself can be diagrammed as follows:
We can see how there are two inputs (x and y), a weight for each input (weightx and weighty), as well as a processing neuron that generates the output.
@@ -237,7 +237,7 @@10.3 Simple Pattern Recognition Using a Perceptron
To avoid this dilemma, our perceptron will require a third input, typically referred to as a bias input. A bias input always has the value of 1 and is also weighted. Here is our perceptron with the addition of the bias:
Let’s go back to the point (0,0). Here are our inputs:
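The whole computation for such an input set can be sketched as a weighted sum followed by a sign activation. Here the bias is simply treated as a third input fixed at 1, and the specific weights in the usage below are made-up examples:

```java
public class Perceptron {
    // Weighted sum of inputs (the last input is the constant bias of 1),
    // squashed to +1 or -1 by a sign activation.
    public static int feedforward(float[] inputs, float[] weights) {
        float sum = 0;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        return sum > 0 ? 1 : -1;  // activate
    }
}
```

With the point (0,0), the first two products are zero, so the sign of the bias weight alone decides the output, which is exactly why the bias rescues us from the dilemma above.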
@@ -286,7 +286,7 @@10.4 Coding the Perceptron
Presumably, we could now create a Perceptron object and ask it to make a guess for any given point.
@@ -546,7 +546,7 @@10.4 Coding the Perceptron
If the y value we are examining is above the line, it will be less than yline. (Remember, in Processing’s pixel coordinates the y-axis points down, so “above” means a smaller y value.)
@@ -667,7 +667,7 @@10.5 A Steering Perceptron
Here’s our scenario. Let’s say we have a Processing sketch with an ArrayList of targets and a single vehicle.
Let’s say that the vehicle seeks all of the targets. According to the principles of Chapter 6, we would next write a function that calculates a steering force towards each target, applying each force one at a time to the object’s acceleration. Assuming the targets are an ArrayList of PVector objects, it would look something like:
@@ -783,7 +783,7 @@10.5 A Steering Perceptron
brain.train(forces,error);

Here we are passing the brain a copy of all the inputs (which it will need for error correction) as well as an observation about its environment: a PVector that points from its current location to where it desires to be. This PVector essentially serves as the error—the longer the PVector, the worse the vehicle is performing; the shorter, the better.
@@ -937,7 +937,7 @@10.6 It’s a “Network,” Remember?
Yes, a perceptron can have multiple inputs, but it is still a lonely neuron. The power of neural networks comes in the networking itself. Perceptrons are, sadly, incredibly limited in their abilities. If you read an AI textbook, it will say that a perceptron can only solve linearly separable problems. What’s a linearly separable problem? Let’s take a look at our first example, which determined whether points were on one side of a line or the other.
On the left of Figure 10.11, we have classic linearly separable data. Graph all of the possibilities; if you can classify the data with a straight line, then it is linearly separable. On the right, however, is non-linearly separable data. You can’t draw a straight line to separate the black dots from the gray ones.
@@ -947,7 +947,7 @@10.6 It’s a “Network,” Remember?
One of the simplest examples of a non-linearly separable problem is XOR, or “exclusive or.” We’re all familiar with AND. For A AND B to be true, both A and B must be true. With OR, either A or B can be true for A OR B to evaluate as true. These are both linearly separable problems. Let’s look at the solution space, a “truth table.”
See how you can draw a line to separate the true outputs from the false ones?
@@ -955,7 +955,7 @@10.6 It’s a “Network,” Remember?
XOR is the equivalent of OR and NOT AND. In other words, A XOR B only evaluates to true if exactly one of them is true. If both are false or both are true, then we get false. Take a look at the following truth table.
This is not linearly separable. Try to draw a straight line to separate the true outputs from the false ones—you can’t!
@@ -965,7 +965,7 @@10.6 It’s a “Network,” Remember?
So perceptrons can’t even solve something as simple as XOR. But what if we made a network out of two perceptrons? If one perceptron can solve OR and one perceptron can solve NOT AND, then two perceptrons combined can solve XOR.
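Here is a sketch of that idea with hand-picked (not trained) weights: one perceptron-style unit computes OR, another NOT AND, and a third combines them with AND. The thresholds are illustrative values, not anything canonical:

```java
public class XorNetwork {
    // Step activation: fire (1) when the weighted sum crosses zero.
    static int activate(double sum) { return sum > 0 ? 1 : 0; }

    // Each gate is a single unit: weighted inputs plus a bias term.
    static int or(int a, int b)   { return activate(a + b - 0.5); }
    static int nand(int a, int b) { return activate(-a - b + 1.5); }
    static int and(int a, int b)  { return activate(a + b - 1.5); }

    // The two-layer combination: XOR = (A OR B) AND (A NAND B).
    public static int xor(int a, int b) {
        return and(or(a, b), nand(a, b));
    }
}
```

No single unit of this form could compute xor() on its own, but the two-layer composition handles it.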
The above diagram is known as a multi-layered perceptron, a network of many neurons. Some are input neurons and receive the inputs, some are part of what’s called a “hidden” layer (as they are connected to neither the inputs nor the outputs of the network directly), and then there are the output neurons, from which we read the results.
@@ -989,7 +989,7 @@10.7 Neural Network Diagrams
Our goal will be to create the following simple network diagram:
The primary building block for this diagram is a neuron. For the purpose of this example, the Neuron class describes an entity with an (x,y) location.
@@ -1304,7 +1304,7 @@10.8 Animating Feed Forward
This resembles the following:
OK, so that’s how we might move something along the connection. But how do we know when to do so? We start this process the moment the Connection object receives the “feedforward” signal. We can keep track of this process by employing a simple boolean to know whether the connection is sending or not. Before, we had:
diff --git a/regex-notes.txt b/regex-notes.txt index 93fdb893..78c5a8ed 100644 --- a/regex-notes.txt +++ b/regex-notes.txt @@ -20,5 +20,5 @@ figcaption fixes $1 -Figure (\d+.\d+) +Figure (\d+\.\d+) Figure $1 \ No newline at end of file