- Simple Perceptron
- Fully Connected Neural Network (one hidden layer) learning MNIST digits
- Animated Network Visualization
Additional Processing examples
- Nature of Code Chapter 10 Processing examples
- Charles Fried's Neural Network in Processing
- Another Processing Example
- Make Your Own Neural Network from Tariq Rashid
- Abishek's TensorFlow Example
- How to freeze a model and serve it with a Python API
This short history draws on Andrey Kurenkov's excellent A 'Brief' History of Neural Nets and Deep Learning
- In 1943, Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, developed the first conceptual model of an artificial neural network. In their paper, "A logical calculus of the ideas immanent in nervous activity," they describe the concept of a neuron, a single cell living in a network of cells that receives inputs, processes those inputs, and generates an output.
- Hebb's Rule from The Organization of Behavior: A Neuropsychological Theory: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
- Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory (original paper), a perceptron is the simplest neural network possible: a computational model of a single neuron. A perceptron consists of one or more inputs, a processor, and a single output.
- In 1969, in their book Perceptrons, Marvin Minsky and Seymour Papert demonstrate that perceptrons can solve only "linearly separable" problems. AI Winter #1!
- Paul Werbos's 1974 thesis Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences proposes "backpropagation" as a solution to adjusting weights in the hidden layers of a neural network. The technique was popularized in the 1986 paper Learning representations by back-propagating errors by David Rumelhart, Geoffrey Hinton, and Ronald Williams.
- Neural Networks come back with Yann LeCun's paper Backpropagation Applied to Handwritten Zip Code Recognition. Here's a 1993 video on convolutional neural networks. But AI Winter returns with the "vanishing gradient problem."
- "Deep Learning" thaws the winter with new methodologies for training: A fast learning algorithm for deep belief nets by Hinton, Osindero, and Teh, and raw power with GPUs: Large-scale Deep Unsupervised Learning using Graphics Processors.
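Rosenblatt's perceptron, described above, fits in a few lines of code. Here is a minimal JavaScript sketch; the class shape, learning rate, and the logical OR training task are choices of this example, not taken from any of the sources above:

```javascript
// Minimal perceptron: inputs -> weighted sum -> step activation -> output.
class Perceptron {
  constructor(n, learningRate = 0.1) {
    // Random initial weights, plus one extra weight for a bias input of 1.
    this.weights = Array.from({ length: n + 1 }, () => Math.random() * 2 - 1);
    this.lr = learningRate;
  }
  // Step activation: fire (1) if the weighted sum is positive, else 0.
  guess(inputs) {
    const withBias = [...inputs, 1];
    const sum = withBias.reduce((acc, x, i) => acc + x * this.weights[i], 0);
    return sum > 0 ? 1 : 0;
  }
  // Rosenblatt's learning rule: nudge each weight by error * input.
  train(inputs, target) {
    const error = target - this.guess(inputs);
    [...inputs, 1].forEach((x, i) => { this.weights[i] += this.lr * error * x; });
  }
}

// OR is linearly separable, so a single perceptron can learn it.
const p = new Perceptron(2);
const data = [[[0, 0], 0], [[0, 1], 1], [[1, 0], 1], [[1, 1], 1]];
for (let epoch = 0; epoch < 100; epoch++) {
  for (const [inputs, target] of data) p.train(inputs, target);
}
console.log(data.map(([inputs]) => p.guess(inputs))); // [0, 1, 1, 1] once training converges
```

Minsky and Papert's critique is visible here too: swap the OR table for XOR and the loop never converges, because no single line separates XOR's classes.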
- Neural Networks (Nature of Code Chapter 10)
- A Quick Introduction to Neural Networks by Ujjwal Karn
- Let’s code a Neural Network from scratch by Charles Fried
- Rolf van Gelder's Neural Network in Processing
- Linear Algebra Cheatsheet by Brendan Fortuner
- A Step by Step Backpropagation Example by Matt Mazur
- A 'Brief' History of Neural Nets and Deep Learning by Andrey Kurenkov
- Make Your Own Neural Network by Tariq Rashid
- Chapter 22 of The Computational Beauty of Nature by Gary Flake
Linear Algebra Review
- Vectors vs. Matrices
- "Elementwise" operations
- Matrix multiplication
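As a quick illustration of the three ideas above, here is a plain-JavaScript sketch (no library; the helper names are invented for this example, and a library like math.js provides the same operations):

```javascript
// A vector is a 1-D list of numbers; a matrix is a 2-D grid (array of rows).
const v = [1, 2, 3];
const A = [[1, 2], [3, 4]];   // 2x2 matrix
const B = [[5, 6], [7, 8]];

// "Elementwise" operations pair up entries at the same position,
// so both matrices must have identical dimensions.
const elementwiseAdd = (M, N) =>
  M.map((row, i) => row.map((x, j) => x + N[i][j]));

// Matrix multiplication is NOT elementwise: entry (i, j) of the result
// is the dot product of row i of M with column j of N, which requires
// the column count of M to equal the row count of N.
const matmul = (M, N) =>
  M.map((row) =>
    N[0].map((_, j) =>
      row.reduce((sum, x, k) => sum + x * N[k][j], 0)));

console.log(elementwiseAdd(A, B)); // [[6, 8], [10, 12]]
console.log(matmul(A, B));         // [[19, 22], [43, 50]]
```

In a neural network, the feed-forward step is exactly a matrix multiplication of weights by inputs, which is why this review comes first.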
- inputs and outputs
- complex adaptive system
- activation function
- multi-layered perceptron
- input layer, hidden layer, output layer
- gradient descent
- deep learning
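Two of these terms, the activation function and gradient descent, can be seen together in a tiny sketch. This is a deliberately toy setup (one weight, one training pair, an arbitrary learning rate), not how a full multi-layered network is trained:

```javascript
// The sigmoid activation function squashes any input into the range (0, 1).
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

// Gradient descent on a single weight: minimize the squared error
// for one training pair by repeatedly stepping opposite the gradient.
//   E = (target - sigmoid(w * input))^2
//   dE/dw = -2 * (target - out) * out * (1 - out) * input
let w = 0.5;                   // initial weight (arbitrary)
const lr = 0.5;                // learning rate (illustrative)
const input = 1.0, target = 1.0;

for (let step = 0; step < 1000; step++) {
  const out = sigmoid(w * input);
  const gradient = -2 * (target - out) * out * (1 - out) * input;
  w -= lr * gradient;          // step downhill on the error surface
}
console.log(sigmoid(w * input)); // approaches the target of 1.0
```

Backpropagation extends this same idea through the hidden layers by applying the chain rule, and the "vanishing gradient problem" mentioned in the history above comes from those `out * (1 - out)` factors shrinking toward zero as they multiply through many layers.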
- Redo the three-layer network example using an existing matrix library like math.js.
- Instead of using supervised learning for any of the above examples, can you train a neural network to find the right weights by using a genetic algorithm?
- Visualize a neural network itself. You could start with just the simple perceptron or just go for drawing all the layers of the MNIST training example. How can you show the flow of information using color, geometry, etc.?
- Add a feature that allows the MNIST example to save and reload a model.
- Add a feature that allows users to add digits to the training or test set.
- Try the three-layer network with your own data.