This project demonstrates how to build basic feedforward neural networks from scratch using only NumPy; no machine learning libraries like TensorFlow or PyTorch are involved.
It includes two parts:
- A 2-layer neural network (input → hidden → output)
- A 1-layer neural network (input → output)
These examples are perfect for educational purposes to understand how neural networks work under the hood.
Only one dependency is required: `numpy`. Install it with:

```bash
pip install numpy
```

The dataset used for the 2-layer network:

```python
import numpy as np

X = np.array([[0, 0, 1],
              [1, 1, 1],
              [1, 0, 1],
              [0, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T
```

- Input: 4 samples, each with 3 binary features
- Output: XOR-like pattern
The network architecture:
- Input layer: 3 neurons
- Hidden layer: 4 neurons
- Output layer: 1 neuron
- Activation Function: Sigmoid
The training loop runs for 6,000 iterations of basic forward and backward propagation.
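The project's full script is not reproduced here, but the 2-layer network described above could be sketched as follows. The weight names (`W0`, `W1`) and the exact update rule are assumptions in the style of Trask's tutorial, not necessarily the project's actual code:

```python
import numpy as np

def sigmoid(x):
    # Squash values into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(s):
    # Slope of the sigmoid, given s = sigmoid(x)
    return s * (1.0 - s)

X = np.array([[0, 0, 1],
              [1, 1, 1],
              [1, 0, 1],
              [0, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T  # XOR of the first two features

np.random.seed(1)
# Manual weight initialization in [-1, 1): 3 inputs -> 4 hidden -> 1 output
W0 = 2 * np.random.random((3, 4)) - 1
W1 = 2 * np.random.random((4, 1)) - 1

for _ in range(6000):
    # Forward propagation
    hidden = sigmoid(X @ W0)
    output = sigmoid(hidden @ W1)

    # Backpropagation: error deltas scaled by the sigmoid slope
    output_delta = (y - output) * sigmoid_deriv(output)
    hidden_delta = (output_delta @ W1.T) * sigmoid_deriv(hidden)

    # Weight updates from the error deltas
    W1 += hidden.T @ output_delta
    W0 += X.T @ hidden_delta

print(output)
```

The hidden layer is what lets the network learn the XOR-like pattern, which is not linearly separable.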
```python
X = np.array([[0, 0, 1],
              [1, 1, 1],
              [1, 0, 1],
              [0, 1, 1]])
y = np.array([[0, 0, 1, 1]]).T
```

This version simplifies the network to just:
- Input layer: 3 neurons
- Output layer: 1 neuron
- Activation Function: Sigmoid
The training loop runs for 10,000 iterations.
- Forward propagation
- Sigmoid activation function and its derivative
- Backpropagation (gradient descent)
- Weight updates using error deltas
- Manual weight initialization
At the end of training, the model prints the output predictions for the dataset:
```
Output after training:
[[0.00966449]
 [0.00786506]
 [0.99358911]
 [0.99337711]]
```

These values approach the target outputs [0, 0, 1, 1], showing the network has learned the task.
- Save the script as `neural_network.py`
- Run it with:

```bash
python neural_network.py
```

This project is inspired by early neural network tutorials, particularly:
- Andrew Trask's blog post "A Neural Network in 11 Lines of Python"
MIT License. Feel free to modify and use it for your own learning or projects.