Numerous libraries in Python, R, and Go allow for the creation, training, and inference of neural networks, most notably TensorFlow and PyTorch. Unfortunately, much of the appreciation for the numerical ingenuity and statistical intuition behind these models is lost when such tools are deployed off the shelf. In particular, the easy-to-use, high-level interfaces of Keras (for TensorFlow) and PyTorch Lightning drastically reduce the amount of code a user must write, at the expense of the user's understanding of what that code actually does. This tutorial aims to uncover the fundamental principles by which neural networks are implemented, and to restore and highlight the numerical costs of training and inference. To that end, it implements various neural network architectures in several coding languages, following several programming paradigms.
Let's start simple. Although somewhat dated, the multilayer perceptron (MLP) is the most straightforward neural network architecture. Python, together with NumPy, arguably the most prominent numerical linear algebra library, is a good starting point as an implementation language. To avoid the complications of class definitions, the code will be written in a pseudo-functional style.
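To give a first taste of this pseudo-functional style, the forward pass of a small MLP can be sketched in plain NumPy. The layer sizes, the tanh activation, and the function names below are illustrative choices for this sketch, not fixed by the tutorial:

```python
import numpy as np

def init_params(layer_sizes, rng):
    # One (W, b) pair per layer: small random weights, zero biases.
    # Parameters live in a plain list, not a class.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, x):
    # Hidden layers apply an affine map followed by tanh;
    # the final layer is left linear (e.g. for regression).
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

rng = np.random.default_rng(0)
params = init_params([4, 8, 2], rng)   # 4 inputs, 8 hidden units, 2 outputs
y = forward(params, rng.standard_normal((5, 4)))  # batch of 5 inputs
print(y.shape)  # (5, 2)
```

Note that all state (the weights and biases) is passed around explicitly as function arguments, which is exactly what a class-based design would hide behind `self`.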