Neural Networks from Scratch

Numerous libraries in Python, R, and Go support the creation, training, and inference of neural networks, most notably TensorFlow and PyTorch. Unfortunately, much of the appreciation for the numerical ingenuity and statistical intuition behind these models is lost when such tools are deployed off the shelf. In particular, the easy-to-use, high-level interfaces of Keras (for TensorFlow) and PyTorch Lightning drastically reduce the amount of code to be written, at the expense of the user's understanding of what that code actually does. This tutorial aims to uncover the fundamental principles by which neural networks are implemented and to highlight the numerical costs of training and inference. To that end, it implements various neural network architectures in various coding languages, following various programming paradigms.

1. A (Pseudo-)Functional Programming approach in Python for a Multilayer Perceptron

Let's start simple. Although somewhat dated, the multilayer perceptron (MLP) is the most straightforward neural network architecture. Python, together with NumPy, arguably the most prominent numerical linear algebra library, is a good starting point as an implementation language. To avoid the complications of class definitions, the code works in a pseudo-functional fashion: parameters are passed explicitly to pure functions rather than stored as mutable object state.
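As a minimal sketch of that pseudo-functional style (the function names, layer sizes, sigmoid activation, and squared-error loss below are illustrative choices, not taken from the repository), an MLP can be expressed as pure functions that take a parameter list and return new values instead of mutating state:

```python
import numpy as np

def init_params(layer_sizes, seed=0):
    """Return a list of (W, b) pairs, one per layer."""
    rng = np.random.default_rng(seed)
    return [(rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, x):
    """Propagate x through every layer; return all activations."""
    activations = [x]
    for W, b in params:
        activations.append(sigmoid(activations[-1] @ W + b))
    return activations

def backward(params, activations, y):
    """Backpropagate a squared-error loss; return per-layer gradients."""
    grads = []
    # Error signal at the output layer (sigmoid derivative: a * (1 - a)).
    delta = (activations[-1] - y) * activations[-1] * (1 - activations[-1])
    for (W, b), a in zip(reversed(params), reversed(activations[:-1])):
        grads.append((a.T @ delta, delta.sum(axis=0)))
        delta = (delta @ W.T) * a * (1 - a)  # propagate error backwards
    return list(reversed(grads))

def step(params, grads, lr=0.5):
    """Pure gradient-descent update: return new parameters, leave the old untouched."""
    return [(W - lr * dW, b - lr * db)
            for (W, b), (dW, db) in zip(params, grads)]
```

Every function here is referentially transparent: `step` builds a fresh parameter list rather than updating arrays in place, which is the key departure from the object-oriented style of most deep-learning libraries.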
