Personally, I think the best way to learn is to build myself a small prototype based on what I have learned from books and theory. The tinkering and debugging really gave me an intuition that would have been missing had I only done a proof of concept on paper.
Here is the list of Jupyter notebooks I created, with theory explanations and actual code side by side. I hope they will be useful for anyone with a similar intention.
- Linear Regression - the basics of the basics. This notebook illustrates the closed-form solution using matrix operations in Python.
- Logistic Regression - another bread-and-butter algorithm, mostly useful for probability-type predictions (churn rate, yes/no outcomes, certain classifications). Here I used gradient descent as the optimization method.
- Neural Network: Perceptron - the simplest type of neural network.
- Neural Network: Multi-layer - a more elaborate type with a hidden layer in the middle. Illustrates the backpropagation method using the MNIST digit image dataset.
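To give a taste of the linear regression notebook's approach, here is a minimal sketch of the closed-form (normal equation) solution with NumPy. The toy data and coefficients are made up for illustration, not taken from the notebook:

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 2 * X[:, 0] + 1 + rng.normal(0, 0.1, size=50)

# Prepend a bias column of ones, then solve the normal equation
# theta = (X^T X)^{-1} X^T y  (np.linalg.solve avoids an explicit inverse)
Xb = np.hstack([np.ones((X.shape[0], 1)), X])
theta = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
print(theta)  # close to [1, 2]: intercept and slope
```

Solving the linear system directly is both faster and numerically safer than computing the matrix inverse.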
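In the same spirit, logistic regression fitted by gradient descent can be sketched in a few lines. The synthetic labels (1 when x > 0) and hyperparameters here are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary data: label is 1 exactly when the feature is positive
rng = np.random.default_rng(1)
x = rng.normal(0, 1, size=(200, 1))
X = np.hstack([np.ones((200, 1)), x])   # bias column + feature
y = (x[:, 0] > 0).astype(float)

# Batch gradient descent on the mean log-loss
w = np.zeros(2)
lr = 0.5
for _ in range(2000):
    p = sigmoid(X @ w)                  # predicted probabilities
    grad = X.T @ (p - y) / len(y)       # gradient of the mean log-loss
    w -= lr * grad

preds = (sigmoid(X @ w) > 0.5).astype(float)
print((preds == y).mean())              # accuracy on the training set
```

The gradient `X.T @ (p - y) / n` is the same expression the cross-entropy derivation produces, which is why the update rule looks identical to linear regression's despite the nonlinear model.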
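The perceptron's mistake-driven update rule is simple enough to show in full. Here it learns the AND function, a hypothetical toy task chosen because it is linearly separable:

```python
import numpy as np

# Linearly separable toy set: the AND gate
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        # Update only on mistakes: nudge the boundary toward the example
        w += (target - pred) * xi
        b += (target - pred)

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```

Because the data are separable, the perceptron convergence theorem guarantees the loop stops making mistakes after finitely many updates.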
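Finally, the backpropagation mechanics of the multi-layer notebook can be sketched on a problem much smaller than MNIST. This is a tiny sigmoid network on XOR (which a single perceptron cannot solve); the layer sizes, learning rate, and squared-error loss are illustrative choices, not the notebook's exact setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR is not linearly separable, so a hidden layer is required
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule applied layer by layer (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out.ravel()))
```

Depending on the random seed, a sigmoid network on XOR can take more iterations (or a re-initialization) to separate all four points; the point here is the shape of the forward and backward passes, which scales directly to the MNIST version.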