MITx_6.86x - Machine Learning with Python: from Linear Models to Deep Learning
Student's notes (2020 run)
Disclaimer: The following notes are a blend of my own notes, selected transcripts, some useful forum threads and various course material. I do not claim any authorship of these notes, but at the same time any errors may well arise from my own interpretation of the material.
Contributions are very welcome. If you spot an error, want to phrase something better (English is not my first language), add material or just have comments, you can clone the repository, make your edits and open a pull request (preferred), or simply open an issue.
(PDF versions may be slightly outdated)
For an implementation of the algorithms in Julia (a relatively recent language that combines the best features of R, Python and Matlab with the efficiency of compiled languages like C or Fortran), see the companion repository "Beta Machine Learning Toolkit" on GitHub, or run the code online yourself via myBinder (and if you are looking for an introductory book on Julia, have a look at mine). BetaML currently implements:
- Linear, average and kernel Perceptron (units 1 and 2)
- Feed-forward Neural Networks (unit 3)
- Clustering (k-means, k-medoids and the EM algorithm), recommendation system based on EM (unit 4)
- Decision Trees / Random Forests (mentioned in unit 2)