# A Whirlwind Tour of ML
### IAP 2017 course at MIT
This course gives a high-level overview of diverse areas of machine learning. The goal is to introduce students to core concepts and techniques in ML, and provide enough of a primer on different sub-areas of ML so that students can choose the right approach for a given problem and explore interesting topics further.
The course covers an introduction to ML, inference, Bayesian methods, and neural networks. Each class is taught by MIT graduate students or postdocs working in that area.
## Session I: Introduction to ML
This session gives an overview of supervised and unsupervised learning, and an introduction to probabilistic graphical models.
Concepts: Loss functions, Linear regression, Logistic regression, SVMs, Decision trees, Random Forests, Clustering, PCA, Graphical Models, Variable Elimination
Taught by Manasi Vartak.

Resources:
- MIT 6.867 Machine Learning
- Coursera Machine Learning
- MIT 9.520 Statistical Learning Theory
- CMU: Intro to Machine Learning
- Michael Jordan Review of Graphical Models
- Coursera Probabilistic Graphical Models
- Columbia University: Probabilistic Graphical Models
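To make the first concepts concrete, here is a minimal NumPy sketch (not from the course materials) of supervised learning with a squared loss: fitting a linear regression by solving the least-squares problem directly. The data and true parameters below are made up for illustration.

```python
import numpy as np

# Fit y = w*x + b by minimizing the squared loss. With a linear model,
# the minimizer has a closed form, which np.linalg.lstsq computes.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 0.1, size=100)   # noisy line

X = np.column_stack([x, np.ones_like(x)])          # add a bias column
w, b = np.linalg.lstsq(X, y, rcond=None)[0]        # least-squares solve

print(w, b)  # close to the true slope 3 and intercept 2
```

The same loss-minimization view carries over to logistic regression and SVMs; only the loss function and model change.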
## Session II: Inference
This session gives an overview of (approximate) inference for probabilistic graphical models.
Concepts: Gaussian Mixture Models, Variational Inference, Monte Carlo Sampling

Resources:
- Tutorial on VI
- A Review of recent work on VI, Section 5
- Tutorial on Sampling methods
- A review (and really cool demos) of recent work on sampling
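The core idea behind Monte Carlo methods can be shown in a few lines. This sketch (an illustration, not from the course materials) approximates an expectation by averaging over samples; for a standard normal, E[x²] is exactly 1, so we can check the estimate.

```python
import numpy as np

# Monte Carlo estimation: approximate an expectation E[f(x)] under a
# distribution p by averaging f over samples drawn from p.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=200_000)

# E[x^2] under a standard normal is the variance, i.e. exactly 1.
estimate = np.mean(samples ** 2)
print(estimate)  # approaches 1.0 as the sample count grows
```

Approximate inference methods like MCMC build on this idea when exact samples from the posterior are unavailable.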
## Session III: Bayesian Methods
This session gives a whirlwind tour of Bayesian Methods in ML.
Concepts: What does it mean to be Bayesian in ML, Why be Bayesian, Posterior Inference, Parametric vs. Non-Parametric Bayes
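Posterior inference is sometimes exact: with a conjugate prior it reduces to arithmetic. A minimal illustrative sketch (not from the course materials) for a coin's bias, using a Beta prior and Bernoulli observations:

```python
# Conjugate Bayesian update for a coin's bias: with a Beta(a, b) prior
# and a Bernoulli likelihood, the posterior after observing `heads`
# heads and `tails` tails is Beta(a + heads, b + tails).
a, b = 1.0, 1.0            # uniform Beta(1, 1) prior over the bias
heads, tails = 7, 3        # hypothetical observed flips

a_post, b_post = a + heads, b + tails
posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)  # 8/12 = 0.666..., pulled toward 0.5 by the prior
```

Non-conjugate models lose this closed form, which is where the variational and sampling methods from Session II come in.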
## Session IV: Neural Networks
This session gives an overview of neural networks, particularly as applied to computer vision.
Concepts: Neural Nets, Convolutional NNs, AlexNet, GoogLeNet, Transfer learning
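A neural network's forward pass is just alternating linear maps and nonlinearities. This NumPy sketch (illustrative, not from the course materials; weights are hand-set rather than learned) shows a two-layer net computing XOR, a function no single linear layer can represent:

```python
import numpy as np

# Forward pass of a tiny two-layer network: ReLU hidden layer, linear
# output. The hand-set weights below make the net compute XOR,
# illustrating why the nonlinearity matters.
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])   # input -> hidden
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])                # hidden -> output
b2 = 0.0

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)      # ReLU activation
    return W2 @ h + b2

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, forward(np.array(x, dtype=float)))  # XOR: 0, 1, 1, 0
```

Convolutional networks such as AlexNet and GoogLeNet stack many such layers, with convolutions replacing the dense matrices; transfer learning reuses their learned weights on new tasks.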