Probabilistic Perspective Machine Learning (ppml) | License: MIT

This repository contains code to replicate and modify the examples, and to prove the underlying mathematical machine learning concepts, from

  1. probml/pmtk3
  2. Machine Learning: A Probabilistic Perspective.

All of the work is presented in the following format:

  • MATLAB implementation
  • Python implementation with proofs and explanations

This repository is intended for people who want to learn more about probabilistic machine learning through simplified examples. It covers concepts such as Polynomial Regression, LU decomposition, Cholesky decomposition, Bayesian Inference for Polynomial Regression, the Bayesian Number Game, Monte Carlo estimation of Pi, the naive Bayes classifier, the EM algorithm for the Student-t distribution, Principal Component Analysis, Independent Component Analysis, Gibbs Sampling for ranking, Message Passing for ranking, the Mixture of Multinomials model, Latent Dirichlet Allocation, EM for Gaussian Mixture Models, lasso, Bayesian networks, etc.
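
As a flavour of these notebooks, the sketch below estimates Pi by Monte Carlo sampling, the same idea as the Monte Carlo Sampling Pi example; it is a minimal NumPy illustration with my own function names, not the repository's code.

```python
import numpy as np

def estimate_pi(n_samples: int = 100_000, seed: int = 0) -> float:
    """Estimate pi by drawing points uniformly in the unit square and
    measuring the fraction that lands inside the quarter circle."""
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(size=(2, n_samples))
    inside = (x ** 2 + y ** 2) <= 1.0   # points under the quarter-circle arc
    return 4.0 * inside.mean()          # (quarter-circle area / square area) * 4

print(estimate_pi())  # ~3.14, converging as n_samples grows
```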

Other work relevant to Deep Learning includes Local Interpretable Model-Agnostic Explanations (LIME), Variational Autoencoder, Beta-VAE, Neural Statistician, Generative Adversarial Network, Deep Convolutional GAN, Least-Squares GAN, Conditional GAN (self-made + Embedding + Dropout), InfoGAN, Auxiliary Classifier GAN (one-hot encoding), CycleGAN, Adversarial Autoencoder, Pix2Pix, DiscoGAN, Gaussian Dropout, Variational Dropout, Bayes by Backprop, etc.
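
As an example of what the VAE-family code involves, the sketch below shows the reparameterization trick and the Gaussian KL term that appear in the VAE and Beta-VAE objectives; it is a NumPy illustration under assumed names, not the repository's implementation.

```python
import numpy as np

def reparameterize(mu, log_var, rng=np.random.default_rng(0)):
    """Sample z ~ N(mu, diag(sigma^2)) as z = mu + sigma * eps with eps ~ N(0, I),
    so that gradients can flow through mu and log_var."""
    eps = rng.standard_normal(np.shape(mu))
    return np.asarray(mu) + np.exp(0.5 * np.asarray(log_var)) * eps

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), the regularizer in the VAE loss;
    Beta-VAE simply weights this term by a factor beta > 1."""
    mu, log_var = np.asarray(mu), np.asarray(log_var)
    return float(-0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var)))
```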

Other work relevant to Robotics and Computer Vision includes Solve ODE, Kinematic Control, the A* algorithm, Dijkstra's algorithm, Potential Field Path Planning, Chessboard Calibration, 2D Homography, KLT Optical Tracking, 3D Homography (AR), Point Cloud, PD Control, the Extended Kalman Filter, Mobile Inverted Pendulum, LQR MIP, PID MIP, Configuration Space, Quadcopter 1D, Quadcopter 2D, Occupancy Grid Map, Particle Localization, etc.
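
For instance, the path-planning examples rest on classic graph search; a minimal Dijkstra sketch (illustrative, with an assumed adjacency-list format, not the repository's code) is:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from `source` on a graph given as
    {node: [(neighbor, edge_weight), ...]} with non-negative weights."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    queue = [(0.0, source)]                  # min-heap of (distance, node)
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:
            continue                         # stale entry, already relaxed
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(queue, (dist[v], v))
    return dist

toy_graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(toy_graph, "A"))  # {'A': 0.0, 'B': 1.0, 'C': 3.0}
```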

Reinforcement Learning: Value Iteration, Policy Iteration, SARSA, Q-learning.
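
These methods share the Bellman backup at their core; a minimal tabular value-iteration sketch (with illustrative assumptions about how P and R are stored, not the repository's code) is:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """P[a] is an (S, S) transition matrix for action a; R is an (S, A) reward table.
    Returns the optimal state values and a greedy policy."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Bellman optimality backup: Q(s, a) = R(s, a) + gamma * sum_s' P(s'|s, a) V(s')
        Q = R + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)   # optimal values and greedy policy
        V = V_new
```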

[TODO] NLP topics: Finite State Transducer, Speech Processing, Recognition, Synthesis, Machine Translation, and the Attention Mechanism.

References

  • Murphy, Kevin P. Machine Learning: A Probabilistic Perspective. The MIT Press, 2012. ISBN 9780262018029.
  • Ribeiro, Marco Tulio, et al. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” HLT-NAACL Demos (2016).
  • Burgess, Christopher P., et al. “Understanding disentangling in β-VAE.” (2018).
  • Kingma, Diederik P., and Max Welling. “Auto-Encoding Variational Bayes.” CoRR abs/1312.6114 (2013).
  • Matthey, Loic, Irina Higgins, Demis Hassabis, and Alexander Lerchner. “dSprites: Disentanglement testing Sprites dataset.” https://github.com/deepmind/dsprites-dataset/ (2017).

Results

Result figures (preview images in the original README) include: LIME, ICA, PCA, MC Pi estimation, Gaussian Blob dataset, VAE, GAN, DCGAN, naive Bayes classifier, Robotic Arm (Kinematics), Walking Robot (Kinematics), Dijkstra algorithm, A* algorithm, InfoGAN, CGAN, LSGAN, ACGAN, Value Iteration, Q-learning, SARSA, Potential Field Path, 2D Homography, KLT Optical Track, 3D Homography, Point Cloud, Calibration, LDA, TrueSkill Ranking, Gaussian Process, Attention Mechanism (copied, not original work), Finite State Automata, Speech Synthesis, PolyFit with Inverse, PolyFit with LU/Cholesky Factor, Bayesian Inference for Polynomial Regression, Solve ODE, PD Track, Extended Kalman Filter, Mobile Inverted Pendulum, LQR MIP, PID MIP, Configuration Space, Quadcopter 1D, Quadcopter 2D, Occupancy Grid Map, Particle Localization.