My continuously updated Machine Learning, Probabilistic Models and Deep Learning notes and demos (1000+ slides), with links to video lectures
Probabilities and Deep Learning

Topics include: Expectation-Maximization & Matrix Capsule Networks; Determinantal Point Processes & Neural Network compression; Kalman Filter & LSTM; Model estimation & Binary classifiers

Detailed illustration of Noise Contrastive Estimation (details & derivations), Probability Density Re-parameterization, Natural Gradients

Video tutorials for these notes.

  • I recorded about 20% of these notes as videos in 2015, in Mandarin (all my notes and writings are in English). You may find them on YouTube, 哔哩哔哩 (Bilibili) and 优酷 (Youku)

  • I am always looking for high-quality PhD students in Machine Learning, in both probabilistic models and deep learning. Contact me at YiDa.Xu@uts.edu.au

Data Science

An extremely gentle 30-minute introduction to AI and Machine Learning. Thanks to my PhD student Haodong Chang for assisting with the editing

Classification: Logistic and Softmax regression; Regression: linear, polynomial; Mixed Effect models [costFunction.m] and [soft_max.m]
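
The linked MATLAB demos are not reproduced here, but the core of softmax classification can be sketched in a few lines of NumPy (a minimal illustration with my own function names, not the repository's costFunction.m or soft_max.m):

```python
import numpy as np

def softmax(z):
    # subtract the row max before exponentiating, for numerical stability
    z = z - np.max(z, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy_loss(W, X, y):
    # mean negative log-likelihood of the true labels under softmax(X @ W)
    probs = softmax(X @ W)                      # (n_samples, n_classes)
    return -np.mean(np.log(probs[np.arange(len(y)), y]))
```

Minimizing this loss with gradient descent recovers multinomial logistic regression.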

Collaborative Filtering, Factorization Machines, Non-Negative Matrix Factorization, the Multiplicative Update Rule
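
The multiplicative update rule for NMF can be sketched as follows (a minimal NumPy illustration of the Lee & Seung updates, under my own function name):

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=1000, eps=1e-10, seed=0):
    # Lee & Seung multiplicative updates minimising ||V - W H||_F^2;
    # each update keeps W and H elementwise non-negative by construction
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```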

classic PCA and t-SNE
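
Classic PCA reduces to an SVD of the centred data matrix; a minimal sketch (my own function name, not code from these notes):

```python
import numpy as np

def pca(X, k):
    # principal components are the top right-singular vectors of the centred data
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]      # (scores, components)
```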

Three perspectives on machine learning and data science: supervised vs unsupervised learning, classification accuracy

Deep Learning (Jupyter-style notes coming in 2018)

Optimisation methods in general, not limited to deep learning

basic neural networks and the multilayer perceptron

detailed explanation of CNNs, various loss functions (Centre Loss, Contrastive Loss), Residual Networks, YOLO, SSD

Word2Vec, skip-gram, GloVe, Noise Contrastive Estimation, Negative sampling, Gumbel-max trick
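
The Gumbel-max trick mentioned above can be demonstrated in a few lines (a minimal sketch with my own function name):

```python
import numpy as np

def gumbel_max_sample(logits, rng):
    # argmax(logits + Gumbel noise) is distributed as Categorical(softmax(logits))
    g = -np.log(-np.log(rng.random(len(logits))))   # standard Gumbel draws
    return int(np.argmax(logits + g))
```

This turns categorical sampling into an argmax, which is what makes relaxations such as Gumbel-softmax possible.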

RNN, LSTM, Seq2Seq with Attention, Beam Search, Attention Is All You Need, Convolutional Seq2Seq, Pointer Networks

basics of reinforcement learning: Markov Decision Processes and the Bellman Equation, moving on to Deep Q-Learning (under construction)
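
The Bellman optimality equation can be solved for a small tabular MDP by value iteration; a minimal sketch (my own function name, not code from these notes):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, n_iter=500):
    # Bellman optimality: V(s) = max_a sum_s' P[a,s,s'] * (R[a,s,s'] + gamma * V(s'))
    n_states = P.shape[1]
    V = np.zeros(n_states)
    for _ in range(n_iter):
        Q = np.einsum('ast,ast->as', P, R + gamma * V[None, None, :])
        V = Q.max(axis=0)             # act greedily with respect to Q
    return V
```

Deep Q-Learning replaces the table V (or Q) with a neural network and the exact maximisation with stochastic gradient steps.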

basic knowledge of Restricted Boltzmann Machines (RBM)

Probability and Statistics Background

revision of Bayesian models, including the Bayesian predictive model and conditional expectation

some useful distributions, conjugacy, MLE, MAP, Exponential family and natural parameters
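
Conjugacy, MLE and MAP can all be illustrated with the Beta-Bernoulli pair (a minimal sketch, my own function name):

```python
def beta_bernoulli_posterior(heads, tails, a=1.0, b=1.0):
    # conjugacy: Beta(a, b) prior + Bernoulli likelihood -> Beta(a + heads, b + tails)
    a_post, b_post = a + heads, b + tails
    mle = heads / (heads + tails)                     # maximum likelihood estimate
    map_est = (a_post - 1) / (a_post + b_post - 2)    # posterior mode (MAP)
    return a_post, b_post, mle, map_est
```

Under a uniform Beta(1, 1) prior the MAP estimate coincides with the MLE, which is a quick sanity check on both formulas.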

useful statistical properties to help us prove things, including the Chebyshev and Markov inequalities
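
Markov's inequality is easy to check numerically; note that it holds exactly for the empirical distribution of any non-negative sample, not just in expectation (a small sketch, my own function name):

```python
import numpy as np

def markov_bound_holds(samples, a):
    # Markov's inequality: for X >= 0 and a > 0, P(X >= a) <= E[X] / a
    return np.mean(samples >= a) <= samples.mean() / a
```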

Probabilistic Model

Proof of convergence for EM, with examples of EM via the Gaussian Mixture Model; [gmm_demo.m] and [kmeans_demo.m] and [优酷 link]
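
The linked gmm_demo.m is in MATLAB; the same E-step/M-step loop for a two-component 1-D mixture can be sketched in NumPy (a minimal illustration, my own function name):

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    # EM for a two-component 1-D Gaussian mixture, initialised from the data range
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = P(z_i = k | x_i, current parameters)
        like = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = like / like.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum likelihood updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var
```

Each iteration is guaranteed not to decrease the data log-likelihood, which is the convergence result proved in the notes.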

detailed explanation of the Kalman Filter and Hidden Markov Model; [kalman_demo.m], [HMM 优酷 link] and [Kalman Filter 优酷 link]
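
The scalar predict/update cycle of the Kalman Filter fits in a dozen lines (a minimal NumPy sketch, not the repository's kalman_demo.m; all parameter names are my own):

```python
import numpy as np

def kalman_filter_1d(ys, F=1.0, H=1.0, Q=1e-3, R=1.0, x0=0.0, P0=1.0):
    # scalar Kalman filter for x_t = F x_{t-1} + w_t,  y_t = H x_t + v_t
    x, P, out = x0, P0, []
    for y in ys:
        x, P = F * x, F * P * F + Q            # predict
        K = P * H / (H * P * H + R)            # Kalman gain
        x = x + K * (y - H * x)                # update with the measurement
        P = (1 - K * H) * P
        out.append(x)
    return np.array(out)
```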

Inference

explains Variational Bayes for both non-exponential and exponential family distributions, plus stochastic variational inference; [vb_normal_gamma.m] and [优酷 link]

stochastic matrices, the Power Method Convergence Theorem, detailed balance and the PageRank algorithm
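
PageRank is exactly the power method applied to a damped stochastic matrix; a minimal sketch (my own function name, and it assumes every node has at least one out-link):

```python
import numpy as np

def pagerank(A, damping=0.85, n_iter=100):
    # power iteration on the damped ("Google") matrix built from adjacency matrix A
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    G = damping * P + (1 - damping) / n       # teleportation makes the chain ergodic
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = r @ G
    return r / r.sum()
```

The teleportation term guarantees a unique stationary distribution, which is what the Power Method Convergence Theorem requires.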

inverse CDF sampling, rejection sampling, adaptive rejection sampling, importance sampling; [adaptive_rejection_sampling.m] and [hybrid_gmm.m]
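
The two simplest of these samplers can be sketched directly (a minimal NumPy illustration, not the linked MATLAB demos; function names are my own):

```python
import numpy as np

def sample_exponential(lam, size, rng):
    # inverse-CDF sampling: if U ~ Uniform(0, 1) then -log(1 - U)/lam ~ Exp(lam)
    return -np.log(1 - rng.random(size)) / lam

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M, size, rng):
    # accept x ~ proposal with probability target(x) / (M * proposal(x));
    # valid whenever target(x) <= M * proposal(x) everywhere
    out = []
    while len(out) < size:
        x = proposal_sample(rng)
        if rng.random() < target_pdf(x) / (M * proposal_pdf(x)):
            out.append(x)
    return np.array(out)
```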

Metropolis-Hastings (M-H), Gibbs, Slice Sampling, Elliptical Slice Sampling, Swendsen-Wang; demonstrates collapsed Gibbs sampling using LDA; [lda_gibbs_example.m], [test_autocorrelation.m], [gibbs.m] and [优酷 link]
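
Random-walk Metropolis, the simplest M-H variant, can be sketched in a few lines (a minimal illustration, my own function name):

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    # random-walk Metropolis: symmetric Gaussian proposal, accept with min(1, ratio)
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal()
        # compare in log space for numerical stability
        if np.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return np.array(samples)
```

Because the proposal is symmetric, the Hastings correction cancels and only the target ratio remains.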

Sequential Monte Carlo, the Condensation Filter algorithm, the Auxiliary Particle Filter; [优酷 link]
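
The bootstrap (SIR) particle filter is the basic Sequential Monte Carlo scheme; a minimal sketch for a random-walk state-space model (my own function name and model assumptions):

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles=1000, q=0.1, r=0.5, seed=0):
    # bootstrap (SIR) filter for x_t = x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2)
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)
    estimates = []
    for y in ys:
        particles = particles + q * rng.standard_normal(n_particles)  # propagate
        w = np.exp(-(y - particles) ** 2 / (2 * r ** 2))              # weight by likelihood
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        particles = rng.choice(particles, size=n_particles, p=w)      # resample
    return np.array(estimates)
```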

Advanced Probabilistic Model

Dirichlet Process (DP), Chinese Restaurant Process insights, slice sampling for the DP; [dirichlet_process.m], [优酷 link] and [Jupyter Notebook]
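
The Chinese Restaurant Process view of the DP can be simulated directly (a minimal sketch, not the repository's dirichlet_process.m; the function name is my own):

```python
import numpy as np

def chinese_restaurant_process(n_customers, alpha, seed=0):
    # customer n joins occupied table k with probability n_k / (n + alpha),
    # or opens a new table with probability alpha / (n + alpha)
    rng = np.random.default_rng(seed)
    tables, assignments = [], []
    for n in range(n_customers):
        probs = np.array(tables + [alpha]) / (n + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)          # open a new table
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables
```

The rich-get-richer dynamics are visible in the table sizes: a few early tables absorb most customers, while the expected number of tables grows only logarithmically in n.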

Hierarchical DP, HDP-HMM, Indian Buffet Process (IBP)

explains the details of the DPP's marginal distribution, the L-ensemble and its sampling strategy, and our work on time-varying DPPs