
Udacity - Machine Learning Nanodegree

This repository contains project files and lecture notes for Udacity's Machine Learning Engineer Nanodegree program.

Lecture notes reference

Model evaluation and validation

Topics covered in this section:

  • Model evaluation
    Confusion matrix, F1 score, F-beta score, ROC curve
  • Model selection
    Types of errors, different types of cross validation, learning curves, grid search (see the sketch after this list)
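
To make these concrete, here is a minimal scikit-learn sketch (not from the course material; the dataset, classifier, and hyperparameter grid are arbitrary choices for illustration) that computes a confusion matrix, F1 and F-beta scores, and runs a cross-validated grid search:

```python
# Sketch only: dataset, model, and grid are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import confusion_matrix, f1_score, fbeta_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Grid search with 5-fold cross validation over a small hyperparameter grid
grid = GridSearchCV(DecisionTreeClassifier(random_state=42),
                    param_grid={"max_depth": [3, 5, 10]},
                    scoring="f1", cv=5)
grid.fit(X_train, y_train)

y_pred = grid.best_estimator_.predict(X_test)
print(confusion_matrix(y_test, y_pred))                  # rows: true, columns: predicted
print("F1:", f1_score(y_test, y_pred))
print("F-0.5:", fbeta_score(y_test, y_pred, beta=0.5))   # beta < 1 weights precision higher
```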

See lecture notes: here

Supervised learning

Topics covered in this section:

  • Linear regression
    Absolute trick, advantages and disadvantages, L1 regularisation, L2 regularisation (see the sketch after this list)
  • Decision trees
    Entropy, information gain, hyperparameters
  • Naive Bayes
    Prior probability, posterior probability, the naive Bayes algorithm
  • Support vector machines
    Core idea, different types of errors, basic working principle
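
As a small illustration of L1 versus L2 regularisation, here is a scikit-learn sketch (the dataset and the alpha value are arbitrary assumptions, not values from the lectures):

```python
# Sketch only: dataset and alpha are illustrative assumptions.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso, Ridge

X, y = load_diabetes(return_X_y=True)

lasso = Lasso(alpha=0.1).fit(X, y)   # L1: drives some coefficients to exactly zero
ridge = Ridge(alpha=0.1).fit(X, y)   # L2: shrinks all coefficients towards zero

print("Lasso coefficients:", lasso.coef_)   # note the sparsity
print("Ridge coefficients:", ridge.coef_)
```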

See lecture notes: here

Unsupervised learning

Topics covered in this section:

  • Clustering
    K-means clustering (see the sketch after this list)
  • Hierarchical and density-based clustering
    Hierarchical clustering, single-link clustering, complete-link clustering, average-link clustering, Ward's method, DBSCAN
  • Gaussian mixture model and cluster validation
    EM algorithm, cluster validation, external indices, internal indices, adjusted Rand index, silhouette coefficient
  • Feature scaling
  • PCA
  • Random projection and ICA
    Johnson-Lindenstrauss lemma, ICA, applications
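
A minimal scikit-learn sketch tying several of these together (feature scaling, PCA, K-means, and silhouette-based cluster validation; the dataset and the candidate values of k are arbitrary assumptions for illustration):

```python
# Sketch only: dataset and candidate k values are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)        # feature scaling before PCA / K-means

X_2d = PCA(n_components=2).fit_transform(X)  # project onto the top 2 components

# Internal cluster validation: pick k via the silhouette coefficient
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X_2d)
    print(k, silhouette_score(X_2d, labels))
```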

See lecture notes: here

Reinforcement learning

Topics covered in this section:

  • RL framework
    Reinforcement learning setting, episodic and continuing tasks, the reward hypothesis, cumulative reward, discounted return, Markov decision processes, Bellman equations, optimality, action-value functions
  • Dynamic programming
    Iterative policy evaluation, estimation of action values, policy improvement, policy iteration, truncated policy iteration, value iteration
  • Monte Carlo methods
    Predicting state values, estimating action-values, incremental mean, policy evaluation, policy improvement, exploration-exploitation dilemma, GLIE MC control algorithm, constant-alpha GLIE MC control algorithm
  • Temporal difference learning
    TD(0) prediction, action-value estimation, solving the control problem, Sarsamax (Q-learning), expected Sarsa (see the sketch after this list)
  • Deep reinforcement learning
    Discrete and continuous spaces, discretization, coarse coding, tile coding, function approximation, kernel functions
  • Deep Q-Learning
    NNs as value functions, Monte Carlo learning, TD learning, Q-learning, Sarsa vs. Q-learning, experience replay, fixed Q-targets, different types of DQNs
  • Policy-based methods
    Policy function approximation, stochastic policy search, policy gradients, Monte Carlo policy gradients, constrained policy gradients
  • Actor-critic methods
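
As an illustration of the Sarsamax (Q-learning) update, here is a minimal tabular sketch; the toy grid world, reward scheme, and hyperparameters are invented for this example and are not from the lecture material:

```python
# Sketch only: the grid-world environment and all hyperparameters are
# illustrative assumptions, not the course's environments or settings.
import random
from collections import defaultdict

N = 4                                   # 4x4 grid, goal in the bottom-right cell
GOAL = (N - 1, N - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, action):
    """Move on the grid; -1 reward per step, 0 on reaching the goal."""
    r, c = state
    dr, dc = action
    nxt = (min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1))
    return nxt, (0.0 if nxt == GOAL else -1.0), nxt == GOAL

Q = defaultdict(float)                  # action-value table, keyed by (state, action)
alpha, gamma, epsilon = 0.1, 0.99, 0.1

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy action selection (exploration-exploitation dilemma)
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Sarsamax update: bootstrap from the greedy action in the next state
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print("Q-value of moving right from the start:", Q[((0, 0), (0, 1))])
```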

See lecture notes: here

Deep learning

Topics covered in this section:

  • Neural networks
    Perceptron trick, perceptron algorithm, sigmoid activation, maximum likelihood, cross entropy, logistic regression, perceptron and gradient descent
  • Deep neural networks
    Regularisation, dropout, vanishing gradients and activation functions, momentum, Keras optimisers (see the sketch after this list)
  • Convolutional neural networks
    Model validation, image augmentation
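
A minimal Keras sketch showing dropout, an optimiser choice, and validation on a held-out split (the architecture and hyperparameters are arbitrary assumptions, not the course's models):

```python
# Sketch only: the architecture and hyperparameters are illustrative assumptions.
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.2),                      # dropout against overfitting
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",                     # one of the Keras optimisers
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Model validation: hold out 10% of the training data each epoch
model.fit(x_train, y_train, epochs=2, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```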

See lecture notes: here