Go to Course https://classroom.udacity.com/nanodegrees/nd188/
Course Notebooks https://github.com/udacity/deep-learning-v2-pytorch
Duration: 2018-11-09 ~ 2019-01-09
Approximate time to complete:
(10 ~ 15 hours/week) * 8 weeks = 80 ~ 120 hours
Final Project
Download Dataset
Version 1 (Accuracy 97.07%)
Version 2 (Accuracy 99.95%)
Training Logs
(The key is not using any data augmentation for this task.)
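A minimal sketch of what that means in practice, assuming the standard torchvision transforms API — the pipeline below only resizes, crops, and normalizes, deliberately leaving out random flips and rotations:

```python
from torchvision import transforms

# Illustrative pipeline: deterministic preprocessing only -- no
# RandomHorizontalFlip/RandomRotation, i.e. no data augmentation.
# Sizes and normalization stats follow the common ImageNet convention.
no_augmentation = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],
                         [0.229, 0.224, 0.225]),
])
```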
In this course, you'll learn the basics of deep neural networks and how to build various models using PyTorch. You'll get hands-on experience building state-of-the-art deep learning models.
https://www.udacity.com/course/deep-learning-nanodegree--nd101
- Learn the concepts behind deep learning and how we train deep neural networks with backpropagation.
- Cezanne Camacho and Soumith Chintala, the creator of PyTorch, chat about the past, present, and future of PyTorch.
- Learn how to build deep neural networks with PyTorch
- Build a state-of-the-art model using a pre-trained network that classifies cat and dog images
- Here you'll learn about convolutional neural networks, powerful architectures for solving computer vision problems.
- Build and train an image classifier from scratch to classify dog breeds.
- Use a trained network to transfer the style of one image to another image
- Implement the style transfer model from Gatys et al.
- Learn how to use recurrent neural networks to learn from sequences of data such as time series
- Build a recurrent network that learns from text and generates new text one character at a time
- Build and train a recurrent network that can classify the sentiment of movie reviews
- Learn how to use PyTorch's Hybrid Frontend to convert models from Python to C++ for use in production
- Build an image classifier from scratch that will identify different species of flowers
1.1. Introduction
https://youtu.be/tn-CrUTkCUc
1.2. Classification Problems 1
https://youtu.be/Dh625piH7Z0
1.3. Classification Problems 2
https://youtu.be/46PywnGa_cQ
1.4. Linear Boundaries
https://youtu.be/X-uMlsBi07k
1.5. Higher Dimensions
https://youtu.be/eBHunImDmWw
1.6. Perceptrons
https://youtu.be/hImSxZyRiOw
1.7. Why "Neural Networks"?
https://youtu.be/zAkzOZntK6Y
1.8. Perceptrons as Logical Operators
https://youtu.be/Y-ImuxNpS40
1.9. Perceptron Trick
https://youtu.be/-zhTROHtscQ
1.10. Perceptron Algorithm
https://youtu.be/p8Q3yu9YqYk
1.11. Non-Linear Regions
https://youtu.be/B8UrWnHh1Wc
1.12. Error Functions
https://youtu.be/YfUUunxWIJw
1.13. Log-loss Error Function
https://youtu.be/jfKShxGAbok
1.14. Discrete vs. Continuous
https://youtu.be/rdP-RPDFkl0
1.15. Softmax
https://youtu.be/NNoezNnAMTY
1.16. One-Hot Encoding
https://youtu.be/AePvjhyvsBo
1.17. Maximum Likelihood
https://youtu.be/1yJx-QtlvNI
1.18. Maximizing Probabilities
https://youtu.be/-xxrisIvD0E
1.19. Cross-Entropy 1
https://youtu.be/iREoPUrpXvE
1.20. Cross-Entropy 2
https://youtu.be/qvr_ego_d6w
1.21. Multi-Class Cross Entropy
https://youtu.be/keDswcqkees
1.22. Logistic Regression
https://youtu.be/V5kkHldUlVU
1.23. Gradient Descent
https://youtu.be/rhVIF-nigrY
1.24. Logistic Regression Algorithm
https://youtu.be/snxmBgi_GeU
1.25. Pre-Notebook: Gradient Descent
1.26. Notebook: Gradient Descent
Clone the repo from GitHub and open the notebook GradientDescent.ipynb in the intro-neural-networks > gradient-descent folder.
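For reference, a minimal sketch of the kind of update rule that notebook implements (names are illustrative): one gradient-descent step for logistic regression, nudging the weights in the direction that reduces the log-loss.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def update_weights(x, y, weights, bias, learnrate):
    # Predict, then move the weights against the gradient of the log-loss
    output = sigmoid(np.dot(x, weights) + bias)
    error = y - output                 # positive if we under-predicted
    weights += learnrate * error * x   # per-feature weight update
    bias += learnrate * error
    return weights, bias
```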
1.27. Perceptron vs. Gradient Descent
https://youtu.be/uL5LuRPivTA
1.28. Continuous Perceptrons
https://youtu.be/07-JJ-aGEfM
1.29. Non-Linear Data
https://youtu.be/F7ZiE8PQiSc
1.30. Non-Linear Models
https://youtu.be/HWuBKCZsCo8
1.31. Neural Network Architecture
https://youtu.be/Boy3zHVrWB4
1.32. Feedforward
https://youtu.be/hVCuvMGOfyY
1.33. Backpropagation
https://youtu.be/1SmY3TZTyUk
1.34. Pre-Notebook: Analyzing Student Data
1.35. Notebook: Analyzing Student Data
Clone the repo from GitHub and open the notebook StudentAdmissions.ipynb in the intro-neural-networks > student_admissions folder.
1.36. Training Optimization
https://youtu.be/UiGKhx9pUYc
1.37. Testing
https://youtu.be/EeBZpb-PSac
1.38. Overfitting and Underfitting
https://youtu.be/xj4PlXMsN-Y
1.39. Early Stopping
https://youtu.be/NnS0FJyVcDQ
1.40. Regularization
https://youtu.be/KxROxcRsHL8
1.41. Regularization 2
https://youtu.be/ndYnUrx8xvs
1.42. Dropout
https://youtu.be/Ty6K6YiGdBs
1.43. Local Minima
https://youtu.be/gF_sW_nY-xw
1.44. Random Restart
https://youtu.be/idyBBCzXiqg
1.45. Vanishing Gradient
https://youtu.be/W_JJm_5syFw
1.46. Other Activation Functions
https://youtu.be/kA-1vUt6cvQ
1.47. Batch vs. Stochastic Gradient Descent
https://youtu.be/2p58rVgqsgo
1.48. Learning Rate Decay
https://youtu.be/TwJ8aSZoh2U
1.49. Momentum
https://youtu.be/r-rYz_PEWC8
1.50. Error Functions around the World
https://youtu.be/34AAcTECu2A
2.1. Origins of PyTorch
https://youtu.be/0eLXNFv6aT8
2.2. Debugging and Designing PyTorch
https://youtu.be/Nn8140ECzPU
2.3. From Research to Production
https://youtu.be/Nn8140ECzPU
2.4. Hybrid Frontend
https://youtu.be/J4z-P8yUZu4
2.5. Cutting-edge Applications in PyTorch
https://youtu.be/s8p6vqOubqw
2.6. User Needs and Adding Features
https://youtu.be/7HH65_c7Acw
2.7. PyTorch and the Facebook Product
https://youtu.be/TjVveb0iVrA
2.8. The Future of PyTorch
https://youtu.be/vfCg3FoOjE4
2.9. Learning More in AI
https://youtu.be/NMItGw0GFGM
3.1. Welcome
Navigate to the intro-to-pytorch directory in the repo
3.2. Single layer neural networks
https://youtu.be/6Z7WntXays8
3.3. Single layer neural networks solution
https://youtu.be/mNJ8CujTtpo
3.4. Networks Using Matrix Multiplication
https://youtu.be/QLaGMz8Ca3E
3.5. Multilayer Networks Solution
https://youtu.be/iMIo9p5iSbE
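A minimal sketch of the single-layer network these videos build, using nothing beyond core torch operations:

```python
import torch

torch.manual_seed(7)                 # reproducible random numbers

features = torch.randn(1, 5)         # one sample with five input features
weights = torch.randn(5, 1)          # weights for a single output unit
bias = torch.randn(1, 1)

# y = sigmoid(xW + b); torch.mm performs the matrix multiplication
y = torch.sigmoid(torch.mm(features, weights) + bias)
```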
3.6. Neural Networks in PyTorch
https://youtu.be/CSQOdOb2mlg
3.7. Neural Networks Solution
https://youtu.be/zym36ihtOMY
3.8. Implementing Softmax Solution
https://youtu.be/8KRX7HvqfP0
3.9. Network Architectures in PyTorch
https://youtu.be/9ILiZwbi9dA
3.10. Network Architectures Solution
https://youtu.be/zBWlOeX2sQM
3.11. Training a Network Solution
https://youtu.be/ExyFG2MjsKs
3.12. Classifying Fashion-MNIST
https://youtu.be/AEJV_RKZ7VU
3.13. Fashion-MNIST Solution
https://youtu.be/R6Y4hPLVQWM
3.14. Inference and Validation
https://youtu.be/XACXlkIdS7Y
3.15. Validation Solution
https://youtu.be/AjrXltxqsK4
3.16. Dropout Solution
https://youtu.be/3Py2SbtZLbc
3.17. Saving and Loading Models
https://youtu.be/psmrPu-mseA
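The pattern this video recommends, sketched with a hypothetical toy network: save the state dict rather than the whole model object, then rebuild the same architecture before restoring the weights.

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# Save only the learned parameters (the state dict), not the whole object
torch.save(model.state_dict(), 'checkpoint.pth')

# To load, rebuild the same architecture first, then restore the weights
same_arch = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
same_arch.load_state_dict(torch.load('checkpoint.pth'))
```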
3.18. Loading Image Data
https://youtu.be/hFu7GTfRWks
3.19. Loading Image Data Solution
https://youtu.be/d_NhvI1yEf0
3.20. Transfer Learning
https://youtu.be/S9F7MtJ5jls
3.21. Transfer Learning Solution
https://youtu.be/4n6T93hKRD4
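A minimal transfer-learning sketch; DenseNet is illustrative here, and any torchvision model follows the same pattern: freeze the pre-trained features, replace the classifier, and train only the new layer.

```python
from torch import nn
from torchvision import models

# Load a network pre-trained on ImageNet
model = models.densenet121(pretrained=True)

# Freeze the feature extractor so only the new classifier is trained
for param in model.parameters():
    param.requires_grad = False

# Swap in a classifier sized for the new task (e.g. 2 classes: cat/dog)
model.classifier = nn.Linear(model.classifier.in_features, 2)
```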
3.22. Tips, Tricks, and Other Notes
PyTorch can only perform operations between tensors that are on the same device, so both must be on the CPU or both on the GPU.
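A common device-handling pattern, sketched with a hypothetical one-layer model:

```python
import torch
from torch import nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(10, 2).to(device)   # move the model's parameters
x = torch.randn(1, 10).to(device)     # move the input tensor too

output = model(x)      # works because model and input share a device
result = output.cpu()  # bring tensors back to the CPU, e.g. for NumPy
```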
4.1. Introducing Alexis
https://youtu.be/38ExGpdyvJI
4.2. Applications of CNNs
https://youtu.be/HrYNL_1SV2Y
4.3. Lesson Outline
https://youtu.be/77LzWE1qQrc
4.4. MNIST Data
https://youtu.be/a7bvIGZpcnk
4.5. How Computers Interpret Images
https://youtu.be/mEPfoM68Fx4
4.6. MLP Structure & Class Scores
https://youtu.be/fP0Odiai8sk
4.7. Do Your Research
https://youtu.be/CR4JeAn1fgk
4.8. Loss & Optimization
https://youtu.be/BmPDtSXv18w
4.9. Training the Network
https://youtu.be/904bfqibcCw
4.11. Pre-Notebook: MLP Classification, MNIST
4.12. Notebook: MLP Classification, MNIST
Clone the repo from GitHub and open the notebook mnist_mlp_exercise.ipynb in the convolutional-neural-networks > mnist-mlp folder.
4.13. One Solution
https://youtu.be/7q37WPjQhDA
4.14. Model Validation
https://youtu.be/b5934VsV3SA
4.15. Validation Loss
https://youtu.be/uGPP_-pbBsc
4.16. Image Classification Steps
https://youtu.be/UHFBnitKraA
4.17. MLPs vs CNNs
https://youtu.be/Q7CR3cCOtJQ
4.18. Local Connectivity
https://youtu.be/z9wiDg0w-Dc
4.19. Filters and the Convolutional Layer
https://youtu.be/x_dhnhUzFNo
4.20. Filters & Edges
https://youtu.be/hfqNqcEU6uI
4.21. Frequency in Images
As with sound, frequency in images is a rate of change. Images change in space, and a high-frequency image is one where the intensity changes a lot from one pixel to the next.
4.22. High-pass Filters
https://youtu.be/OpcFn_H2V-Q
4.23. Quiz: Kernels
Of the four kernels pictured above, which would be best for finding and enhancing horizontal edges and lines in an image?
4.24. OpenCV & Creating Custom Filters
4.25. Notebook: Finding Edges
Clone the repo from GitHub and open the notebook custom_filters.ipynb in the convolutional-neural-networks > conv-visualization folder.
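In the same spirit as that notebook, a minimal high-pass filtering sketch with OpenCV (the image path is hypothetical):

```python
import cv2
import numpy as np

image = cv2.imread('images/city.jpg')            # hypothetical path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # filter on intensity only

# A 3x3 Sobel kernel: responds strongly to vertical edges
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

# filter2D convolves the kernel over the image; depth -1 keeps the dtype
filtered = cv2.filter2D(gray, -1, sobel_x)
```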
4.26. Convolutional Layer
In practice, you'll also find that many neural networks learn to detect the edges of images, because the edges of an object contain valuable information about its shape.
4.27. Convolutional Layers (Part 2)
https://youtu.be/RnM1D-XI--8
4.28. Stride and Padding
https://youtu.be/GmStpNi8jBI
4.29. Pooling Layers
https://youtu.be/_Ok5xZwOtrk
4.30. Notebook: Layer Visualization
4.31. Increasing Depth
https://youtu.be/YKif1KNpWeE
4.32. CNNs for Image Classification
https://youtu.be/smaw5GqRaoY
4.33. Convolutional Layers in PyTorch
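A minimal sketch of declaring convolutional and pooling layers with the standard nn module (channel and kernel sizes are illustrative):

```python
from torch import nn

# 1 input channel (grayscale), 16 filters, 3x3 kernels; padding=1 with a
# 3x3 kernel preserves the spatial size of the input
conv = nn.Conv2d(in_channels=1, out_channels=16,
                 kernel_size=3, stride=1, padding=1)

# Max pooling with kernel and stride of 2 halves the spatial dimensions
pool = nn.MaxPool2d(kernel_size=2, stride=2)
```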
4.34. Feature Vector
https://youtu.be/g6QuiVno8zI
4.35. CIFAR Classification Example
https://youtu.be/FF_EmZ2sf2w
4.36. Notebook: CNN Classification
To open this notebook, go to your local repo (found here on GitHub) and open the notebook cifar10_cnn_exercise.ipynb in the convolutional-neural-networks > cifar-cnn folder.
4.37. CNNs in PyTorch
https://youtu.be/GNxzWfiz3do
4.38. Image Augmentation
https://youtu.be/zQnx2jZmjTA
4.39. Augmentation Using Transformations
https://youtu.be/J_gjHVt9pVw
4.40. Groundbreaking CNN Architectures
https://youtu.be/GdYOqihgb2k
4.41. Visualizing CNNs (Part 1)
https://youtu.be/mnqS_EhEZVg
4.42. Visualizing CNNs (Part 2)
The CNN we will look at is trained on ImageNet as described in this paper by Zeiler and Fergus.
4.43. Summary of CNNs
https://youtu.be/Te9QCvhx6N8
5.1. Style Transfer
https://youtu.be/_urN9BQ7RHM
5.2. Separating Style & Content
https://youtu.be/PNFFAhymuHc
5.3. VGG19 & Content Loss
https://youtu.be/PQ1UuzOIjCM
5.4. Gram Matrix
https://youtu.be/e718uVAW3KU
5.5. Style Loss
https://youtu.be/VazrQ7u-OHo
5.6. Loss Weights
https://youtu.be/qO8oiZBtG1I
5.7. VGG Features
https://youtu.be/Q5N2NEv7ADc
5.8. Notebook: Style Transfer
To open this notebook, go to your local repo (from here on GitHub) and open the notebook Style_Transfer_Exercise.ipynb in the style-transfer folder.
5.9. Features & Gram Matrix
https://youtu.be/f89x9oAh6X0
5.10. Gram Matrix Solution
https://youtu.be/uncCKMI5Yns
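A minimal sketch of the computation these videos cover, assuming a batch size of one: the Gram matrix captures correlations between the feature maps of a convolutional layer.

```python
import torch

def gram_matrix(tensor):
    _, d, h, w = tensor.size()          # batch, depth, height, width
    features = tensor.view(d, h * w)    # flatten each feature map to a row
    return torch.mm(features, features.t())
```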
5.11. Defining the Loss
https://youtu.be/lix8d3B2QcE
5.12. Total Loss & Complete Solution
https://youtu.be/DzaQm9awcwY
6.1. Intro to RNNs
Chris Olah's LSTM post
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
Edwin Chen's LSTM post
http://blog.echen.me/2017/05/30/exploring-lstms/
Andrej Karpathy's blog post on RNNs
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
Andrej Karpathy's lecture on RNNs and LSTMs from CS231n
https://www.youtube.com/watch?v=iX5V1WpxxkY
6.2. RNN vs. LSTM
https://youtu.be/70MgF-IwAr8
6.3. Basics of LSTM
https://youtu.be/gjb68a4XsqE
6.4. Architecture of LSTM
https://youtu.be/ycwthhdx8ws
6.5. The Learn Gate
https://youtu.be/aVHVI7ovbHY
6.6. The Forget Gate
https://youtu.be/iWxpfxLUPSU
6.7. The Remember Gate
https://youtu.be/0qlm86HaXuU
6.8. The Use Gate
https://youtu.be/5Ifolm1jTdY
6.9. Putting it All Together
https://youtu.be/IF8FlKW-Zo0
6.10. Other architectures
https://youtu.be/MsxFDuYlTuQ
6.11. Implementing RNNs
First, I'll show you how to learn from time-series data. Then, you'll implement a character-level RNN.
6.12. Time-Series Prediction
The video below walks through code from our public GitHub repository; navigate to recurrent-neural-networks > time-series and open the Simple_RNN.ipynb notebook.
https://youtu.be/xV5jHLFfJbQ
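A minimal sketch in the spirit of that notebook (layer sizes are illustrative): an RNN layer followed by a linear layer that maps each hidden state to a prediction.

```python
from torch import nn

class SimpleRNN(nn.Module):
    def __init__(self, input_size=1, hidden_size=32, output_size=1):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x, hidden):
        out, hidden = self.rnn(x, hidden)   # out: (batch, seq, hidden)
        return self.fc(out), hidden         # one prediction per time step
```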
6.13. Training & Memory
https://youtu.be/sx7T_KP5v9I
6.14. Character-wise RNNs
https://youtu.be/dXl3eWCGLdU
6.15. Sequence Batching
https://youtu.be/Z4OiyU0Cldg
6.16. Notebook: Character-Level RNN
To open this notebook, go to our notebook repo (available from here on GitHub) and open the notebook Character_Level_RNN_Exercise.ipynb in the recurrent-neural-networks > char-rnn folder.
6.17. Implementing a Char-RNN
https://youtu.be/MMtgZXzFB10
6.18. Batching Data, Solution
https://youtu.be/9Eg0wf3eW-k
6.19. Defining the Model
https://youtu.be/_LWzyqq4hCY
6.20. Char-RNN, Solution
https://youtu.be/ed33qePHrJM
6.21. Making Predictions
https://youtu.be/BhrpV3kwATo
7.1. Sentiment Analysis RNNs
We'll be training the model on a dataset of movie reviews from IMDB that have been labeled either "positive" or "negative".
7.2. Notebook: Sentiment RNN
To open this notebook, go to our notebook repo (available from here on GitHub) and open the notebook Sentiment_RNN_Exercise.ipynb in the sentiment-rnn folder.
7.3. Data Pre-Processing
https://youtu.be/Xw1MWmql7no
7.4. Encoding Words, Solution
https://youtu.be/4RYyn3zv1Hg
7.5. Getting Rid of Zero-Length
https://youtu.be/Hs6ithuvDJg
7.6. Cleaning & Padding Data
https://youtu.be/UgPo1_cq-0g
7.7. Padded Features, Solution
https://youtu.be/sYOd1IDmep8
7.8. TensorDataset & Batching Data
https://youtu.be/Oxuf2QIPjj4
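A minimal sketch of the pattern from this video; the feature and label tensors below are random stand-ins for the padded reviews and sentiment labels built earlier in the notebook.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

features = torch.randint(0, 1000, (50, 200))   # 50 reviews, 200 tokens each
labels = torch.randint(0, 2, (50,))            # binary sentiment labels

train_data = TensorDataset(features, labels)
train_loader = DataLoader(train_data, batch_size=10, shuffle=True)

for inputs, targets in train_loader:
    pass  # each iteration yields one shuffled batch of (inputs, targets)
```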
7.9. Defining the Model
https://youtu.be/SpvIZl1YQRI
7.10. Complete Sentiment RNN
https://github.com/udacity/deep-learning-v2-pytorch/blob/master/sentiment-rnn/Sentiment_RNN_Solution.ipynb
7.11. Training the Model
After defining the model, the next step is to instantiate it with some hyperparameters.
7.12. Testing
I want to show you two great ways to test: using test data and using inference.
7.13. Inference, Solution
One of the coolest ways to test a model like this is to give it user-generated data, without any true label, and see what happens.
8.1. Welcome!
Welcome to this lesson on using PyTorch in production. PyTorch has been most popular in research settings due to its flexibility, expressiveness, and general ease of development. However, adoption has been slower in industry because it wasn't as useful in production environments, which typically require models to run in C++. To address this, PyTorch 1.0 introduces new features for exporting your models from Python into C++.
https://pytorch.org/tutorials/advanced/cpp_export.html
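A minimal tracing sketch, with a stand-in model instead of a trained one:

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 2))   # stand-in for a trained network
example = torch.randn(1, 10)              # example input of the right shape

# Tracing records the operations executed on the example input and yields
# a ScriptModule that can be serialized and later loaded from C++
traced = torch.jit.trace(model, example)
traced.save('traced_model.pt')
```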
8.2. Installing PyTorch 1.0
https://youtu.be/kIwKPxgReFY
8.3. PyTorch for Production
https://youtu.be/DBSoZWd4lQo
8.4. Torch Script & Tracing
https://youtu.be/lYmQDUprQa0
8.5. Annotations
https://youtu.be/pO1RM7mKaFg
8.6. PyTorch C++ API
https://youtu.be/P1S1dN1gHmw
8.7. Want to learn more?
Deep Learning Nanodegree Program
https://www.udacity.com/course/deep-learning-nanodegree--nd101
9.1. Final Project
As the final part of your scholarship challenge, you'll be completing a project to test what you've learned. Here, you'll build an image classifier from scratch that will identify different species of flowers.
Download Dataset https://s3.amazonaws.com/content.udacity-data.com/courses/nd188/flower_data.zip
4.20 Style Transfer - Chinese Monkey King × Der Blaue Reiter
4.20 Style Transfer - Octopus × David Hockney
7.16 Character-Level LSTM Generating A Chapter of "Anna Karenina"
And Levin said that he would not be as the full subject
was bought that he would have to see himself about the servant or to be so
grouse.
"You've been bagreed about,
I shall sat down, to despive me about me, that's all so time in your confidence in a condition to the
company, all of the clear staircess.
Thinking
of that, the sick man when she could
how interest them in the princess, but I would come about her, and he had not been to
speak, and I can't be
then to be a minute. You will be in a step of the marshal, and so it
simply may anyone."
"Oh, they, you're going to settle on your fashionable
military face," said Stepan Arkadyevitch with
an anchot attitude to her.
...