
Translation of the book "Dive into Deep Learning"

License

This repository contains three license files: LICENSE (unspecified), LICENSE-SAMPLECODE (MIT-0), and LICENSE-SUMMARY (unspecified).

thunguyenuehk39/d2l-vn

 
 


Project to translate the book "Dive into Deep Learning"

The translated book is published at https://d2l.aivivn.com/.

Guide to contributing to the project

Translation order

For subsections (2.1, 2.2, ...):

  • [x] Translation complete
  • [-] Translation in progress
  • [ ] Not yet started

For chapters (2., 3., ...):

  • [ ] Not yet revised
  • [-] Revision in progress
  • [x] Revision complete

Table of Contents

  • Preface
  • Installation
  • Notation
  • Introduction
  • 2. Preliminaries
    • [-] 2.1. Data Manipulation
    • [-] 2.2. Data Preprocessing
    • 2.3. Linear Algebra
    • [-] 2.4. Calculus
    • [-] 2.5. Automatic Differentiation
    • [-] 2.6. Probability
    • 2.7. Documentation
  • 3. Linear Neural Networks
    • 3.1. Linear Regression
    • 3.2. Linear Regression Implementation from Scratch
    • 3.3. Concise Implementation of Linear Regression
    • 3.4. Softmax Regression
    • 3.5. The Image Classification Dataset (Fashion-MNIST)
    • 3.6. Implementation of Softmax Regression from Scratch
    • 3.7. Concise Implementation of Softmax Regression
  • 4. Multilayer Perceptrons
    • 4.1. Multilayer Perceptrons
    • 4.2. Implementation of Multilayer Perceptron from Scratch
    • 4.3. Concise Implementation of Multilayer Perceptron
    • 4.4. Model Selection, Underfitting and Overfitting
    • 4.5. Weight Decay
    • 4.6. Dropout
    • 4.7. Forward Propagation, Backward Propagation, and Computational Graphs
    • 4.8. Numerical Stability and Initialization
    • 4.9. Considering the Environment
    • 4.10. Predicting House Prices on Kaggle
  • 5. Deep Learning Computation
    • 5.1. Layers and Blocks
    • 5.2. Parameter Management
    • 5.3. Deferred Initialization
    • 5.4. Custom Layers
    • 5.5. File I/O
    • 5.6. GPUs
  • 6. Convolutional Neural Networks
    • 6.1. From Dense Layers to Convolutions
    • 6.2. Convolutions for Images
    • 6.3. Padding and Stride
    • 6.4. Multiple Input and Output Channels
    • 6.5. Pooling
    • 6.6. Convolutional Neural Networks (LeNet)
  • 7. Modern Convolutional Networks
    • 7.1. Deep Convolutional Neural Networks (AlexNet)
    • 7.2. Networks Using Blocks (VGG)
    • 7.3. Network in Network (NiN)
    • 7.4. Networks with Parallel Concatenations (GoogLeNet)
    • 7.5. Batch Normalization
    • 7.6. Residual Networks (ResNet)
    • 7.7. Densely Connected Networks (DenseNet)
  • 8. Recurrent Neural Networks
    • 8.1. Sequence Models
    • 8.2. Text Preprocessing
    • 8.3. Language Models and the Dataset
    • 8.4. Recurrent Neural Networks
    • 8.5. Implementation of Recurrent Neural Networks from Scratch
    • 8.6. Concise Implementation of Recurrent Neural Networks
    • 8.7. Backpropagation Through Time
  • 9. Modern Recurrent Networks
    • 9.1. Gated Recurrent Units (GRU)
    • 9.2. Long Short-Term Memory (LSTM)
    • 9.3. Deep Recurrent Neural Networks
    • 9.4. Bidirectional Recurrent Neural Networks
    • 9.5. Machine Translation and the Dataset
    • 9.6. Encoder-Decoder Architecture
    • 9.7. Sequence to Sequence
    • 9.8. Beam Search
  • 10. Attention Mechanisms
    • 10.1. Attention Mechanisms
    • 10.2. Sequence to Sequence with Attention Mechanisms
    • 10.3. Transformer
  • 11. Optimization Algorithms
    • 11.1. Optimization and Deep Learning
    • 11.2. Convexity
    • 11.3. Gradient Descent
    • 11.4. Stochastic Gradient Descent
    • 11.5. Minibatch Stochastic Gradient Descent
    • 11.6. Momentum
    • 11.7. Adagrad
    • 11.8. RMSProp
    • 11.9. Adadelta
    • 11.10. Adam
    • 11.11. Learning Rate Scheduling
  • 12. Computational Performance
    • [-] 12.1. Compilers and Interpreters
    • 12.2. Asynchronous Computation
    • 12.3. Automatic Parallelism
    • 12.4. Hardware
    • 12.5. Training on Multiple GPUs
    • 12.6. Concise Implementation for Multiple GPUs
    • 12.7. Parameter Servers
  • 13. Computer Vision
    • 13.1. Image Augmentation
    • 13.2. Fine Tuning
    • 13.3. Object Detection and Bounding Boxes
    • 13.4. Anchor Boxes
    • 13.5. Multiscale Object Detection
    • 13.6. The Object Detection Dataset (Pikachu)
    • 13.7. Single Shot Multibox Detection (SSD)
    • 13.8. Region-based CNNs (R-CNNs)
    • 13.9. Semantic Segmentation and the Dataset
    • 13.10. Transposed Convolution
    • 13.11. Fully Convolutional Networks (FCN)
    • 13.12. Neural Style Transfer
    • 13.13. Image Classification (CIFAR-10) on Kaggle
    • 13.14. Dog Breed Identification (ImageNet Dogs) on Kaggle
  • 14. Natural Language Processing
    • 14.1. Word Embedding (word2vec)
    • 14.2. Approximate Training for Word2vec
    • 14.3. The Dataset for Word2vec
    • 14.4. Implementation of Word2vec
    • 14.5. Subword Embedding
    • 14.6. Word Embedding with Global Vectors (GloVe)
    • 14.7. Finding Synonyms and Analogies
    • 14.8. Sentiment Analysis and the Dataset
    • 14.9. Sentiment Analysis: Using Recurrent Neural Networks
    • 14.10. Sentiment Analysis: Using Convolutional Neural Networks
    • 14.11. Natural Language Inference and the Dataset
  • 15. Recommender Systems
    • 15.1. Overview of Recommender Systems
    • 15.2. The MovieLens Dataset
    • 15.3. Matrix Factorization
    • 15.4. AutoRec: Rating Prediction with Autoencoders
    • 15.5. Personalized Ranking for Recommender Systems
    • 15.6. Neural Collaborative Filtering for Personalized Ranking
    • 15.7. Sequence-Aware Recommender Systems
    • 15.8. Feature-Rich Recommender Systems
    • 15.9. Factorization Machines
    • 15.10. Deep Factorization Machines
  • 16. Generative Adversarial Networks
    • 16.1. Generative Adversarial Networks
    • 16.2. Deep Convolutional Generative Adversarial Networks
  • 17. Appendix: Mathematics for Deep Learning
    • 17.1. Geometry and Linear Algebraic Operations
    • 17.2. Eigendecompositions
    • 17.3. Single Variable Calculus
    • 17.4. Multivariable Calculus
    • 17.5. Integral Calculus
    • 17.6. Random Variables
    • 17.7. Maximum Likelihood
    • 17.8. Naive Bayes
    • 17.9. Statistics
    • 17.10. Information Theory
  • 18. Appendix: Tools for Deep Learning
    • 18.1. Using Jupyter
    • 18.2. Using Amazon SageMaker
    • 18.3. Using AWS EC2 Instances
    • 18.4. Using Google Colab
    • 18.5. Selecting Servers and GPUs
    • 18.6. Contributing to This Book
    • 18.7. d2l API Document

