TensorFlow for Deep Learning
- tensorflow
- numpy
- scipy
- scikit-learn
- matplotlib
-
- [lab01-1] linear regression
- [lab01-2] linear regression get_variable
- [lab01-3] linear regression using function
- [lab01-4] linear regression compute gradient
- [lab01-5] ridge regression
- [lab01-6] lasso regression
- [lab01-7] support vector regression
- [lab01-8] deep regression
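
Below is a minimal sketch of the kind of linear-regression graph the lab01 scripts build. It assumes the TensorFlow 1.x API used throughout these labs; the toy data, variable names, and learning rate are illustrative, not taken from the lab code.

```python
import numpy as np
import tensorflow as tf

# toy data: y = 2x + 1 plus a little noise
x_data = np.random.rand(100, 1).astype(np.float32)
y_data = 2.0 * x_data + 1.0 + 0.1 * np.random.randn(100, 1).astype(np.float32)

x = tf.placeholder(tf.float32, shape=[None, 1])
y = tf.placeholder(tf.float32, shape=[None, 1])

# get_variable-style parameters, as in lab01-2
w = tf.get_variable("w", shape=[1, 1], initializer=tf.random_normal_initializer())
b = tf.get_variable("b", shape=[1], initializer=tf.zeros_initializer())

pred = tf.matmul(x, w) + b
loss = tf.reduce_mean(tf.square(pred - y))                    # mean squared error
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(200):
        _, cur_loss = sess.run([train_op, loss], feed_dict={x: x_data, y: y_data})
    print(sess.run([w, b]))    # should approach [[2.0]] and [1.0]
```

The ridge and lasso variants only change the loss term, e.g. adding `0.01 * tf.nn.l2_loss(w)` or `0.01 * tf.reduce_sum(tf.abs(w))` to the squared error.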
-
- [lab02-1] classifier logistic regression
- [lab02-2] classifier softmax regression
- [lab02-3] classifier support vector machine
- [lab02-4] classifier multi class softmax regression
- [lab02-5] classifier multi class svm
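
A rough sketch of a softmax-regression classifier in the spirit of lab02-2 and lab02-4 (again assuming TensorFlow 1.x; the random toy batch and hyperparameters are made up for illustration):

```python
import numpy as np
import tensorflow as tf

n_features, n_classes = 4, 3
x = tf.placeholder(tf.float32, [None, n_features])
y = tf.placeholder(tf.int64, [None])                          # integer class labels

w = tf.get_variable("w_cls", [n_features, n_classes])
b = tf.get_variable("b_cls", [n_classes], initializer=tf.zeros_initializer())

logits = tf.matmul(x, w) + b
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(logits, 1), y), tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x_batch = np.random.rand(32, n_features).astype(np.float32)
    y_batch = np.random.randint(0, n_classes, size=32)
    for _ in range(100):
        sess.run(train_op, feed_dict={x: x_batch, y: y_batch})
    print(sess.run(accuracy, feed_dict={x: x_batch, y: y_batch}))
```

Logistic regression is the two-class special case (sigmoid output with a cross-entropy loss), and the SVM labs replace the cross-entropy with a hinge-style loss over the same logits.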
-
- [lab03-1] tensorboard basic usages
- [lab03-2] tensorboard var scope
- [lab03-3] tensorboard summary
- [lab03-4] tensorboard device
- [lab03-5] tensorboard many models
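
The lab03 scripts revolve around writing events for TensorBoard. A compact sketch of the usual pattern (TensorFlow 1.x assumed; the `./logs` directory and the toy loss are placeholders):

```python
import tensorflow as tf

with tf.variable_scope("toy_model"):              # scopes group nodes in the Graphs tab
    w = tf.get_variable("w", shape=[1], initializer=tf.zeros_initializer())
    loss = tf.reduce_mean(tf.square(w - 3.0))

tf.summary.scalar("loss", loss)                   # curve in the Scalars tab
tf.summary.histogram("w", w)                      # distribution in the Histograms tab
merged = tf.summary.merge_all()
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    writer = tf.summary.FileWriter("./logs", sess.graph)
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        summary, _ = sess.run([merged, train_op])
        writer.add_summary(summary, step)
    writer.close()
# then inspect with: tensorboard --logdir=./logs
```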
-
- [lab04-1] data manipulation load csv
- [lab04-2] data manipulation npy
- [lab04-3] data manipulation train test validation
- [lab04-4] data manipulation minibatch
- [lab04-5] data manipulation tfrecord write1
- [lab04-6] data manipulation tfrecord read1
- [lab04-7] data manipulation tfrecord write2
- [lab04-8] data manipulation tfrecord read2
- [lab04-9] data manipulation queue
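
As a rough idea of the lab04 TFRecord round trip, here is a write-then-read sketch. It assumes TensorFlow 1.x with the `tf.data` API; the file name `toy.tfrecord`, feature keys, and shapes are invented for the example (the queue-based lab predates `tf.data` and uses `tf.TFRecordReader` with input queues instead):

```python
import numpy as np
import tensorflow as tf

# write: serialize (feature, label) pairs into a TFRecord file
with tf.python_io.TFRecordWriter("toy.tfrecord") as writer:
    for i in range(10):
        feat = np.random.rand(4).astype(np.float32)
        example = tf.train.Example(features=tf.train.Features(feature={
            "x": tf.train.Feature(float_list=tf.train.FloatList(value=feat.tolist())),
            "y": tf.train.Feature(int64_list=tf.train.Int64List(value=[i % 2])),
        }))
        writer.write(example.SerializeToString())

# read: parse the records back, shuffle, and batch them
def parse_fn(serialized):
    parsed = tf.parse_single_example(serialized, features={
        "x": tf.FixedLenFeature([4], tf.float32),
        "y": tf.FixedLenFeature([], tf.int64),
    })
    return parsed["x"], parsed["y"]

dataset = tf.data.TFRecordDataset("toy.tfrecord").map(parse_fn).shuffle(10).batch(4)
x_batch, y_batch = dataset.make_one_shot_iterator().get_next()

with tf.Session() as sess:
    print(sess.run([x_batch, y_batch]))
```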
-
- [lab05-1] activation sigmoid
- [lab05-2] activation tanh
- [lab05-3] activation relu
- [lab05-4] activation leaky relu
- [lab05-5] activation relu with xavier
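
The lab05 scripts mostly differ in which nonlinearity they plug into the same hidden layer. A sketch of that pattern, including the Xavier-initialized variant (TensorFlow 1.x assumed; layer sizes and the helper name are illustrative):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 8])

def hidden(inputs, n_out, act, name):
    # Xavier (Glorot) initialization keeps activation variance roughly constant across layers
    w = tf.get_variable(name + "_w", [int(inputs.get_shape()[1]), n_out],
                        initializer=tf.contrib.layers.xavier_initializer())
    b = tf.get_variable(name + "_b", [n_out], initializer=tf.zeros_initializer())
    return act(tf.matmul(inputs, w) + b)

h_sigmoid = hidden(x, 16, tf.nn.sigmoid, "sig")
h_tanh    = hidden(x, 16, tf.nn.tanh, "tanh")
h_relu    = hidden(x, 16, tf.nn.relu, "relu")
h_lrelu   = hidden(x, 16, lambda z: tf.nn.leaky_relu(z, alpha=0.2), "lrelu")
```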
-
- [lab06-1] optimizer gradient descent
- [lab06-2] optimizer momentum
- [lab06-3] optimizer adadelta
- [lab06-4] optimizer adagrad
- [lab06-5] optimizer rmsprop
- [lab06-6] optimizer adam
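
All the lab06 optimizers are drop-in replacements in `tf.train`; only the constructor changes. A sketch of the graph-side difference (TensorFlow 1.x assumed; the learning rates below are arbitrary defaults for a toy loss, not the labs' settings):

```python
import tensorflow as tf

w = tf.get_variable("w_opt", shape=[1], initializer=tf.constant_initializer(5.0))
loss = tf.reduce_mean(tf.square(w - 1.0))

optimizers = {
    "sgd":      tf.train.GradientDescentOptimizer(learning_rate=0.1),
    "momentum": tf.train.MomentumOptimizer(learning_rate=0.1, momentum=0.9),
    "adadelta": tf.train.AdadeltaOptimizer(learning_rate=1.0),
    "adagrad":  tf.train.AdagradOptimizer(learning_rate=0.1),
    "rmsprop":  tf.train.RMSPropOptimizer(learning_rate=0.01),
    "adam":     tf.train.AdamOptimizer(learning_rate=0.01),
}
train_ops = {name: opt.minimize(loss) for name, opt in optimizers.items()}
```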
-
- [lab07-1] l1 regularization
- [lab07-2] l2 regularization
- [lab07-3] dropout
- [lab07-4] batch normalization
- [lab07-5] batch normalization and dropout
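
A sketch of how the lab07 regularizers attach to a single dense layer (TensorFlow 1.x assumed; the layer sizes and penalty coefficients are illustrative):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 32])
y = tf.placeholder(tf.float32, [None, 64])
is_training = tf.placeholder(tf.bool, [])
keep_prob = tf.placeholder(tf.float32, [])

w = tf.get_variable("w_reg", [32, 64], initializer=tf.contrib.layers.xavier_initializer())
h = tf.matmul(x, w)
h = tf.layers.batch_normalization(h, training=is_training)   # normalize pre-activations per batch
h = tf.nn.relu(h)
h = tf.nn.dropout(h, keep_prob=keep_prob)                    # randomly zero units at train time

l1_penalty = tf.reduce_sum(tf.abs(w))     # encourages sparse weights (lab07-1)
l2_penalty = tf.nn.l2_loss(w)             # penalizes large weights (lab07-2)
loss = tf.reduce_mean(tf.square(h - y)) + 1e-3 * l2_penalty + 1e-4 * l1_penalty

# batch norm keeps its moving averages in UPDATE_OPS; run them together with the train step
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```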
-
- [lab08-0] mnist and cifar
- [lab08-1] cnn mnist base
- [lab08-2] cnn mnist learning rate decay
- [lab08-3] cnn mnist dropout
- [lab08-4] cnn mnist batch normalization
- [lab08-5] cnn mnist batch normalization and dropout
- [lab08-6] cnn mnist svm loss
- [lab08-7] cnn cifar base
- [lab08-8] cnn cifar batch normalization and dropout1
- [lab08-9] cnn cifar batch normalization and dropout2
- [lab08-X] cnn cifar batch normalization and dropout3
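
For lab08, the baseline CNN is roughly a couple of conv/pool stages followed by dense layers. A sketch for MNIST-shaped input using `tf.layers` (TensorFlow 1.x assumed; filter counts and the dropout rate are placeholders, not the labs' exact configuration):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool, [])

conv1 = tf.layers.conv2d(x, filters=32, kernel_size=3, padding="same", activation=tf.nn.relu)
pool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2)
conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=3, padding="same", activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=2)

flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
fc = tf.layers.dense(flat, 256, activation=tf.nn.relu)
fc = tf.layers.dropout(fc, rate=0.5, training=is_training)   # active only while training
logits = tf.layers.dense(fc, 10)

loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(logits, 1), y), tf.float32))
```

The learning-rate-decay variant would typically swap the fixed `1e-3` for `tf.train.exponential_decay` driven by a global step, and the batch-norm variants insert `tf.layers.batch_normalization` before the activations, as in the lab07 sketch above.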
-
- [lab09-1] rnn sequence labeling base
- [lab09-2] rnn sequence labeling LSTM
- [lab09-3] rnn sequence labeling peephole
- [lab09-4] rnn sequence labeling gradient clipping
- [lab09-5] rnn sequence labeling gradient normalization
- [lab09-6] rnn sequence labeling dropout
- [lab09-7] rnn sequence labeling stacked LSTM
- [lab09-8] rnn sequence labeling deep LSTM
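
A sketch of the lab09 sequence-labeling setup: an LSTM over time steps, per-step logits, and global-norm gradient clipping (TensorFlow 1.x assumed; the sequence length, sizes, and clip norm are illustrative):

```python
import tensorflow as tf

n_steps, n_input, n_hidden, n_classes = 20, 8, 64, 5
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.int64, [None, n_steps])                # one label per time step

cell = tf.nn.rnn_cell.LSTMCell(n_hidden)
outputs, _ = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)    # [batch, n_steps, n_hidden]
logits = tf.layers.dense(outputs, n_classes)                 # per-step class scores

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))

# clip the global gradient norm before applying updates (lab09-4 style)
optimizer = tf.train.AdamOptimizer(1e-3)
grads, variables = zip(*optimizer.compute_gradients(loss))
grads, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)
train_op = optimizer.apply_gradients(zip(grads, variables))
```

The peephole variant passes `use_peepholes=True` to `LSTMCell`, and the stacked/deep variants wrap several cells in `tf.nn.rnn_cell.MultiRNNCell` before handing them to `dynamic_rnn`.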
-
- [lab10-1] model save
- [lab10-2] model restore
- [lab10-3] model flags
- [lab10-4] model class
- [lab10-5] model decorator
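
The lab10 save/restore cycle boils down to `tf.train.Saver`. A minimal sketch (TensorFlow 1.x assumed; the checkpoint path and variable are placeholders):

```python
import tensorflow as tf

w = tf.get_variable("w_save", shape=[2], initializer=tf.constant_initializer([1.0, 2.0]))
saver = tf.train.Saver()

# save the current variable values to a checkpoint
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    save_path = saver.save(sess, "./model.ckpt")

# restore them into a fresh session without re-running the initializer
with tf.Session() as sess:
    saver.restore(sess, save_path)
    print(sess.run(w))
```

Command-line configuration of the kind the flags lab covers is usually handled with `tf.app.flags` in TF 1.x, and the class and decorator labs presumably follow the model-structuring pattern popularized on danijar.com (referenced below).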
- Danijar Hafner, https://danijar.com
- Hwalsuk Lee, https://github.com/hwalsuklee
- Sunghun Kim, https://github.com/hunkim/DeepLearningZeroToAll
- Go Deep, http://www.godeep.ml/
No | Name | Author | Year |
---|---|---|---|
1 | TensorFlow for Machine Intelligence: A Hands-On Introduction to Learning Algorithms | Sam Abrahams, Danijar Hafner, Erik Erwitt, Ariel Scarpinelli | 2016 |
2 | First Contact with TensorFlow | Jordi Torres | 2016 |
3 | Learning TensorFlow: A Guide to Building Deep Learning Systems | Tom Hope, Yehezkel S. Resheff, Itay Lieder | 2017 |
4 | TensorFlow Machine Learning Cookbook | Nick McClure | 2017 |