Lecture 8
AsyDynamics edited this page May 29, 2018 · 3 revisions
- Optimization: SGD with momentum, Nesterov momentum, RMSProp, Adam
- Regularization: Dropout
- Other recap: weights, loss functions
- NumPy, problems: can't run on GPU; you have to derive and implement the gradients yourself
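A minimal NumPy sketch of why this is tedious (sizes are illustrative): a two-layer ReLU network where every gradient in the backward pass has to be derived and coded by hand, and everything runs on the CPU.

```python
import numpy as np

np.random.seed(0)
N, D_in, H, D_out = 64, 100, 50, 10   # batch size, layer dimensions
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

lr = 1e-6
losses = []
for t in range(100):
    # forward pass
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)
    loss = np.square(y_pred - y).sum()   # L2 loss
    losses.append(loss)

    # backward pass: every gradient derived and written out by hand
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu * (h > 0)       # ReLU gate
    grad_w1 = x.T.dot(grad_h)

    # manual SGD update
    w1 -= lr * grad_w1
    w2 -= lr * grad_w2
```

Changing the architecture means re-deriving all of the backward-pass math, which is exactly the pain point the frameworks below remove.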
- TensorFlow: can call a function to compute gradients automatically; can choose whether ops run on GPU or CPU
- PyTorch: builds the computational graph dynamically; autograd computes gradients automatically
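A minimal PyTorch sketch of the same two-layer network (assuming the standard eager-mode API; sizes are illustrative): autograd records the forward pass as it runs, so `loss.backward()` replaces all the hand-derived gradient code.

```python
import torch

torch.manual_seed(0)
N, D_in, H, D_out = 64, 100, 50, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

lr = 1e-6
losses = []
for t in range(100):
    y_pred = x.mm(w1).clamp(min=0).mm(w2)  # forward pass (ReLU via clamp)
    loss = (y_pred - y).pow(2).sum()       # L2 loss
    losses.append(loss.item())

    loss.backward()                        # autograd fills w1.grad, w2.grad
    with torch.no_grad():
        w1 -= lr * w1.grad                 # manual SGD step
        w2 -= lr * w2.grad
        w1.grad.zero_()                    # clear gradients for next iteration
        w2.grad.zero_()
```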
- TensorFlow (1.x static-graph) workflow:
  - define the computational graph
  - create placeholders for the inputs
  - forward pass: compute the prediction and the loss, e.g. the L2 distance between y and y_predicted
  - add ops that calculate the gradients
  - after building the graph, enter a session to run it
  - create numpy arrays to fill the placeholders above
  - run the graph and get the loss
  - to train the network:
    - change the weights from placeholders to variables so they persist inside the graph between runs
    - assign updated weights: weight -= learning_rate * gradient
    - or use a predefined optimizer, loss, and initializer instead
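The steps above can be sketched as follows (a minimal example, assuming TensorFlow's `tf.compat.v1` API is available for the 1.x-style placeholder/session workflow; sizes and the learning rate are illustrative):

```python
import numpy as np
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()   # use the static-graph execution model

N, D = 64, 100
# build the graph: placeholders for data, a variable for the weights
x = tf1.placeholder(tf.float32, shape=(N, D))
y = tf1.placeholder(tf.float32, shape=(N, D))
w = tf1.get_variable("w", shape=(D, D),
                     initializer=tf1.random_normal_initializer(seed=0))

y_pred = tf1.matmul(x, w)
loss = tf1.reduce_sum((y_pred - y) ** 2)   # L2 loss
# predefined optimizer adds the gradient and update ops to the graph
train_op = tf1.train.GradientDescentOptimizer(1e-5).minimize(loss)

losses = []
with tf1.Session() as sess:                # enter a session to run the graph
    sess.run(tf1.global_variables_initializer())
    # numpy arrays fill the placeholders on each run
    x_np = np.random.randn(N, D).astype(np.float32)
    y_np = np.random.randn(N, D).astype(np.float32)
    for t in range(20):
        loss_val, _ = sess.run([loss, train_op], feed_dict={x: x_np, y: y_np})
        losses.append(loss_val)
```

Note the two phases: everything before `Session()` only *describes* computation; nothing is executed until `sess.run` is called with data for the placeholders.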
- High-level wrapper (e.g. Keras) workflow:
  - define the model object as a sequence of layers
  - define the optimizer
  - build (compile) the model, specifying the loss function
  - train the model
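A minimal sketch of that high-level workflow (assuming `tf.keras`; the data here is random and the sizes are illustrative): the whole train loop collapses into `compile` plus `fit`.

```python
import numpy as np
from tensorflow import keras

np.random.seed(0)
x = np.random.randn(64, 100).astype("float32")
y = np.random.randn(64, 10).astype("float32")

# define the model object as a sequence of layers
model = keras.Sequential([
    keras.Input(shape=(100,)),
    keras.layers.Dense(50, activation="relu"),
    keras.layers.Dense(10),
])

# build the model: pick an optimizer and specify the loss function
model.compile(optimizer=keras.optimizers.SGD(learning_rate=1e-3),
              loss="mse")

# train the model; fit() runs the forward/backward passes internally
history = model.fit(x, y, epochs=20, batch_size=64, verbose=0)
```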