Merge branch 'master' of github.com:NLeSC/mcfly-tutorial
dafnevk committed Mar 8, 2017
2 parents 2602cfd + 3739b01 commit 5888c5c
Showing 1 changed file with 8 additions and 6 deletions.
14 changes: 8 additions & 6 deletions tutorial/cheatsheet.md
@@ -5,16 +5,18 @@ Detailed documentation can be found in the mcfly [wiki](https://github.com/NLeSC
Notebook tutorials can be found in the mcfly-tutorial [repository](https://github.com/NLeSC/mcfly-tutorial)

### Jargon terms
-* [**accuracy**](https://en.wikipedia.org/wiki/Evaluation_of_binary_classifiers): proportion of correctly classified items on all items
-* [**convolutional layer**](http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/): type of layer where a convolutional filter is slided over the input data
-* **epoch**: One full pass through the dataset in the training algorithm
-* **CNN**: Convolutional Neural Network, a deep learning network that consists mostly of convolutional layers.
+* [**accuracy**](https://en.wikipedia.org/wiki/Evaluation_of_binary_classifiers): proportion of correctly classified samples among all samples in a dataset
+* **convolutional filter**: a set of weights that is applied to neighbouring data points (see the sketch after this diff)
+* [**convolutional layer**](http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/): type of network layer in which a convolutional filter is slid over the input data
+* **CNN**: Convolutional Neural Network, a deep learning network that includes convolutional layers, often combined with dense (fully connected) layers.
+* **DeepConvLSTM**: A deep learning network that consists of convolutional layers and LSTM layers
-* [**gradient descent**](http://cs231n.github.io/optimization-1/): the algorithm that is used to learn the weights from training data. In each step of the gradient descent algorithm, the weights are adjusted with a step in the direction of the gradient ('slope') .
-* **hyperparameters**: In mcfly, the hyperparameters are the architectural choices of the model (number of layers, lstm or convolutional layers, etc) and the learning rate and regulization rate.
+* **epoch**: One full pass through a dataset (all data points are seen once) in the process of training the weights of a network
+* [**gradient descent**](http://cs231n.github.io/optimization-1/): Algorithm used to find the optimal weights for the nodes in the network. The algorithm looks for the weights corresponding to a minimum of the classification loss. The search space can be interpreted as a landscape where the lowest point is the optimum, hence the term 'descent'. In each step of the gradient descent algorithm, the weights are adjusted with a step in the direction opposite to the gradient ('slope'), as sketched after this diff.
+* **hyperparameters**: In mcfly, the hyperparameters are the architectural choices of the model (number of layers, LSTM or convolutional layers, etc.) together with the learning rate and regularization rate (see the mcfly sketch below)
* **layer**: A deep learning network consists of multiple layers. The more layers, the deeper your network.
* **learning rate**: the size of the step taken in each iteration of the gradient descent algorithm
* [**LSTM layer**](http://colah.github.io/posts/2015-08-Understanding-LSTMs/): Long Short-Term Memory layer. This is a special type of recurrent layer that takes a sequence as input and outputs a sequence.
* **Loss**: An indicator of classification error. In mcfly we use [categorical cross entropy](http://cs231n.github.io/linear-classify/#softmax), computed as sketched below.
* **regularization rate**: how strongly the [L2 regularization](http://cs231n.github.io/neural-networks-2/#reg) is applied to avoid overfitting on the training data.
* **[validation set](https://en.wikipedia.org/wiki/Test_set#Validation_set)**: Part of the data that is kept apart to evaluate the performance of your model and to choose hyperparameters

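A few sketches to make the new jargon entries concrete. First, the **convolutional filter**: a minimal NumPy sketch (illustrative only, not mcfly's own code) of sliding a filter of weights over a one-dimensional signal, which is what a convolutional layer does with its input:

```python
import numpy as np

signal = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])  # toy 1D input, e.g. one sensor channel
weights = np.array([0.25, 0.5, 0.25])                    # a convolutional filter of width 3

# Slide the filter over the signal: at each position, take the
# weighted sum of the neighbouring data points.
output = np.array([np.dot(weights, signal[i:i + len(weights)])
                   for i in range(len(signal) - len(weights) + 1)])
print(output)  # -> [1. 2. 2.5 2. 1.]
```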
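Next, **gradient descent** and the **learning rate**: a minimal sketch on a toy one-weight loss (a hand-rolled loop for illustration, not the optimizer Keras uses under the hood):

```python
# Minimise the toy loss L(w) = (w - 3)^2, whose gradient is dL/dw = 2 * (w - 3).
# The minimum (the 'lowest point in the landscape') lies at w = 3.
w = 0.0
learning_rate = 0.1  # the step size of each update

for step in range(50):
    gradient = 2 * (w - 3)
    w -= learning_rate * gradient  # step opposite to the gradient: 'descent'

print(round(w, 4))  # ~3.0: the weight has converged to the optimum
```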
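For the **hyperparameters** entry: in mcfly you do not usually pick these by hand; `modelgen.generate_models` samples them for a set of candidate models. The sketch below follows the tutorial notebooks from memory, so the exact argument names and return format are assumptions, not a verified API:

```python
from mcfly import modelgen

# Generate candidate models with randomly sampled hyperparameters
# (model type, number of layers, learning rate, regularization rate).
# NOTE: argument names and return format are assumptions based on the tutorial.
models = modelgen.generate_models(x_shape=(None, 512, 9),  # (samples, time steps, channels)
                                  number_of_classes=6,
                                  number_of_models=5)

for model, hyperparameters, model_type in models:
    print(model_type, hyperparameters)
```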
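Finally, the **Loss**: a minimal NumPy sketch of categorical cross entropy for a single sample (the quantity Keras averages over a batch when training mcfly models):

```python
import numpy as np

y_true = np.array([0.0, 1.0, 0.0])  # one-hot label: the true class is class 1
y_pred = np.array([0.1, 0.7, 0.2])  # predicted class probabilities (softmax output)

# Categorical cross entropy: -sum over classes of true * log(predicted).
# Only the predicted probability of the true class contributes.
loss = -np.sum(y_true * np.log(y_pred))
print(loss)  # -log(0.7), roughly 0.357
```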