Layer | Description |
---|---|
Elu | Exponential linear unit |
Identity | Output the input tensor |
LeakyRelu | Leaky ReLU |
LogSoftmax | Logarithm of the softmax function |
Relu | Rectified linear unit |
Softmax | Softmax function |
The Elu layer is similar to Relu, but with negative values that shift the mean of the activation toward 0:

ELU(x) = x if x > 0, α (exp(x) − 1) otherwise

α should be non-negative. See:
Djork-Arne Clevert, Thomas Unterthiner, and Sepp Hochreiter. "Fast and accurate deep network learning by exponential linear units (ELUs)." arXiv preprint arXiv:1511.07289 (2015).
Arguments:
- alpha (double, optional) Default: 1. Should be >= 0.
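As an illustration of the math above (a minimal sketch, not the library's implementation; the function name `elu` and the example inputs are ours):

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU(x) = x for x > 0, alpha * (exp(x) - 1) otherwise.
    # expm1 and the min() clamp keep the exponential numerically safe.
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * np.expm1(np.minimum(x, 0.0)))

print(elu(np.array([-3.0, -0.5, 0.0, 2.0])))
# [-0.95021293 -0.39346934  0.          2.        ]
```

Large negative inputs saturate at −alpha, which is what pulls the mean of the activations toward zero.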
The Identity layer outputs the input tensor.
This layer is very cheap since it just involves setting up tensor views.
Arguments: None
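"Setting up tensor views" can be made concrete with a NumPy analogy (an illustration of the idea, not the library's internals): a view aliases the same buffer instead of copying it.

```python
import numpy as np

x = np.arange(6.0).reshape(2, 3)
y = x.view()        # no data copied; y aliases x's buffer
y[0, 0] = 42.0
print(x[0, 0])      # 42.0 -- writes through the view are visible in x
```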
LeakyRelu modifies the Relu function to allow for a small, non-zero gradient when the unit is saturated and not active:

LeakyReLU(x) = x if x >= 0, negative_slope · x otherwise
See:
Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. "Rectifier nonlinearities improve neural network acoustic models." In Proc. ICML, vol. 30, no. 1, p. 3. 2013.
Arguments:
- negative_slope (double, optional) Default: 0.01
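A minimal NumPy sketch of the formula above, using the same default for `negative_slope` as the argument listed here (illustrative only, not the library's code):

```python
import numpy as np

def leaky_relu(x, negative_slope=0.01):
    # x for x >= 0, negative_slope * x otherwise
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, x, negative_slope * x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # [-0.02  0.    3.  ]
```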
LogSoftmax is the logarithm of the softmax function:

LogSoftmax(x)_i = x_i − log Σ_j exp(x_j)
Arguments: None
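The identity above is usually evaluated in a numerically stable form by shifting by the maximum before exponentiating; here is a sketch of that standard trick (not necessarily how this library computes it):

```python
import numpy as np

def log_softmax(x):
    x = np.asarray(x, dtype=float)
    shifted = x - x.max()  # shifting by max(x) avoids overflow in exp
    return shifted - np.log(np.exp(shifted).sum())

p = np.exp(log_softmax(np.array([1.0, 2.0, 3.0])))
print(p.sum())  # 1.0 -- exponentiating recovers the softmax probabilities
```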
The Relu layer outputs the input directly if it is positive and outputs zero otherwise:
ReLU(x) = max(x, 0)
Arguments: None
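The formula maps directly to one elementwise NumPy call (illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)   # elementwise max(x, 0)

print(relu(np.array([-1.5, 0.0, 2.0])))  # [0. 0. 2.]
```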
The Softmax layer turns a vector of K real values into a vector of K real values that sum to 1:

softmax(x)_i = exp(x_i) / Σ_j exp(x_j)
Arguments:
- softmax_mode (string, optional) Options: instance (default), channel
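A numerically stable sketch of the formula above; the `axis` parameter loosely mirrors the two modes (our reading, not confirmed by the source: instance plausibly normalizes over the whole flattened sample, channel over the channel dimension):

```python
import numpy as np

def softmax(x, axis=-1):
    x = np.asarray(x, dtype=float)
    shifted = x - x.max(axis=axis, keepdims=True)  # stability shift
    e = np.exp(shifted)
    return e / e.sum(axis=axis, keepdims=True)

x = np.array([[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]])
print(softmax(x, axis=-1).sum(axis=-1))  # [1. 1.] -- each row sums to 1
```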