tensorlayer.layers
Layer
Input
OneHot Word2vecEmbedding Embedding AverageEmbedding
Dense Dropout GaussianNoise DropconnectDense
UpSampling2d DownSampling2d
Conv1d Conv2d Conv3d DeConv2d DeConv3d DepthwiseConv2d SeparableConv1d SeparableConv2d DeformableConv2d GroupConv2d
PadLayer PoolLayer ZeroPad1d ZeroPad2d ZeroPad3d MaxPool1d MeanPool1d MaxPool2d MeanPool2d MaxPool3d MeanPool3d GlobalMaxPool1d GlobalMeanPool1d GlobalMaxPool2d GlobalMeanPool2d GlobalMaxPool3d GlobalMeanPool3d CornerPool2d
SubpixelConv1d SubpixelConv2d
SpatialTransformer2dAffine transformer batch_transformer
BatchNorm BatchNorm1d BatchNorm2d BatchNorm3d LocalResponseNorm InstanceNorm InstanceNorm1d InstanceNorm2d InstanceNorm3d LayerNorm GroupNorm SwitchNorm
RNN SimpleRNN GRURNN LSTMRNN BiRNN
retrieve_seq_length_op retrieve_seq_length_op2 retrieve_seq_length_op3 target_mask_op
Flatten Reshape Transpose Shuffle
Lambda
Concat Elementwise ElementwiseLambda
ExpandDims Tile
Stack UnStack
Sign Scale BinaryDense BinaryConv2d TernaryDense TernaryConv2d DorefaDense DorefaConv2d
PRelu PRelu6 PTRelu6
flatten_reshape initialize_rnn_state list_remove_repeat
Layer
Input
OneHot
Word2vecEmbedding
Embedding
AverageEmbedding
PRelu
PRelu6
PTRelu6
Conv1d
Conv2d
Conv3d
DeConv2d
DeConv3d
DeformableConv2d
DepthwiseConv2d
GroupConv2d
SeparableConv1d
SeparableConv2d
SubpixelConv1d
SubpixelConv2d
Dense
DropconnectDense
Dropout
ExpandDims
Tile
UpSampling2d
DownSampling2d
Lambda
ElementwiseLambda
Concat
Elementwise
GaussianNoise
BatchNorm
BatchNorm1d
BatchNorm2d
BatchNorm3d
LocalResponseNorm
InstanceNorm
InstanceNorm1d
InstanceNorm2d
InstanceNorm3d
LayerNorm
GroupNorm
SwitchNorm
Padding layers supporting arbitrary padding modes.
PadLayer
ZeroPad1d
ZeroPad2d
ZeroPad3d
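The "arbitrary modes" idea can be sketched framework-agnostically. The snippet below uses NumPy's `np.pad` purely for illustration; it is an assumption for the sketch, not TensorLayer's implementation, but it shows the same mode-selection behavior a generic `PadLayer` wraps:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])

# Constant (zero) padding: a border of zeros around the input.
p_const = np.pad(x, 1, mode="constant", constant_values=0)

# Reflection padding: the border mirrors the interior values instead.
p_reflect = np.pad(x, 1, mode="reflect")
```

The dedicated `ZeroPad*d` layers correspond to the constant-zero mode; `PadLayer` is the general entry point where the mode is a parameter.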
Pooling layers for any dimensionality and any pooling function.
PoolLayer
MaxPool1d
MeanPool1d
MaxPool2d
MeanPool2d
MaxPool3d
MeanPool3d
GlobalMaxPool1d
GlobalMeanPool1d
GlobalMaxPool2d
GlobalMeanPool2d
GlobalMaxPool3d
GlobalMeanPool3d
CornerPool2d
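"Any pooling function" means the reduction applied to each window is pluggable. A minimal NumPy sketch of that design (the `pool1d` helper below is hypothetical, written for illustration, not a TensorLayer function):

```python
import numpy as np

def pool1d(x, size, stride, fn=np.max):
    # Generic 1-D pooling: slide a window over x and apply any
    # reduction function (np.max, np.mean, ...) to each window.
    n = (x.shape[0] - size) // stride + 1
    return np.array([fn(x[i * stride : i * stride + size]) for i in range(n)])

x = np.array([1., 3., 2., 5., 4., 6.])
max_pooled = pool1d(x, size=2, stride=2)            # → [3., 5., 6.]
mean_pooled = pool1d(x, size=2, stride=2, fn=np.mean)  # → [2., 3.5, 5.]
```

The named layers (`MaxPool2d`, `MeanPool3d`, ...) fix the reduction and dimensionality; `PoolLayer` is the general form where both are parameters.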
This is an experimental API package for building quantized neural networks. At the moment it uses matrix multiplication rather than add/minus and bit-count operations, so these APIs will not speed up inference. For production, you can train a model with TensorLayer and deploy it in a customized C/C++ implementation (we may provide an extra C/C++ binary-net framework that can load models from TensorLayer).
Note that these experimental APIs may change in the future.
Sign
Scale
BinaryDense
BinaryConv2d
TernaryDense
TernaryConv2d
DorefaDense
DorefaConv2d
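The core idea behind the binary layers is to constrain weights to {-1, +1} with a sign function, which is what makes bit-count arithmetic possible on specialized hardware. A plain-NumPy sketch of that quantization (not TensorLayer's implementation, and omitting the straight-through gradient estimator used during training):

```python
import numpy as np

def binarize(w):
    # Sign-binarize weights to {-1., +1.}; zero is mapped to +1.
    return np.where(w >= 0, 1.0, -1.0)

# Toy dense forward pass with binarized weights
# (activations stay full-precision here, as in BinaryDense).
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 3))   # full-precision weights
x = rng.standard_normal((2, 4))   # batch of 2 inputs
y = x @ binarize(w)               # forward pass uses the binarized copy
```

As the paragraph above notes, this formulation still runs as ordinary matrix multiplication, which is why these layers do not accelerate inference by themselves.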
All recurrent layers can implement any type of RNN cell by feeding in a different cell function (LSTM, GRU, etc.).
RNN
SimpleRNN
GRURNN
LSTMRNN
BiRNN
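The design described above — one recurrence loop parameterized by a cell function — can be sketched in plain NumPy. The `simple_cell` and `rnn` helpers below are hypothetical illustrations of the pattern, not TensorLayer code:

```python
import numpy as np

def simple_cell(x_t, h, W_x, W_h, b):
    # Elman-style cell: h_new = tanh(x_t @ W_x + h @ W_h + b).
    return np.tanh(x_t @ W_x + h @ W_h + b)

def rnn(cell, xs, h0, params):
    # Generic recurrence: the same loop drives any cell function
    # (an LSTM or GRU cell would simply carry extra state).
    h = h0
    outputs = []
    for t in range(xs.shape[0]):
        h = cell(xs[t], h, *params)
        outputs.append(h)
    return np.stack(outputs), h

rng = np.random.default_rng(0)
T, batch, n_in, n_hidden = 5, 2, 3, 4
xs = rng.standard_normal((T, batch, n_in))       # [time, batch, features]
params = (rng.standard_normal((n_in, n_hidden)),
          rng.standard_normal((n_hidden, n_hidden)),
          np.zeros(n_hidden))
outputs, h_last = rnn(simple_cell, xs, np.zeros((batch, n_hidden)), params)
```

`SimpleRNN`, `GRURNN`, and `LSTMRNN` are convenience layers that pre-select the cell; `RNN` is the general form.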
These operations are usually used inside dynamic RNN layers; they compute the sequence lengths of padded batches in different situations and retrieve the last RNN outputs by indexing.
retrieve_seq_length_op
retrieve_seq_length_op2
retrieve_seq_length_op3
target_mask_op
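The underlying computation is simple: count the time steps of each sequence that differ from the padding value. A NumPy sketch of that idea (the `retrieve_seq_length` helper is written for illustration and assumes padding steps are all-pad rows at the end; it is not the library operation itself):

```python
import numpy as np

def retrieve_seq_length(data, pad_val=0):
    # A time step is "real" if any of its features differs from the pad value.
    nonpad = np.any(data != pad_val, axis=2)  # [batch, max_len] boolean mask
    return nonpad.sum(axis=1)                 # [batch] sequence lengths

batch = np.array([
    [[1, 2], [3, 4], [0, 0]],   # length 2
    [[5, 6], [0, 0], [0, 0]],   # length 1
])
lengths = retrieve_seq_length(batch)  # → array([2, 1])
```

Given these lengths, the last valid output of each sequence can be gathered as `outputs[np.arange(len(lengths)), lengths - 1]`.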
Flatten
Reshape
Transpose
Shuffle
SpatialTransformer2dAffine
transformer
batch_transformer
Stack
UnStack
flatten_reshape
initialize_rnn_state
list_remove_repeat