----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 32, 24, 24]             832
         MaxPool2d-2           [-1, 32, 12, 12]               0
              ReLU-3           [-1, 32, 12, 12]               0
            Conv2d-4           [-1, 64, 10, 10]          18,496
         MaxPool2d-5             [-1, 64, 5, 5]               0
              ReLU-6             [-1, 64, 5, 5]               0
            Conv2d-7            [-1, 128, 4, 4]          32,896
         MaxPool2d-8            [-1, 128, 2, 2]               0
              ReLU-9            [-1, 128, 2, 2]               0
           Conv2d-10            [-1, 128, 1, 1]          65,664
          Dropout-11            [-1, 128, 1, 1]               0
           Conv2d-12           [-1, 32, 24, 24]             832
        MaxPool2d-13           [-1, 32, 12, 12]               0
             ReLU-14           [-1, 32, 12, 12]               0
           Conv2d-15           [-1, 64, 10, 10]          18,496
        MaxPool2d-16             [-1, 64, 5, 5]               0
             ReLU-17             [-1, 64, 5, 5]               0
           Conv2d-18            [-1, 128, 4, 4]          32,896
        MaxPool2d-19            [-1, 128, 2, 2]               0
             ReLU-20            [-1, 128, 2, 2]               0
           Conv2d-21            [-1, 128, 1, 1]          65,664
          Dropout-22            [-1, 128, 1, 1]               0
           Conv2d-23           [-1, 32, 24, 24]             832
        MaxPool2d-24           [-1, 32, 12, 12]               0
             ReLU-25           [-1, 32, 12, 12]               0
           Conv2d-26           [-1, 64, 10, 10]          18,496
        MaxPool2d-27             [-1, 64, 5, 5]               0
             ReLU-28             [-1, 64, 5, 5]               0
           Conv2d-29            [-1, 128, 4, 4]          32,896
        MaxPool2d-30            [-1, 128, 2, 2]               0
             ReLU-31            [-1, 128, 2, 2]               0
           Conv2d-32            [-1, 128, 1, 1]          65,664
          Dropout-33            [-1, 128, 1, 1]               0
================================================================
Total params: 353,664
Trainable params: 353,664
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 1838.27
Forward/backward pass size (MB): 0.93
Params size (MB): 1.35
Estimated Total Size (MB): 1840.54
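For reference, below is a minimal PyTorch sketch (not the repository's exact code) of a network that reproduces the layer shapes and parameter counts in the summary above, assuming 1×28×28 inputs such as MNIST and the torchsummary package; the names `EmbeddingNet` and `TripletNet` are illustrative. A single branch has 117,888 parameters (832 + 18,496 + 32,896 + 65,664), and the reported total of 353,664 is three times that because the weight-shared branch is listed once per input (anchor, positive, negative).

```python
import torch.nn as nn
from torchsummary import summary


class EmbeddingNet(nn.Module):
    """One branch; reproduces the Conv/MaxPool/ReLU/Dropout stack in the summary."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5),     # 1x28x28 -> 32x24x24,   832 params
            nn.MaxPool2d(2),                     # -> 32x12x12
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3),    # -> 64x10x10,        18,496 params
            nn.MaxPool2d(2),                     # -> 64x5x5
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=2),   # -> 128x4x4,         32,896 params
            nn.MaxPool2d(2),                     # -> 128x2x2
            nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=2),  # -> 128x1x1,         65,664 params
            nn.Dropout(),
        )

    def forward(self, x):
        return self.features(x).flatten(1)       # 128-dim embedding per image


class TripletNet(nn.Module):
    """Applies the same weight-shared branch to anchor, positive, and negative."""

    def __init__(self, embedding_net):
        super().__init__()
        self.embedding_net = embedding_net

    def forward(self, anchor, positive, negative):
        return (self.embedding_net(anchor),
                self.embedding_net(positive),
                self.embedding_net(negative))


if __name__ == "__main__":
    model = TripletNet(EmbeddingNet())
    # Three inputs, so the shared branch appears three times in the printed summary.
    summary(model, [(1, 28, 28), (1, 28, 28), (1, 28, 28)], device="cpu")
```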
TripletNet's 2D feature representation space (epoch 29)
- Seeing this result, I came to understand the TripletNet's purpose more clearly: it learns an embedding in which samples of the same class are pulled close together while samples of different classes are pushed apart.
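To make that purpose concrete, here is a hedged sketch of a training step using PyTorch's built-in `nn.TripletMarginLoss`. The margin value, the optimizer handling, and the `train_step` helper are assumptions for illustration, not code taken from this repository.

```python
import torch.nn as nn

# Hypothetical training step; `model` is a TripletNet as sketched above.
triplet_loss = nn.TripletMarginLoss(margin=1.0)  # margin value is an assumption


def train_step(model, optimizer, anchor, positive, negative):
    model.train()
    optimizer.zero_grad()
    emb_a, emb_p, emb_n = model(anchor, positive, negative)
    # Pull anchor/positive embeddings together; push anchor/negative
    # embeddings at least `margin` apart (L2 distance by default).
    loss = triplet_loss(emb_a, emb_p, emb_n)
    loss.backward()
    optimizer.step()
    return loss.item()
```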
AutoEncoder's 2D feature representation space (epoch 29)
Reconstruction images from the AutoEncoder
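As a rough sketch of how 2D feature-space plots like the ones referenced above can be produced: compute the learned embeddings for a dataset, project them down to two dimensions, and scatter-plot them colored by class label. The sketch below uses PCA from scikit-learn for the projection and a hypothetical `plot_embedding_space` helper; the original figures may instead come from an embedding head with two outputs or a different projection such as t-SNE.

```python
import matplotlib.pyplot as plt
import torch
from sklearn.decomposition import PCA


@torch.no_grad()
def plot_embedding_space(embedding_net, loader, device="cpu"):
    """Scatter-plot 2D projections of the embeddings, colored by class label."""
    embedding_net.eval()
    feats, labels = [], []
    for images, targets in loader:
        feats.append(embedding_net(images.to(device)).cpu())
        labels.append(targets)
    feats = torch.cat(feats).numpy()
    labels = torch.cat(labels).numpy()
    # Project the 128-dim embeddings to 2D (PCA here; t-SNE is another option).
    coords = PCA(n_components=2).fit_transform(feats)
    plt.figure(figsize=(6, 6))
    scatter = plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap="tab10", s=4)
    plt.legend(*scatter.legend_elements(), title="class", loc="best")
    plt.title("2D feature representation space")
    plt.show()
```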