- VGG16-like deep models are evaluated.
No | Conditions | Min of val_loss | Max of val_accuracy | Score |
---|---|---|---|---|
Ref | filters=256 | 0.02154 (epochs=60) | 0.99512 (epochs=58) | 0.99532 (epochs=60) |
Ref | CNN1p/00 | | | 0.99553 (soft) |
00 | n_layers=2, filters=128 | | | 0.99435 (epochs=30) |
01 | n_layers=2, filters=128 | 0.02382 (epochs=46) | 0.99488 (epochs=38) | 0.99285 (epochs=46) |
- Same condition as 00, but `loss` is monitored to save the model instead of `val_loss`.
- Same condition as 01, and ensemble training is used.
- 00
  - epochs=30 ; 0.99435
- 01
  - epochs=46 ; 0.99285
- 02
  - soft ensemble ; 0.99510
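The soft-ensemble score above is presumably obtained by averaging the class probabilities predicted by the individual models. A minimal sketch of that averaging step (the function name and the example probability vectors are illustrative, not from the experiments):

```python
def soft_ensemble(prob_lists):
    """Soft voting for one sample: average the per-class probability
    vectors produced by several models, then return the index of the
    class with the highest mean probability."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    mean = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=mean.__getitem__)

# The two models disagree on the argmax (0 vs 1), but the averaged
# probabilities favour class 0.
print(soft_ensemble([[0.6, 0.4, 0.0], [0.35, 0.45, 0.2]]))  # → 0
```

Averaging probabilities (soft voting) usually beats majority voting on the argmax (hard voting) because it keeps each model's confidence information.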
- `val_loss` is not stable, so it seems better to monitor `loss` instead of `val_loss` when saving the model.
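The save-on-best behaviour being discussed is what Keras' `ModelCheckpoint(monitor=..., save_best_only=True)` provides. A dependency-free sketch of that logic (the `BestCheckpoint` class and the loss values are hypothetical, for illustration only):

```python
class BestCheckpoint:
    """Minimal sketch of save-on-best checkpointing: remember the epoch
    whose monitored metric is the lowest seen so far."""

    def __init__(self, monitor="loss"):
        self.monitor = monitor
        self.best = float("inf")
        self.best_epoch = None

    def on_epoch_end(self, epoch, logs):
        value = logs[self.monitor]
        if value < self.best:
            self.best = value
            self.best_epoch = epoch  # a real callback would save the model here

# Smoothly decreasing loss vs noisy val_loss (made-up numbers):
history = [
    {"loss": 0.10, "val_loss": 0.030},
    {"loss": 0.06, "val_loss": 0.055},
    {"loss": 0.04, "val_loss": 0.024},
    {"loss": 0.03, "val_loss": 0.041},
]
ckpt = BestCheckpoint(monitor="loss")
for epoch, logs in enumerate(history, start=1):
    ckpt.on_epoch_end(epoch, logs)
print(ckpt.best_epoch)  # → 4: the last epoch wins when loss keeps falling
```

With `monitor="val_loss"` the same run would keep epoch 3 and discard the later epochs, which is exactly the instability the note above describes.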
- When epochs > 40, the model seems to over-fit.