About training result #17

Closed
czla opened this issue May 22, 2019 · 2 comments


czla commented May 22, 2019

Hi there!
I trained SiamFCRes22 following the instructions, but the performance is much lower than yours (see below):

Model                         OTB2013 (AUC)
SiamFCRes22 checkpoint_e30    0.5981
SiamFCRes22 checkpoint_e50    0.5770
CIResNet22-FC (reported)      0.663

And my questions are:

  • According to the training details in the paper, you trained for 50 epochs, so is X-checkpoint_e50.pth the final model?
  • I used the default parameters (SiamFCRes22.yaml) to test; do I need to run Param-Tune to tune the parameters for my X-checkpoint_e50.pth?

Thanks a lot, and here is my training log:

2019-05-07 20:25:27,742 Namespace(cfg='../experiments/train/SiamFC.yaml', gpus='0', workers=32)
2019-05-07 20:25:27,743 {'CHECKPOINT_DIR': 'snapshot',
'GPUS': '0',
'OUTPUT_DIR': 'logs',
'PRINT_FREQ': 10,
'SIAMFC': {'DATASET': {'BLUR': 0,
'COLOR': 1,
'FLIP': 0,
'GOT10K': {'ANNOTATION': '/home/tjcv/dataset/SiamDW_trainset/GOT10K/train.json',
'PATH': '/home/tjcv/dataset/SiamDW_trainset/GOT10K/crop255'},
'ROTATION': 0,
'SCALE': 0.05,
'SHIFT': 4,
'VID': {'ANNOTATION': '/home/tjcv/dataset/SiamDW_trainset/VID/train.json',
'PATH': '/home/tjcv/dataset/SiamDW_trainset/VID/crop255'}},
'TEST': {'DATA': 'OTB2015',
'END_EPOCH': 50,
'MODEL': 'SiamFCIncep22',
'START_EPOCH': 30},
'TRAIN': {'BATCH': 32,
'END_EPOCH': 50,
'LR': 0.001,
'LR_END': 1e-07,
'LR_POLICY': 'log',
'MODEL': 'SiamFCRes22',
'MOMENTUM': 0.9,
'PAIRS': 600000,
'PRETRAIN': '../pretrain/CIResNet22_PRETRAIN.model',
'RESUME': False,
'SEARCH_SIZE': 255,
'START_EPOCH': 0,
'STRIDE': 8,
'TEMPLATE_SIZE': 127,
'WEIGHT_DECAY': 0.0001,
'WHICH_USE': 'VID'},
'TUNE': {'DATA': 'OTB2015',
'METHOD': 'GENE',
'MODEL': 'SiamFCIncep22'}},
'WORKERS': 32}
2019-05-07 20:25:30,937 trainable params:
2019-05-07 20:25:30,937 features.features.conv1.weight
2019-05-07 20:25:30,937 features.features.bn1.weight
2019-05-07 20:25:30,937 features.features.bn1.bias
2019-05-07 20:25:30,938 features.features.layer1.0.conv1.weight
2019-05-07 20:25:30,938 features.features.layer1.0.bn1.weight
2019-05-07 20:25:30,938 features.features.layer1.0.bn1.bias
2019-05-07 20:25:30,938 features.features.layer1.0.conv2.weight
2019-05-07 20:25:30,938 features.features.layer1.0.bn2.weight
2019-05-07 20:25:30,938 features.features.layer1.0.bn2.bias
2019-05-07 20:25:30,938 features.features.layer1.0.conv3.weight
2019-05-07 20:25:30,938 features.features.layer1.0.bn3.weight
2019-05-07 20:25:30,938 features.features.layer1.0.bn3.bias
2019-05-07 20:25:30,938 features.features.layer1.0.downsample.0.weight
2019-05-07 20:25:30,938 features.features.layer1.0.downsample.1.weight
2019-05-07 20:25:30,938 features.features.layer1.0.downsample.1.bias
2019-05-07 20:25:30,938 features.features.layer1.1.conv1.weight
2019-05-07 20:25:30,938 features.features.layer1.1.bn1.weight
2019-05-07 20:25:30,938 features.features.layer1.1.bn1.bias
2019-05-07 20:25:30,938 features.features.layer1.1.conv2.weight
2019-05-07 20:25:30,938 features.features.layer1.1.bn2.weight
2019-05-07 20:25:30,938 features.features.layer1.1.bn2.bias
2019-05-07 20:25:30,938 features.features.layer1.1.conv3.weight
2019-05-07 20:25:30,938 features.features.layer1.1.bn3.weight
2019-05-07 20:25:30,938 features.features.layer1.1.bn3.bias
2019-05-07 20:25:30,938 features.features.layer1.2.conv1.weight
2019-05-07 20:25:30,939 features.features.layer1.2.bn1.weight
2019-05-07 20:25:30,939 features.features.layer1.2.bn1.bias
2019-05-07 20:25:30,939 features.features.layer1.2.conv2.weight
2019-05-07 20:25:30,939 features.features.layer1.2.bn2.weight
2019-05-07 20:25:30,939 features.features.layer1.2.bn2.bias
2019-05-07 20:25:30,939 features.features.layer1.2.conv3.weight
2019-05-07 20:25:30,939 features.features.layer1.2.bn3.weight
2019-05-07 20:25:30,939 features.features.layer1.2.bn3.bias
2019-05-07 20:25:30,939 features.features.layer2.0.conv1.weight
2019-05-07 20:25:30,939 features.features.layer2.0.bn1.weight
2019-05-07 20:25:30,939 features.features.layer2.0.bn1.bias
2019-05-07 20:25:30,939 features.features.layer2.0.conv2.weight
2019-05-07 20:25:30,939 features.features.layer2.0.bn2.weight
2019-05-07 20:25:30,939 features.features.layer2.0.bn2.bias
2019-05-07 20:25:30,939 features.features.layer2.0.conv3.weight
2019-05-07 20:25:30,939 features.features.layer2.0.bn3.weight
2019-05-07 20:25:30,939 features.features.layer2.0.bn3.bias
2019-05-07 20:25:30,939 features.features.layer2.0.downsample.0.weight
2019-05-07 20:25:30,939 features.features.layer2.0.downsample.1.weight
2019-05-07 20:25:30,939 features.features.layer2.0.downsample.1.bias
2019-05-07 20:25:30,939 features.features.layer2.2.conv1.weight
2019-05-07 20:25:30,939 features.features.layer2.2.bn1.weight
2019-05-07 20:25:30,940 features.features.layer2.2.bn1.bias
2019-05-07 20:25:30,940 features.features.layer2.2.conv2.weight
2019-05-07 20:25:30,940 features.features.layer2.2.bn2.weight
2019-05-07 20:25:30,940 features.features.layer2.2.bn2.bias
2019-05-07 20:25:30,940 features.features.layer2.2.conv3.weight
2019-05-07 20:25:30,940 features.features.layer2.2.bn3.weight
2019-05-07 20:25:30,940 features.features.layer2.2.bn3.bias
2019-05-07 20:25:30,940 features.features.layer2.3.conv1.weight
2019-05-07 20:25:30,940 features.features.layer2.3.bn1.weight
2019-05-07 20:25:30,940 features.features.layer2.3.bn1.bias
2019-05-07 20:25:30,940 features.features.layer2.3.conv2.weight
2019-05-07 20:25:30,940 features.features.layer2.3.bn2.weight
2019-05-07 20:25:30,940 features.features.layer2.3.bn2.bias
2019-05-07 20:25:30,940 features.features.layer2.3.conv3.weight
2019-05-07 20:25:30,940 features.features.layer2.3.bn3.weight
2019-05-07 20:25:30,940 features.features.layer2.3.bn3.bias
2019-05-07 20:25:30,940 features.features.layer2.4.conv1.weight
2019-05-07 20:25:30,940 features.features.layer2.4.bn1.weight
2019-05-07 20:25:30,940 features.features.layer2.4.bn1.bias
2019-05-07 20:25:30,940 features.features.layer2.4.conv2.weight
2019-05-07 20:25:30,940 features.features.layer2.4.bn2.weight
2019-05-07 20:25:30,940 features.features.layer2.4.bn2.bias
2019-05-07 20:25:30,940 features.features.layer2.4.conv3.weight
2019-05-07 20:25:30,940 features.features.layer2.4.bn3.weight
2019-05-07 20:25:30,941 features.features.layer2.4.bn3.bias
2019-05-07 20:25:30,941 GPU NUM: 1
2019-05-07 20:25:30,945 model prepare done
2019-05-07 20:25:40,947 Epoch: [1][10/18750] lr: 0.0010000 Batch Time: 0.642s Data Time:0.334s Loss:11.23705
2019-05-07 20:25:40,947 Progress: 10 / 937500 [0%], Speed: 0.642 s/iter, ETA 6:23:12 (D:H:M)

2019-05-07 20:25:40,947
PROGRESS: 0.00%

2019-05-07 20:25:43,821 Epoch: [1][20/18750] lr: 0.0010000 Batch Time: 0.465s Data Time:0.167s Loss:8.01549
2019-05-07 20:25:43,821 Progress: 20 / 937500 [0%], Speed: 0.465 s/iter, ETA 5:01:01 (D:H:M)

2019-05-07 20:25:43,821
PROGRESS: 0.00%

2019-05-07 20:25:46,703 Epoch: [1][30/18750] lr: 0.0010000 Batch Time: 0.406s Data Time:0.112s Loss:5.91713
2019-05-07 20:25:46,703 Progress: 30 / 937500 [0%], Speed: 0.406 s/iter, ETA 4:09:41 (D:H:M)

2019-05-07 20:25:46,703
PROGRESS: 0.00%

2019-05-07 20:25:49,628 Epoch: [1][40/18750] lr: 0.0010000 Batch Time: 0.378s Data Time:0.084s Loss:4.73139
2019-05-07 20:25:49,629 Progress: 40 / 937500 [0%], Speed: 0.378 s/iter, ETA 4:02:19 (D:H:M)

2019-05-07 20:25:49,629
PROGRESS: 0.00%

2019-05-07 20:25:52,498 Epoch: [1][50/18750] lr: 0.0010000 Batch Time: 0.359s Data Time:0.067s Loss:3.99356
2019-05-07 20:25:52,498 Progress: 50 / 937500 [0%], Speed: 0.359 s/iter, ETA 3:21:35 (D:H:M)

2019-05-07 20:25:52,498
PROGRESS: 0.01%
...

@JudasDie
Contributor


Thanks for your interest in our work.

  1. I chose the best epoch rather than the last epoch (50th). In fact, the last epoch tends to overfit.
    You can check 'onekey.py' or the README to see how to choose the best epoch.
  2. I don't think my default hyper-parameters will work well on your model. You can choose an epoch together with the hyper-parameters in 'lib/tracker/siamfc.py' (class TrackerConfig). After choosing the hyper-parameters, you should tune them on a validation dataset. You will find that all public Siamese models (papers) are sensitive to hyper-parameters. We're working on this for Siamese models.
  3. I found your training process is slow. You can release layers gradually, as detailed in my paper. That will save you 20%-50% of the training time. The default setting in the code can also give a comparable result.
  4. For any other questions, you can email me for further discussion (zhangzhipeng2017@ia.ac.cn). If you want to reproduce a Siamese tracker, you should pay attention to all the details.
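To illustrate point 1: checkpoint selection amounts to evaluating every saved epoch on a validation set and keeping the highest score. A minimal sketch (the AUC numbers below are hypothetical; in the repo this evaluation is driven by onekey.py and the test scripts):

```python
# Sketch: pick the best checkpoint by validation AUC instead of the last epoch.
# Assumes some evaluation step has already produced an {epoch: AUC} mapping.

def select_best_epoch(auc_by_epoch):
    """Return (epoch, auc) for the checkpoint with the highest AUC."""
    best_epoch = max(auc_by_epoch, key=auc_by_epoch.get)
    return best_epoch, auc_by_epoch[best_epoch]

if __name__ == "__main__":
    # Hypothetical scores in the spirit of the numbers reported above:
    # a mid-training epoch can beat the final (overfit) one.
    scores = {30: 0.5981, 40: 0.6102, 50: 0.5770}
    epoch, auc = select_best_epoch(scores)
    print(f"best: checkpoint_e{epoch} (AUC {auc:.4f})")
```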
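The tuning step in point 2 boils down to searching hyper-parameter combinations and scoring each on validation data. Below is a brute-force grid-search sketch with a hypothetical evaluate callback; the repo's actual tuner uses a genetic method (METHOD: 'GENE' in the config), and the parameter names here are illustrative:

```python
# Sketch of brute-force hyper-parameter search for a SiamFC-style tracker.
# `evaluate` stands in for "run the tracker on a validation set, return AUC".
import itertools

def tune(evaluate, grid):
    """Try every combination in `grid`; keep the best-scoring one."""
    best_score, best_params = float("-inf"), None
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

if __name__ == "__main__":
    # Toy scoring function with a known optimum, purely for demonstration.
    def fake_auc(p):
        return 0.66 - abs(p["window_influence"] - 0.35) - abs(p["scale_penalty"] - 0.97)

    grid = {"window_influence": [0.25, 0.35, 0.45],
            "scale_penalty": [0.95, 0.97, 0.99]}
    params, score = tune(fake_auc, grid)
    print(params, round(score, 4))
```

A grid this small is cheap, but each evaluation is a full tracking run, which is why smarter search (genetic, random) pays off quickly.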
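The gradual layer release in point 3 can be expressed as an epoch-to-parameter-group schedule: start with only the matching head trainable and unfreeze deeper backbone groups over time. The group prefixes and epoch thresholds below are made up for illustration, not the paper's actual schedule:

```python
# Sketch: gradual unfreezing schedule. Epoch thresholds are illustrative.
UNFREEZE_AT = {
    "connect_model": 0,       # matching head trains from the start
    "features.layer2": 10,    # deepest backbone group released first
    "features.layer1": 20,
    "features.conv1": 30,     # stem released last
}

def trainable_groups(epoch):
    """Return the parameter-name prefixes that should require gradients."""
    return sorted(g for g, start in UNFREEZE_AT.items() if epoch >= start)
```

In PyTorch this would translate to setting p.requires_grad = True for each named parameter whose name starts with one of the returned prefixes at the beginning of each epoch.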

@czla
Author

czla commented May 22, 2019

Thank you very much! @JudasDie
Actually, the results above were trained under the default settings in the code, and it took almost three days to finish 50 epochs (on a single 1080 GPU). I will look into the hyper-parameter tuning and email you afterwards.

@czla czla closed this as completed Jun 1, 2019