
Reproduce "Supervised Contrastive Learning for Facial Kinship Recognition" #26

Closed
vitalwarley opened this issue Jul 26, 2023 · 4 comments

@vitalwarley

#25 (comment)

@vitalwarley vitalwarley self-assigned this Jul 26, 2023
@vitalwarley

I only made a few changes to some paths. Currently training:

╚═╡(rfiw2021) [20:35] λ python Track1/train.py --batch_size 20 --sample Track1/sample0  --save_path Track1/model_track1.pth --epochs 80 --beta 0.08 --log_path Track1/log_name.txt --gpu 0

The authors used bs = 25, but the 8 GB of my RTX 3070 couldn't handle it, so I used bs = 20. Ten epochs complete in ~13 minutes. The authors report reaching their best results at epoch 42.
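Since the paper's method is supervised contrastive learning, the batch size directly changes how many positives and negatives each anchor sees, which is one plausible reason bs = 20 vs. bs = 25 shifts results. As a rough illustration only, and assuming `--beta` plays the role of the temperature (the repo's actual loss formulation may differ), here is a minimal NumPy sketch of a Khosla-style supervised contrastive loss:

```python
import numpy as np

def sup_con_loss(embeddings, labels, temperature=0.08):
    """Illustrative supervised contrastive loss (not the repo's code).

    embeddings: (N, D) array, assumed L2-normalized.
    labels: (N,) family ids; same label -> positive pair.
    temperature: assumed to correspond to the --beta flag in train.py.
    """
    n = len(labels)
    sim = embeddings @ embeddings.T / temperature      # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # log-softmax over each anchor's row (denominator = all other samples)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    pos_logprob = np.where(pos_mask, log_prob, 0.0)
    has_pos = pos_mask.any(axis=1)
    # mean log-probability over positives, per anchor that has positives
    per_anchor = pos_logprob.sum(axis=1) / np.maximum(pos_mask.sum(axis=1), 1)
    return -per_anchor[has_pos].mean()
```

With a larger batch, the softmax denominator contains more negatives, which tends to tighten the loss; that sensitivity is consistent with the reproduction gap being batch-size related.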

Apparently, they used the same model as Shadrikov (2021) (#24):

We use the ArcFace pre-trained ResNet101 as the feature extraction network, and the code is mainly from here. The models were pre-trained using mxnet, and we used MMdnn to convert the pre-trained models into pytorch version and Tensorflow 2.0, corresponding to backbone/kit_resnet101.pkl and backbone/ArcFace_r100_v1.h5, respectively.
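The extraction step can be pictured as below; `extract_features` and the callable `backbone` are hypothetical stand-ins (the actual converted model lives in backbone/kit_resnet101.pkl). The only substantive part is the L2 normalization that ArcFace-style embeddings typically get before cosine comparison:

```python
import numpy as np

def extract_features(backbone, faces):
    """Illustrative sketch, not the repo's code: run a backbone over face
    crops and L2-normalize the embeddings so dot products become cosine
    similarities.  `backbone` is any callable mapping (N, ...) -> (N, D).
    """
    feats = np.asarray(backbone(faces), dtype=float)
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    return feats / np.clip(norms, 1e-12, None)  # guard against zero vectors
```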

@vitalwarley

[attachment: plot]

@vitalwarley

╚═╡(rfiw2021) [21:38] λ python Track1/find.py --batch_size 40 --sample Track1/sample0  --save_path Track1/model_track1.pth --log_path Track1/log_name.txt --gpu 0
 ...
auc :  0.8670577181250507
threshold : 0.11546290665864944
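A guess at what find.py computes here (the function below is illustrative, not the repo's code): ROC AUC over the cosine-similarity scores of the validation pairs, plus the threshold that maximizes verification accuracy on that split:

```python
import numpy as np

def auc_and_threshold(scores, labels):
    """Illustrative sketch: AUC via the rank-sum (Mann-Whitney) statistic,
    threshold chosen to maximize accuracy on the validation pairs.
    scores: (N,) similarity scores; labels: (N,) 1 = kin, 0 = non-kin.
    Assumes no tied scores, which is enough for a sketch."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (1 - labels).sum()
    auc = (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    # sweep candidate thresholds, keep the most accurate one
    cands = np.unique(scores)
    accs = [((scores >= t) == labels).mean() for t in cands]
    return auc, cands[int(np.argmax(accs))]
```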

Finally,

╚═╡(rfiw2021) [21:57] λ python Track1/test.py  --sample Track1/sample0 --save_path Track1/model_track1.pth --threshold 0.11546290665864944 --batch_size 40 --log_path Track1/log_name.txt --gpu 0 
994it [06:49,  2.43it/s]
bb : 0.8109830508474576
ss : 0.8119018618409605
sibs : 0.786174575278266
fd : 0.7391447911507347
md : 0.7913296667745067
fs : 0.8236641221374046
ms : 0.7614752455401884
gfgd : 0.7742663656884876
gmgd : 0.7509293680297398
gfgs : 0.7224489795918367
gmgs : 0.6033519553072626
avg : 0.7896233298945726

[attachment: image]

We can see that I couldn't reproduce the exact results... perhaps because of the difference in batch_size?
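For context, the per-relationship breakdown above can be reproduced from raw pair scores roughly as below (illustrative code, not the repo's test.py). One detail worth noting: the reported avg (0.7896) cannot be the unweighted mean of the eleven per-type numbers (that mean is ≈0.761), so avg is presumably the overall accuracy, i.e. weighted by the number of pairs per type:

```python
import numpy as np

def per_type_accuracy(sims, labels, types, threshold):
    """Illustrative sketch: verification accuracy per kinship type
    (bb, ss, sibs, fd, md, fs, ms, gfgd, gmgd, gfgs, gmgs) and the
    overall, pair-count-weighted average."""
    sims = np.asarray(sims, float)
    labels = np.asarray(labels, int)
    types = np.asarray(types)
    correct = (sims >= threshold) == labels.astype(bool)
    per_type = {t: correct[types == t].mean() for t in np.unique(types)}
    return per_type, correct.mean()  # overall = weighted by pair counts
```

This weighting would also explain why the rare grandparent types (gmgs especially) can score low without dragging avg down much.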

@vitalwarley

vitalwarley commented Aug 1, 2023

Locally, I got 0.865929 in training with bs=20 using val_choose (the validation set for model selection). On RIG, I got 0.864550, slightly less, with bs=20 at epoch 74 (vs. epoch 16 here). I cancelled training at epoch 55, but on RIG2 I let it run to the end; since the model is not saved when AUC doesn't improve, it makes no difference.
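The checkpoint-on-best-AUC scheme described above can be sketched as follows (the helper callables are hypothetical, not the repo's API); it makes explicit why stopping after the best epoch changes nothing:

```python
def train_with_auc_checkpointing(train_one_epoch, evaluate_auc, save, epochs):
    """Illustrative sketch: after each epoch, evaluate AUC on the model
    selection split (val_choose) and checkpoint only on improvement, so
    cancelling training any time after the best epoch is harmless."""
    best_auc = float("-inf")
    history = []
    for epoch in range(epochs):
        train_one_epoch(epoch)
        auc = evaluate_auc(epoch)
        history.append(auc)
        if auc > best_auc:
            best_auc = auc
            save(epoch)  # checkpoint only when AUC improves
    return best_auc, history
```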

[attachment: plot_rig2]

On the validation set for threshold selection, I obtained

auc :  0.8633823723879341
threshold : 0.11321651935577393

Finally, on the test set:

bb : 0.8065084745762712
ss : 0.8134678962937184
sibs : 0.789103690685413
fd : 0.7586263826977051
md : 0.7835651892591394
fs : 0.8112977099236641
ms : 0.7716977350170375
gfgd : 0.7787810383747178
gmgd : 0.7360594795539034
gfgs : 0.6571428571428571
gmgs : 0.6368715083798883
avg : 0.7898497848677755

The average was essentially the same as my local run.
