acc is always about 0.5 using mobilenetface #189
Comments
I have no GPU server to test it right now. The author told me he ran the experiments by fine-tuning (train with softmax first, then fine-tune with the ArcFace loss).
To pursue ultimate performance, MobileFaceNet, MobileFaceNet (112 × 96), and MobileFaceNet (96 × 96) are further trained by ArcFace loss on the cleaned training set of MS-Celeb-1M database [5] with 3.8M images from 85K subjects.
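The quoted two-stage recipe hinges on ArcFace's additive angular margin. A minimal sketch of that margin (my own illustration, not code from this repo; `s=64` is a common default, whereas the script below uses `--margin-s 128`) also suggests why a reported train accuracy can look near zero early on: the margin deliberately pushes the target logit down, so an argmax computed on margin-penalized logits underestimates real accuracy.

```python
import numpy as np

def arcface_logits(cosines, label, s=64.0, m=0.5):
    """Apply the additive angular margin to the target class only.

    cosines: cosine similarities between the embedding and each class weight.
    label:   index of the ground-truth class.
    s, m:    scale and margin, corresponding to --margin-s / --margin-m.
    """
    logits = s * cosines.copy()
    theta = np.arccos(np.clip(cosines[label], -1.0, 1.0))
    logits[label] = s * np.cos(theta + m)  # cos(theta + m) < cos(theta)
    return logits

# A sample the model classifies correctly (target cosine is highest) can
# still lose the argmax once the margin is applied.
cos = np.array([0.75, 0.60, 0.40])
plain = 64.0 * cos                     # plain softmax logits: argmax = 0
margin = arcface_logits(cos, label=0)  # margin-penalized: argmax shifts
```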
I used ms1m to train the mobilefacenet network with softmax; the first testing verification is still 0.5.
@tianxingyzxq I faced the same problem. However, I got different results after several hours.
Same here, always 0.5.
I will provide a pretrained model soon. |
@lmmcc I face the same problem when training with ResNet-101: train acc is low but the LFW accuracy is good.
@lmmcc The same problem here! lfw accuracy = 0.991, but train accuracy = 0.000000. Did you ever solve this issue?
@nttstar When I trained mobilefacenet with two 1080 Ti GPUs and set batch_size to 256 per GPU, after 20000 batches the lfw accuracy = 0.991, but train accuracy = 0.00000. Is this phenomenon normal? If it is a problem, do you know what causes it? Thanks!
@wsx276166228 What final accuracies did you get on lfw, cfp_fp, and agedb_30?
@nttstar Hi, did you get a chance to upload the pretrained model for MobileFaceNet? Similar problem here. Thanks.
@Wisgon lfw=0.985 cfp_fp=0.854 agedb_30=0.921 |
+1 |
I have changed the learning rate from 0.1 (the default) to 0.01, but the acc is still 0. I'm still running it to see the final result.
@ShiyangZhang What do you mean? What learning rate did you set? Thanks!
@wsx276166228 I had forgotten the learning-rate decay from the original paper. With it, I got much better accuracy on lfw, cfp_fp, and agedb_30. It is still not good enough; maybe my batch size (364) is too small. I'll report the accuracy later.
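The decay mentioned above is a simple step schedule, configurable in train_softmax.py via `--lr-steps`. A sketch of that schedule (the step iterations here are illustrative values, not the repo's defaults):

```python
def step_lr(iteration, base_lr=0.1, decay_steps=(100000, 160000, 220000),
            factor=0.1):
    """Step-decay learning rate: multiply by `factor` at each decay step.

    `decay_steps` plays the role of --lr-steps; the values used here are
    only an example schedule.
    """
    lr = base_lr
    for step in decay_steps:
        if iteration >= step:
            lr *= factor
    return lr

# lr stays at 0.1 until 100k iterations, then drops by 10x at each step.
```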
@wsx276166228 The same problem here. Have you solved it?
Same here, always near 0.5. Does anyone know how to solve it?
Same problem. Anyone got any idea? |
@tianxingyzxq @nttstar I face the same problem: accuracy on the three validation sets is always 0.5. I checked the model's params file and found that some layers' parameters are near zero, so the model cannot be trained any further. How did you solve this? Thanks.
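The check described above (scanning a checkpoint for collapsed layers) can be sketched generically with NumPy; with an MXNet checkpoint you would first load the saved `.params` file into a name-to-array dict. This is a minimal illustration with a toy dict, not code from this repo:

```python
import numpy as np

def near_zero_layers(params, tol=1e-6):
    """Return names of parameter arrays whose values are all below tol
    in absolute value, i.e. layers that have likely collapsed to zero.

    `params` is a dict mapping parameter name -> ndarray.
    """
    return [name for name, arr in params.items()
            if np.max(np.abs(arr)) < tol]

# Toy checkpoint: one healthy layer, one collapsed layer.
params = {
    "conv0_weight": np.random.randn(8, 3, 3, 3),
    "fc7_weight": np.zeros((10, 128)),
}
dead = near_zero_layers(params)  # flags only the all-zero layer
```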
testing verification..
(12000, 128)
infer time 30.116359
[lfw][6000]XNorm: 38.367005
[lfw][6000]Accuracy-Flip: 0.50000+-0.00000
testing verification..
(14000, 128)
infer time 35.065952
[cfp_fp][6000]XNorm: 38.365932
[cfp_fp][6000]Accuracy-Flip: 0.50000+-0.00000
testing verification..
(12000, 128)
infer time 30.366434
[agedb_30][6000]XNorm: 38.366582
[agedb_30][6000]Accuracy-Flip: 0.50000+-0.00000
[6000]Accuracy-Highest: 0.51533
The training script is:
CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train_softmax.py --network y1 --loss-type 4 --margin-s 128 --margin-m 0.5 --per-batch-size 128 --emb-size 128 --data-dir ../datasets/faces_ms1m_112x112 --wd 0.00004 --fc7-wd-mult 10.0 --prefix ../model-mobilefacenet-128