Do you know why the accuracy is lower than the original project? #28
Comments
The author uses a 100-layer network. Please compare with the
Very strange problem: I cannot train well using this project; my best accuracy is 96% on LFW. So I changed my training method, based on AM-loss, as follows: 1. Use a model that others have already trained well. 2. Keep the inference logits frozen (set trainable=false). 3. Only train the parameter 'w' in the ArcFace loss part. But I get bad results; the inference loss is always around 25. I have also trained "Additive-Margin-Softmax" and reached 99.3% accuracy very easily. So I doubt whether the ArcFace loss really works.
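For reference, the additive angular margin ("ArcFace") logit computation being discussed can be sketched in a few lines. This is a minimal NumPy illustration, not the project's actual TF code; `s=64` and `m=0.5` are the commonly cited defaults and the tensor names are hypothetical:

```python
import numpy as np

def arcface_logits(features, weights, labels, s=64.0, m=0.5):
    """Additive angular margin logits (illustrative NumPy sketch).

    features: (batch, dim) embeddings; weights: (dim, classes) is the
    'w' parameter mentioned above; s and m are scale and margin.
    """
    # L2-normalize embeddings and class weights so dot products are cosines.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos_theta = f @ w                                   # (batch, classes)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    # Add the margin m only to the target-class angle.
    out = np.copy(cos_theta)
    rows = np.arange(len(labels))
    out[rows, labels] = np.cos(theta[rows, labels] + m)
    return s * out                                      # feed to softmax CE

def softmax_cross_entropy(logits, labels):
    """Plain softmax cross-entropy over the scaled logits."""
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With `m=0` this reduces to plain scaled-cosine softmax, which is one way to sanity-check a training setup before enabling the margin.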
@siahewei
@ruobop Let us know when you get better results. Thanks.
@auroua The TF documentation says that NCHW is faster than NHWC when training with CUDA, and that NHWC is preferred for inference.
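The two layouts differ only in axis order, so switching between them is a single transpose. A minimal NumPy sketch (the 96x96 image size is taken from the comment below; the batch size is illustrative):

```python
import numpy as np

# Hypothetical batch of 2 RGB 96x96 images in NHWC (channels-last) layout.
nhwc = np.zeros((2, 96, 96, 3), dtype=np.float32)

# Convert to NCHW (channels-first), the layout cuDNN prefers for training.
nchw = np.transpose(nhwc, (0, 3, 1, 2))
assert nchw.shape == (2, 3, 96, 96)

# Convert back to NHWC for inference-side code expecting channels-last.
back = np.transpose(nchw, (0, 2, 3, 1))
assert back.shape == nhwc.shape
```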
I ran into the same problem as @siahewei. I trained a model on 96x96 images, but it seems to hit a bottleneck at 0.96 on the LFW test.
@auroua E.g., the accuracy on LFW in the original project can go up to 99.8%. Do you know what's missing here?