You report accuracy of 91% on the LFW data. The facenet paper reports 99+. Do you have any insight as to what they did differently vs your trained model? Just curious, thanks
Yes, I think this is the key question and I don't have the final answer.
The typical answer is to use a bigger model and feed it more data. However, that does not seem to be the solution in this case. The model is not really overfitting, i.e. validation accuracy is not significantly lower than training accuracy. Also, performance seems to improve when a smaller (and shallower) model is used. This suggests that the network is not converging very well. A possible remedy would be to try, for example, residual networks (http://arxiv.org/abs/1512.03385), but I don't have any results for this yet.
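For reference, the core idea behind a residual block is that the layer learns a correction F(x) on top of an identity skip connection, so gradients can flow directly through the identity path in deep networks. A minimal NumPy sketch (shapes and weights here are purely illustrative, not from the actual training code):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # Output is F(x) + x: the skip connection means the block
    # only has to learn a residual correction to the identity,
    # which makes very deep networks easier to optimize.
    h = relu(x @ w1)
    return relu(h @ w2 + x)

# Illustrative example with hypothetical dimensions.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
```

Note that if w1 and w2 were all zeros, the block would reduce to relu(x), i.e. close to the identity, which is why stacking many such blocks tends not to hurt convergence the way stacking plain layers can.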