
Is the model structure different from the original paper? #8

Open
ys198918 opened this issue Nov 5, 2018 · 2 comments

Comments


ys198918 commented Nov 5, 2018

In your code, you use the 256-d global feature for classification, but the paper says they use the global feature before reduction, which is 2048-d. Is that a mistake, or did you do it for a reason? Also, in your code you use Adam with lr 0.0002, which also differs from the paper. Can you tell me why? In my experiments, your setting seems to work better than what the paper describes.
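For anyone following along, here is a minimal sketch (not the repo's actual code; module and parameter names are hypothetical) of the two head variants being compared: classifying on the reduced 256-d feature versus on the 2048-d global feature before reduction. The 2e-4 Adam learning rate mentioned above is shown at the end.

```python
import torch
import torch.nn as nn

class ReIDHead(nn.Module):
    """Hypothetical re-ID head illustrating the two choices discussed:
    classify on the reduced 256-d feature, or on the raw 2048-d one."""

    def __init__(self, num_classes, in_dim=2048, reduced_dim=256,
                 classify_on_reduced=True):
        super().__init__()
        # Dimension reduction: 2048-d backbone output -> 256-d embedding.
        self.reduction = nn.Sequential(
            nn.Linear(in_dim, reduced_dim),
            nn.BatchNorm1d(reduced_dim),
            nn.ReLU(inplace=True),
        )
        self.classify_on_reduced = classify_on_reduced
        cls_dim = reduced_dim if classify_on_reduced else in_dim
        self.classifier = nn.Linear(cls_dim, num_classes)

    def forward(self, global_feat):                # global_feat: (B, 2048)
        reduced = self.reduction(global_feat)      # (B, 256) embedding
        cls_in = reduced if self.classify_on_reduced else global_feat
        logits = self.classifier(cls_in)           # ID-classification logits
        return reduced, logits

# 751 classes as in Market-1501; batch of 8 random features for illustration.
head = ReIDHead(num_classes=751, classify_on_reduced=True)
reduced, logits = head(torch.randn(8, 2048))
print(reduced.shape, logits.shape)  # torch.Size([8, 256]) torch.Size([8, 751])

# The optimizer setting the code reportedly uses, vs. the paper's schedule.
optimizer = torch.optim.Adam(head.parameters(), lr=2e-4)
```

Switching `classify_on_reduced=False` puts the classifier on the 2048-d feature, matching the paper's description, so the two setups can be compared directly.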

@whut2962575697

Hello, I found the same problem. Have you tried using the 2048-dimensional feature before dimension reduction for classification? Why use the settings in the code? The cross-entropy loss is difficult to converge during training.

@muzishen

Hello, I found the same problem. Have you tried using BNNeck on the global feature? The 2048-d feature is fed to the triplet loss, and after BN it is fed to the softmax.
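A minimal sketch of the BNNeck idea described above (names and details are my own, not this repo's code): the 2048-d global feature goes straight to the triplet loss, while the same feature after a BatchNorm layer goes to the softmax/cross-entropy classifier, typically with the BN bias frozen and no bias in the classifier.

```python
import torch
import torch.nn as nn

class BNNeck(nn.Module):
    """Hypothetical BNNeck head: pre-BN feature -> triplet loss,
    post-BN feature -> softmax (cross-entropy) classifier."""

    def __init__(self, num_classes, feat_dim=2048):
        super().__init__()
        self.bn = nn.BatchNorm1d(feat_dim)
        self.bn.bias.requires_grad_(False)  # common trick: freeze the BN bias
        # Bias-free classifier on the normalized feature.
        self.classifier = nn.Linear(feat_dim, num_classes, bias=False)

    def forward(self, global_feat):         # global_feat: (B, 2048)
        triplet_feat = global_feat          # before BN -> triplet loss
        bn_feat = self.bn(global_feat)      # after BN  -> softmax loss
        logits = self.classifier(bn_feat)
        return triplet_feat, bn_feat, logits

neck = BNNeck(num_classes=751)
triplet_feat, bn_feat, logits = neck(torch.randn(8, 2048))
print(triplet_feat.shape, bn_feat.shape, logits.shape)
```

At inference time the post-BN feature (`bn_feat`) is usually the one used for retrieval.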
