Note on fork

Tried to tweak the code to work on the DeepFashion In-Shop dataset, whose results were reported in the original paper.

Note: Did not finish tweaking the code to reproduce the numbers from the original paper.

AttentionBasedEmbeddingForMetricLearning

Pytorch Implementation of paper Attention-based Ensemble for Deep Metric Learning

Major difference from the paper: attention maps are not followed by a sigmoid activation function; min-max normalization is used instead.
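A rough sketch of that min-max alternative is below. The shape convention (batch, attention heads, height, width) and function name are assumptions for illustration, not the repo's actual code:

```python
import torch

def minmax_norm(att_maps: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Min-max normalize each attention map to [0, 1] over its spatial dims.

    att_maps: (B, M, H, W) -- batch, attention heads, spatial dims.
    Sketch of the normalization described above, not the repo's exact code.
    """
    b, m, h, w = att_maps.shape
    flat = att_maps.view(b, m, -1)                      # flatten spatial dims
    mins = flat.min(dim=-1, keepdim=True).values        # per-map minimum
    maxs = flat.max(dim=-1, keepdim=True).values        # per-map maximum
    normed = (flat - mins) / (maxs - mins + eps)        # scale into [0, 1]
    return normed.view(b, m, h, w)
```

Unlike a sigmoid, this keeps the relative ordering of activations within each map but forces every map to use the full [0, 1] range.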

The weighted sampling module code is copied from suruoxi/DistanceWeightedSampling
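The idea behind that module (from "Sampling Matters in Deep Embedding Learning") is to sample negatives inversely to the density of pairwise distances on the unit hypersphere. A hedged sketch of the rule, not the copied module's actual code (function name and interface are assumptions):

```python
import torch

def distance_weighted_sample(dist, is_negative, dim, cutoff=0.5):
    """For one anchor, sample a negative index with probability inversely
    proportional to q(d) ~ d^(dim-2) * (1 - d^2/4)^((dim-3)/2), the density
    of pairwise distances on a unit hypersphere of dimension `dim`.

    dist:        (N,) distances from the anchor to all candidates
    is_negative: (N,) bool mask of valid negatives
    """
    d = dist.clamp(min=cutoff)  # clip small distances to bound the weight
    # log of 1/q(d), computed in log space for numerical stability
    log_w = (2.0 - dim) * torch.log(d) \
        - ((dim - 3.0) / 2.0) * torch.log(torch.clamp(1.0 - d.pow(2) / 4.0, min=1e-8))
    log_w = log_w - log_w.max()                # stabilize before exponentiating
    w = torch.exp(log_w) * is_negative.float() # zero out non-negatives
    return torch.multinomial(w, 1).item()      # index of the sampled negative
```

Sampling this way favors informative negatives at moderate distances instead of the mostly-easy negatives uniform sampling returns.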

Performance on Stanford Cars 196: 71.4% Recall@1, 86.9% Recall@4 (8 attention heads, embedding size 64 each).
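Recall@K here is the standard retrieval metric: a query counts as a hit if any of its K nearest neighbors shares its label. A minimal sketch of the computation (not the repo's evaluation code):

```python
import torch

def recall_at_k(embeddings: torch.Tensor, labels: torch.Tensor, k: int) -> float:
    """Fraction of queries whose k nearest neighbors (excluding the query
    itself) contain at least one item with the same label."""
    dists = torch.cdist(embeddings, embeddings)   # (N, N) pairwise L2 distances
    dists.fill_diagonal_(float('inf'))            # exclude self-match
    knn = dists.topk(k, largest=False).indices    # (N, k) nearest-neighbor ids
    match = labels[knn] == labels.unsqueeze(1)    # (N, k) label agreement
    return match.any(dim=1).float().mean().item()
```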

TODO:

Transform the attention maps with att_maps = sign(att_maps) * sqrt(abs(att_maps)) before normalizing (motivated by tau-yihouxiang/WSDAN).

I will update this README if I get better validation performance.
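The signed square-root transform from the TODO above is a one-liner; this is an illustration of the planned change, not committed code:

```python
import torch

def signed_sqrt(att_maps: torch.Tensor) -> torch.Tensor:
    """sign(x) * sqrt(|x|): compresses large activations while keeping their
    sign, so the subsequent min-max normalization is less dominated by a few
    extreme values. Sketch of the TODO item, motivated by WS-DAN."""
    return torch.sign(att_maps) * torch.sqrt(torch.abs(att_maps))
```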
