Landmark2019-1st-and-3rd-Place-Solution

[Figure: pipeline]

The 1st Place Solution of the Google Landmark 2019 Retrieval Challenge and the 3rd Place Solution of the Recognition Challenge.

Our solution has been published! You can read it here: Large-scale Landmark Retrieval/Recognition under a Noisy and Diverse Dataset

Environments

You can reproduce our environment using the Dockerfile provided here: https://github.com/lyakaap/Landmark2019-1st-and-3rd-Place-Solution/blob/master/docker/Dockerfile
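
As a rough sketch (not taken from the repository's documentation), building and entering the environment could look like the following, assuming a standard Docker installation with the NVIDIA container runtime; the image name and mount path are arbitrary:

# build the image from the provided Dockerfile (image tag "landmark2019" is arbitrary)
docker build -t landmark2019 -f docker/Dockerfile .

# start a container with GPU access and the repository mounted at /workspace
docker run --gpus all -it --rm -v "$(pwd)":/workspace landmark2019 /bin/bash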

Data

Dataset statistics:

Dataset (train split)    # Samples    # Labels
GLD-v1                   1,225,029      14,951
GLD-v2                   4,132,914     203,094
GLD-v2 (clean)           1,580,470      81,313

Prepare cleaned subset

(You can skip this procedure for generating the cleaned subset; pre-computed files are available as a Kaggle dataset.)

Run scripts/prepare_cleaned_subset.sh to clean the GLD-v2 dataset. The cleaning code requires the DELF library (install instructions).
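
For reference, once the DELF library is installed, the cleaning step is a single script invocation; running it from the repository root is an assumption:

# generate the cleaned GLD-v2 subset (requires the DELF library)
bash scripts/prepare_cleaned_subset.sh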

Reproduce

Prepare the FishNet pretrained checkpoints first (a command sketch follows the steps below).

  1. Download the checkpoint of FishNet-150 from https://www.dropbox.com/s/ajy9p6f97y45f1r/fishnet150_ckpt_welltrained.tar?dl=0
  2. Place the checkpoint in src/FishNet/checkpoints/
  3. Execute src/FishNet/fix_checkpoint.py
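
A minimal command sketch of the three steps above; the wget flags and the ?dl=1 direct-download suffix are assumptions, so adjust them if you download the checkpoint manually:

# 1. download the FishNet-150 checkpoint (Dropbox direct-download link assumed)
wget -O fishnet150_ckpt_welltrained.tar "https://www.dropbox.com/s/ajy9p6f97y45f1r/fishnet150_ckpt_welltrained.tar?dl=1"

# 2. place the checkpoint under src/FishNet/checkpoints/
mkdir -p src/FishNet/checkpoints
mv fishnet150_ckpt_welltrained.tar src/FishNet/checkpoints/

# 3. fix the checkpoint so it can be loaded by the training code
python src/FishNet/fix_checkpoint.py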

The following commands reproduce our results.

cd ./experiments/
bash download_train.sh # download data
python ../src/prepare_dataset.py  # prepare the data for training
bash reproduce.sh  # train models and run predictions to reproduce our results

python submit_retrieval.py  # for retrieval challenge
python submit_recognition.py  # for recognition challenge

Experiments

Model training and inference are done in the experiments/ directory.

# train models with various parameter settings on 4 GPUs (each training run uses 2 GPUs).
python vX.py tuning -d 0,1,2,3 --n-gpu 2

# predict
python vX.py predict -m vX/epX.pth -d 0
# predict with multiple gpus
python vX.py multigpu-predict -m vX/epX.pth --scale L2 --ms -b 32 -d 0,1

Results

Place   Team             Private   Public
1st     smlyaka (ours)     37.23    35.69
2nd     imagesearch        34.75    32.25
3rd     Layer 6 AI         32.18    29.85

Reference

Large-scale Landmark Retrieval/Recognition under a Noisy and Diverse Dataset