Large-Scale Product Retrieval with Weakly Supervised Representation Learning

Description

The second-place solution (team "Involution King") for the 2nd eBay eProduct Visual Search Challenge (FGVC9-CVPR 2022).

How to run

Organize the dataset as follows under ./data/eBay/

├── Images
│   ├── index
│   ├── query_part1
│   ├── train
│   └── val
└── metadata
    ├── index.csv
    ├── query_part1.csv
    ├── train.csv
    └── val.csv
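
A quick way to sanity-check this layout before training is a small script like the one below. It is only a sketch based on the tree above; it checks that the expected directories and metadata files exist and does not inspect anything inside the CSVs.

from pathlib import Path

# Sketch: verify the expected ./data/eBay/ layout (not part of the repository).
ROOT = Path("data/eBay")
IMAGE_DIRS = ["index", "query_part1", "train", "val"]
METADATA_FILES = ["index.csv", "query_part1.csv", "train.csv", "val.csv"]

missing = [f"Images/{d}" for d in IMAGE_DIRS if not (ROOT / "Images" / d).is_dir()]
missing += [f"metadata/{f}" for f in METADATA_FILES if not (ROOT / "metadata" / f).is_file()]

if missing:
    raise FileNotFoundError(f"Missing under {ROOT}: {missing}")
print("Dataset layout looks good.")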

Install dependencies

# clone project
git clone https://github.com/01BB01/eBayChallenge.git

# create conda environment
conda create -n ebay python=3.8
conda activate ebay

# install requirements
pip install -r requirements.txt

# install hooks
pre-commit install

# set the eval.ai CLI token
evalai set_token eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTY3Nzg0MDYxMCwianRpIjoiYjM5MjcyNmViZjQ4NDNlODgyZDE5M2I2MzJmMTE3NDgiLCJ1c2VyX2lkIjoxODkxNX0.kemV9j0kiX6is1h-Y1P2NT93_Sxl0CuYN3N_F7A1W2w

Train the model with the default configuration

# train on CPU
python train.py trainer.gpus=0

# train on single GPU
python train.py trainer.gpus=1

# train on multiple GPUs
python train.py trainer.gpus=4

Train the model with a chosen experiment configuration from configs/experiment/

python train.py experiment=experiment_name.yaml

You can override any parameter from the command line like this

python train.py trainer.max_epochs=20 datamodule.batch_size=64
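
To preview what an experiment plus command-line overrides resolve to without launching a run, Hydra's compose API can print the merged config. The sketch below assumes configs/ is the config root and train.yaml is the primary config, which mirrors the usual Lightning-Hydra template layout but may differ in this repository.

from hydra import compose, initialize
from omegaconf import OmegaConf

# Sketch: print the composed config without starting a training run.
# Assumes configs/ is the Hydra config root and train.yaml is the primary config.
with initialize(config_path="configs"):
    cfg = compose(
        config_name="train",
        overrides=["trainer.max_epochs=20", "datamodule.batch_size=64"],
    )
    print(OmegaConf.to_yaml(cfg))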

You can monitor running experiments at https://wandb.ai/01bb01/fgvc9_ebay_challenge

You can run inference like this

python test.py datamodule.batch_size=1024 datamodule.num_workers=4 ckpt_path=<path to ckpt>
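
test.py extracts embeddings that are matched against the index set to produce the retrieval results. The snippet below only illustrates that retrieval step with plain cosine similarity in PyTorch; the query_emb.pt and index_emb.pt file names are hypothetical, not outputs of the repository.

import torch
import torch.nn.functional as F

# Sketch: nearest-neighbour retrieval from L2-normalized embeddings.
query = F.normalize(torch.load("query_emb.pt"), dim=1)   # (num_queries, dim), hypothetical file
index = F.normalize(torch.load("index_emb.pt"), dim=1)   # (num_index, dim), hypothetical file

similarity = query @ index.T                # cosine similarity matrix
topk = similarity.topk(k=10, dim=1).indices # top-10 index images per query
print(topk.shape)                           # (num_queries, 10)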

You can submit results via the eval.ai CLI like this

evalai challenge 1541 phase 3084 submit --file <submission_file_path>
