
From Keypoints to Object Landmarks via Self-Training Correspondence: A novel approach to Unsupervised Landmark Discovery

Dimitrios Mallis, Enrique Sanchez, Matt Bell, Georgios Tzimiropoulos

This repository contains the training and evaluation code for the paper "From Keypoints to Object Landmarks via Self-Training Correspondence: A novel approach to Unsupervised Landmark Discovery". This software learns a deep landmark detector directly from raw images of a specific object category, without requiring any manual annotations.

This work was accepted for publication in TPAMI!!


Data Preparation

CelebA

CelebA can be found here. Download the .zip file into an empty directory and unzip it. We provide precomputed bounding boxes and 68-point annotations (for evaluation only) in data/CelebA.
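
For reference, a minimal sketch of the expected setup, using a hypothetical root directory and archive name (both will differ on your machine):

mkdir -p /data/CelebA && cd /data/CelebA        # hypothetical database root
unzip celeba.zip                                # archive name may differ

The CelebA_datapath entry in paths/main.yaml (see Installation) should then point to /data/CelebA/celeb/Img/img_align_celeba_hq/.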

LS3D

We use the 300W-LP database for training and LS3D-balanced for evaluation. Download the files into two separate empty directories and unzip them. We provide precomputed bounding boxes for 300W-LP in data/LS3D.
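
As above, a minimal sketch with hypothetical directories and archive names:

mkdir -p /data/300W_LP && cd /data/300W_LP && unzip 300W-LP.zip
mkdir -p /data/LS3D_balanced && cd /data/LS3D_balanced && unzip LS3D-balanced.zip

The 300WLP_datapath and LS3Dbalanced_datapath entries in paths/main.yaml should then point inside these two directories.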

Installation

You will need a reasonably capable CUDA-enabled GPU. This project was developed on Linux.

Create a new conda environment and activate it:

conda create -n KeypToLandEnv python=3.10
conda activate KeypToLandEnv

Install PyTorch and the FAISS library:

conda install pytorch torchvision pytorch-cuda=11.7 -c pytorch -c nvidia
conda install faiss-gpu pytorch pytorch-cuda -c pytorch -c nvidia

Install other external dependencies using pip and create the results directory.

pip install -r requirements.txt 
mkdir Results

Our method is bootstrapped by SuperPoint. Download the weights for a pretrained SuperPoint model from here.
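
To sanity-check the download, you can try loading the file with PyTorch; this is just a quick check, assuming the checkpoint is a plain state dict and is located in the current directory (use whatever path you set in path_to_superpoint_checkpoint below):

python -c "import torch; sd = torch.load('superpoint_v1.pth', map_location='cpu'); print(len(sd), 'tensors loaded')"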

Before running the code, you have to update paths/main.yaml so that it includes all the required paths. Edit the following entries in paths/main.yaml:

CelebA_datapath: <pathToCelebA_database>/celeb/Img/img_align_celeba_hq/
300WLP_datapath: <pathTo300WLP_database>/300W_LP/
LS3Dbalanced_datapath: <pathToLS3Dbalanced_database>/new_dataset/
path_to_superpoint_checkpoint: <pathToSuperPointCheckPoint>/superpoint_v1.pth

Testing

Stage 1

To evaluate the first stage of the algorithm execute:

python eval.py --experiment_name <experiment_name> --dataset_name <dataset_name>  --K <K> --stage 1

The last checkpoint stored in Results/<experiment_name>/CheckPoints/ will be loaded automatically. If you want to evaluate a particular checkpoint or a pretrained model, use the path_to_checkpoint argument.
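
For example, to evaluate a downloaded pretrained model (the experiment name here, celeba_K10, is an arbitrary choice and the checkpoint path is a placeholder):

python eval.py --experiment_name celeba_K10 --dataset_name CelebA --K 10 --stage 1 --path_to_checkpoint <path_to_downloaded_checkpoint>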

Stage 2

Similarly, to evaluate the second stage of the algorithm, execute:

python eval.py --experiment_name <experiment_name> --dataset_name <dataset_name>  --K <K> --stage 2

Cumulative forward and backward error curves will be stored in Results/<experiment_name>/Logs/.

Training

To execute the first step of our method please run:

python Train_Step1.py --dataset_name <dataset_name> --experiment_name <experiment_name> --K <K>

Similarly, to execute the second step please run:

python Train_Step2.py --dataset_name <dataset_name> --experiment_name <experiment_name> --K <K>

where <dataset_name> is one of "CelebA" or "LS3D" and <experiment_name> is a custom name you choose for each experiment. Please use the same experiment name for both the first and second steps. The software will automatically initialize the second step with the ground truth discovered in step one.
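
Putting the steps together, a complete example run on CelebA with K=10 (celeba_K10 is an arbitrary experiment name):

python Train_Step1.py --dataset_name CelebA --experiment_name celeba_K10 --K 10
python Train_Step2.py --dataset_name CelebA --experiment_name celeba_K10 --K 10
python eval.py --dataset_name CelebA --experiment_name celeba_K10 --K 10 --stage 2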

Pretrained Models

We also provide pretrained models. They can be used to execute the testing script and produce visual results.

| Dataset | K  | Stage | Model | Dataset | K  | Stage | Model |
|---------|----|-------|-------|---------|----|-------|-------|
| CelebA  | 10 | 1     | link  | LS3D    | 10 | 1     | link  |
| CelebA  | 10 | 2     | link  | LS3D    | 10 | 2     | link  |
| CelebA  | 30 | 1     | link  | LS3D    | 30 | 1     | link  |
| CelebA  | 30 | 2     | link  | LS3D    | 30 | 2     | link  |

Citation

If you found this work useful, consider citing:

@ARTICLE{10005822,
  author={Mallis, Dimitrios and Sanchez, Enrique and Bell, Matt and Tzimiropoulos, Georgios},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={From Keypoints to Object Landmarks via Self-Training Correspondence: A Novel Approach to Unsupervised Landmark Discovery}, 
  year={2023},
  volume={45},
  number={7},
  pages={8390-8404},
  doi={10.1109/TPAMI.2023.3234212}}
