
Code for the ECCV22 paper Demystifying Unsupervised Semantic Correspondence Estimation




Demystifying Unsupervised Semantic Correspondence Estimation



git clone
cd demistfy_correspondence
conda env create -f environment.yml
conda activate demistfy


You can download the datasets considered in the paper from the respective links: Spair, CUB, Stanforddogs, Awa-Pose, AFLW.

For evaluation, you need to copy the random_pair.txt file from the pairs directory to the respective dataset folder. The final folder structure should look like this:

└── .....
└── random_pair.txt
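The copy step can also be scripted. A minimal sketch, assuming a pairs/&lt;dataset&gt;/random_pair.txt layout and a dataset root of your choosing (the function name and paths are assumptions, not part of the repo):

```python
import shutil
from pathlib import Path

def install_pair_list(pairs_dir: Path, dataset_dir: Path, name: str) -> Path:
    """Copy pairs/<name>/random_pair.txt into the dataset root."""
    src = pairs_dir / name / "random_pair.txt"
    dst = dataset_dir / "random_pair.txt"
    dataset_dir.mkdir(parents=True, exist_ok=True)  # create the root if missing
    shutil.copyfile(src, dst)
    return dst
```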


To evaluate a method, run the corresponding script from the respective dataset folder in the evaluation directory.

For instance, to evaluate the None projection (i.e., the original embeddings):

python --layer 3 --logpath None

This will create a log file under the logs directory; at the end of the file you can find the metrics described in the paper.

To evaluate a finetuned unsupervised method, use the --projpath argument. To test another architecture type, such as a transformer, use the --model_type argument.

Valid model_type values are: resnet50, dino, vit.

For instance, to evaluate asym using an unsupervised transformer as a backbone for the CUB dataset, go to the evaluation/cub directory and run:

python --model_type dino --layer 3 --projpath path_to_cub_asym_projection --logpath dino_asym_cub


To train a projection layer using the unsupervised losses (eq, dve, lead, cl, asym), go to the projection directory and run the script. Specify the data path with the --datapath argument and the dataset with --dataset, which can be: spair, stanforddogs, cub, awa, aflw. To use an unsupervised loss, pass it as a single flag without any parameter: for example, --eq trains the projection with the EQ loss (the other options are --cl, --asym, --dve, --lead).

If you do not pass any unsupervised loss flag, the script trains the projection with keypoint supervision.

You can also set the batch size, layers, weight decay, and model_type, or provide initial weights for the backbone with the model_weights argument.

For instance, to train a projection layer using the ASYM loss on top of a supervised CNN for the Stanforddogs dataset:

python  --layer 3 --batchsize 8 --asym --dataset stanforddogs --datapath path_to_stanforddogs_dataset --logpath path_for_log
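At its core, the projection being trained is a learned linear map applied independently at each spatial location of the frozen backbone's feature map (a 1x1 convolution). A minimal numpy sketch of that operation — the function name, shapes, and signature are illustrative assumptions, not the repo's API:

```python
import numpy as np

def project_features(feats: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """Apply a 1x1 linear projection to a (C, H, W) feature map.

    feats:  (C, H, W) frozen backbone features
    weight: (D, C) learned projection matrix
    returns (D, H, W) projected features
    """
    c, h, w = feats.shape
    # Flatten spatial dims, project every location with the same matrix,
    # then restore the spatial layout.
    out = weight @ feats.reshape(c, h * w)  # (D, H*W)
    return out.reshape(-1, h, w)
```

The unsupervised losses (eq, dve, lead, cl, asym) differ only in how the weight matrix is optimized; the projection itself stays this simple per-pixel linear map.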

Trained Models

You can download the pretrained models from here

New PCK Metric and Detailed Analysis


If you want to use our proposed version of PCK and the detailed analysis, the function is implemented here.
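For reference, standard PCK counts a predicted keypoint as correct when its distance to the ground truth is within a fraction alpha of a normalization length (e.g. the larger bounding-box side). A minimal sketch of that baseline definition only — the proposed variant in the linked function differs from this:

```python
import numpy as np

def pck(pred: np.ndarray, gt: np.ndarray, size: float, alpha: float = 0.1) -> float:
    """Percentage of Correct Keypoints (standard definition).

    pred, gt: (N, 2) predicted / ground-truth keypoint coordinates
    size:     normalization length, e.g. max(bbox_h, bbox_w)
    alpha:    tolerance as a fraction of `size`
    """
    dists = np.linalg.norm(pred - gt, axis=1)   # per-keypoint Euclidean error
    return float(np.mean(dists <= alpha * size))
```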


If you find the code useful in your research, please consider citing:

@inproceedings{aygun2022demystifying,
    author    = {Aygun, Mehmet and Mac Aodha, Oisin},
    title     = {Demystifying Unsupervised Semantic Correspondence Estimation},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year      = {2022}
}

We used code snippets from hpf, MiDaS, and dino-vit-features.