Bartolo1024/SSIW

Forked from irfanICMLL/SSIW, the code of 'The devil is in the labels: Semantic segmentation from sentences'.
About

Fine-tuning on the CMP facade dataset, based on the original SSIW repository.

Run training

/bin/bash download.sh
conda env create -f environment.yaml
conda activate ssiw
python -m src.tools.train --max-epochs 100

Test

Either:

  • train your own checkpoint (saved as out.pth by default), or
  • use my checkpoint (checkpoint.pth).

Then run:

python -m src.tools.test_cpm data/base/base/cmp_b0346.jpg --checkpoint-path checkpoint.pth

Annotated

Summary

I have implemented CMP dataset loading and model training; the base repository contains only the testing stage. My training script loads the weights shared in the authors' repository and fine-tunes them on the CMP dataset. I used only the base version of the dataset (the extended version is reserved for testing).
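
For reference, the sketch below shows the general shape of such a loader. The directory layout (data/base/base) and the 1-based label encoding of the annotation PNGs are my assumptions about the CMP base set, not a copy of the repository's dataset code.

# Minimal sketch of a CMP facade dataset loader (assumed layout: cmp_*.jpg
# photos with matching cmp_*.png label maps whose pixel values are 1..12).
import glob
import os

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class CMPFacadeDataset(Dataset):
    def __init__(self, root="data/base/base", transform=None):
        self.image_paths = sorted(glob.glob(os.path.join(root, "cmp_*.jpg")))
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img_path = self.image_paths[idx]
        mask_path = os.path.splitext(img_path)[0] + ".png"
        image = np.array(Image.open(img_path).convert("RGB"))
        # Assumed: CMP labels are 1-based; shift to 0-based class indices.
        mask = np.array(Image.open(mask_path), dtype=np.int64) - 1
        if self.transform is not None:
            image, mask = self.transform(image, mask)
        image = torch.from_numpy(image).permute(2, 0, 1).float() / 255.0
        return image, torch.from_numpy(mask)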

Labels in the CMP dataset appear to be precise, so I implemented only the L_hd loss (equations 1, 2, and 3 in the paper). Due to the deadline I froze the temperature parameter in the loss: a learnable parameter there could slow down development, but it is worth replacing the fixed value with a learnable one and checking the performance.
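
The general form of such a loss (per-pixel embeddings matched to fixed sentence/class embeddings by cosine similarity, scaled by a frozen temperature, trained with cross-entropy) is sketched below. This is my reading of the idea, not a verbatim transcription of equations 1-3, and the temperature value is a placeholder.

import torch
import torch.nn.functional as F

def embedding_ce_loss(pixel_emb, class_emb, target, temperature=0.07, ignore_index=-1):
    # pixel_emb: (B, D, H, W) per-pixel embeddings from the segmentation head
    # class_emb: (C, D) fixed label embeddings built from sentence descriptions
    # target:    (B, H, W) ground-truth class indices
    pixel_emb = F.normalize(pixel_emb, dim=1)
    class_emb = F.normalize(class_emb, dim=1)
    # Cosine similarity of every pixel with every class -> logits of shape (B, C, H, W),
    # divided by a frozen temperature.
    logits = torch.einsum("bdhw,cd->bchw", pixel_emb, class_emb) / temperature
    return F.cross_entropy(logits, target, ignore_index=ignore_index)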

Large models typically overfit on small datasets, but in this case training only the head (a strong form of regularization) decreased performance, so I fine-tune the whole network. It is definitely worth trying more augmentations, different hyperparameters, and longer training; this can be done in the next steps.
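
The head-only experiment amounts to freezing the backbone; a hypothetical sketch is below (the module names model.backbone and model.head are placeholders, not the repository's actual attributes).

def set_trainable(model, head_only: bool):
    # head_only=True reproduces the "train only the head" regime;
    # head_only=False is full fine-tuning.
    for p in model.backbone.parameters():
        p.requires_grad = not head_only
    for p in model.head.parameters():
        p.requires_grad = True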

Batch size and image crop size are chosen to fit my VRAM, not tuned for model performance.
