EnFormer

This project is built on the timm framework (https://github.com/huggingface/pytorch-image-models) and is developed specifically for polyp segmentation.

Paper: https://arxiv.org/abs/2408.07262

Backbone weights

Pretrained weights for the transformer backbones can be downloaded via:

wget https://github.com/whai362/PVT/releases/download/v2/pvt_v2_b3.pth
wget https://vcl.ucsd.edu/coat/pretrained/coat_lite_mini_6b4a8ae5.pth
wget https://vcl.ucsd.edu/coat/pretrained/coat_lite_small_8d362f48.pth
wget https://vcl.ucsd.edu/coat/pretrained/coat_lite_medium_384x384_f9129688.pth
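
As an optional, unofficial sanity check you can inspect a downloaded checkpoint before training. The checkpoint path and the assumption that the file stores a plain state_dict (possibly wrapped under a "model" key) are illustrative only:

# Unofficial sanity check: inspect a downloaded backbone checkpoint.
import torch

ckpt = torch.load("pvt_v2_b3.pth", map_location="cpu")
# Some releases wrap the weights, e.g. under a "model" key; unwrap if needed.
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
print(f"{len(state_dict)} tensors, e.g. {list(state_dict)[:3]}")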

Data

Your data format should be as follows:

-------------------
[your dataset root]
-- train /
---- images /
------ image1.jpg
---- masks /
------ mask1.jpg
-- val /
---- images /
------ image1.jpg
---- masks /
------ mask1.jpg
-- test /
---- images /
------ image1.jpg
---- masks /
------ mask1.jpg
-------------------
  • [your dataset root] is the root directory of your data; set it via --data-dir in args
  • Data is divided into three folders: train/val/test. The folder names do not have to be exactly these, but whatever names you use must be passed via --train-split/--val-split/--test-split in args
  • images stores all input images and masks stores all segmentation labels; these two folder names are fixed and cannot be changed. Each image in the images folder (e.g., images/something.jpg) must have a segmentation label with the same name in the masks folder (e.g., masks/something.jpg); a minimal loading sketch follows this list
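
For illustration only, the images/masks pairing by file name can be loaded as in the sketch below; the function load_split and the .jpg extension are assumptions, not the repository's actual dataset code:

# Minimal sketch of loading one split under [your dataset root].
from pathlib import Path
from PIL import Image

def load_split(root, split="train"):
    """Yield (image, mask) pairs paired by identical file names."""
    img_dir = Path(root) / split / "images"
    mask_dir = Path(root) / split / "masks"
    for img_path in sorted(img_dir.glob("*.jpg")):
        mask_path = mask_dir / img_path.name   # same file name in masks/
        yield Image.open(img_path).convert("RGB"), Image.open(mask_path).convert("L")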

The training and testing datasets can be downloaded from:

This dataset already includes a testing set, so split TrainDataset into train/val and copy Kvasir from TestDataset into TrainDataset, renaming it to test (a preparation sketch follows the directory layout below).

Your directory structure should be:

-------------------
TrainDataset /
-- train
-- val
-- test
TestDataset /
-- CVC-300
-- CVC-ClinicDB
-- CVC-ColonDB
-- ETIS-LaribPolypDB
-- Kvasir
-------------------
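
A hypothetical preparation script matching this layout might look as follows; the flat images/masks layout of the downloaded TrainDataset, the 10% validation ratio, and the random seed are all assumptions, not the authors' exact procedure:

# Hypothetical preparation script: split TrainDataset into train/val and
# copy TestDataset/Kvasir into TrainDataset as the test split.
import random, shutil
from pathlib import Path

train_root = Path("TrainDataset")
test_root = Path("TestDataset")
random.seed(0)

images = sorted((train_root / "images").glob("*"))   # original flat layout
random.shuffle(images)
n_val = int(0.1 * len(images))                        # 10% validation (assumption)

for split, subset in [("val", images[:n_val]), ("train", images[n_val:])]:
    for img in subset:
        for sub in ("images", "masks"):
            dst = train_root / split / sub
            dst.mkdir(parents=True, exist_ok=True)
            shutil.copy(train_root / sub / img.name, dst / img.name)

# Kvasir from TestDataset becomes the test split of TrainDataset.
shutil.copytree(test_root / "Kvasir", train_root / "test", dirs_exist_ok=True)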

And the settings in args should be:

--data-dir [path to TrainDataset]/
--train-split train
--val-split val
--test-split test

Usage

pip install -r requirements.txt
bash train.sh

Metrics calculation

Please refer to https://github.com/DengPingFan/PraNet
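
The official numbers are produced with the PraNet evaluation toolbox linked above. Purely as an unofficial sanity check, mean Dice and mean IoU for a binary prediction/ground-truth pair can be computed as below; dice_iou is a hypothetical helper, not part of this repository:

# Unofficial Dice/IoU check for binary segmentation maps.
import numpy as np

def dice_iou(pred, gt, thr=0.5, eps=1e-8):
    """pred, gt: 2-D arrays in [0, 1]; returns (Dice, IoU) after thresholding."""
    p = np.asarray(pred, dtype=np.float64) >= thr
    g = np.asarray(gt, dtype=np.float64) >= thr
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    dice = (2 * inter + eps) / (p.sum() + g.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou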

Pre-computed maps

[Google drive]

Pre-trained weights

Pre-trained weights can be downloaded via: [Google drive]
