Individual Project
Explore the docs »
View Demo
·
Report Bug
·
Request Feature
This repository is the code implementation of my final-project report at KCL. Deep metric learning is applied to few-shot segmentation: guided by the information in the support masks, the model learns an embedding in which feature vectors of the same category lie as close together as possible in the high-dimensional space, while feature vectors of different categories are pushed far apart.
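This objective can be sketched as a contrastive-style embedding loss. The repo depends on pytorch-metric-learning, so the actual loss most likely comes from that library; the helper below is only an illustrative sketch (hypothetical function name), assuming per-pixel feature vectors and their class labels:

```python
import torch
import torch.nn.functional as F

def pairwise_metric_loss(features, labels, margin=1.0):
    """Illustrative contrastive-style loss: pull same-class feature vectors
    together, push different-class vectors at least `margin` apart."""
    # features: (N, D) embedding vectors; labels: (N,) integer class ids
    dists = torch.cdist(features, features)              # (N, N) Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)    # (N, N) same-class mask
    pos = dists[same].pow(2).mean()                      # shrink intra-class distances
    neg = F.relu(margin - dists[~same]).pow(2).mean()    # enforce inter-class margin
    return pos + neg
```

With well-separated classes both terms vanish, so the loss goes to zero.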
- `dataloaders/`: `coco.py`, `common.py`, and `customized.py` define the dataset classes and dataloaders; `transforms.py` contains the data transform functions.
- `models/`: `fewshot.py` defines the training model, including model initialization and the forward function; `few_proto.py` defines the prototype model used at test time; `vgg.py` defines the VGG feature extractor.
- `util/`: `metric.py` provides the metric functions used to evaluate models; `utils.py` contains helper functions used during training and testing.
- `experiments/`: the training, testing, and visualization scripts.
- `config.py` is the configuration file that sets the training and testing parameters.
- `train_metric.py` is the entry point for training the model.
- `test_proto.py` and `test_metric_knn.py` are the entry points for testing the model.
- `visualization.py` is the script that visualizes the test results.
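A common formulation behind prototype-based testing (as in `few_proto.py`) is masked average pooling: the support features are averaged over the mask to produce one prototype per class, and each query pixel is labelled by its nearest prototype. The sketch below uses hypothetical helper names and may differ from the repo's exact implementation:

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(feat, mask):
    """Average a support feature map over a binary foreground mask to get
    a single prototype vector (hypothetical helper, not the repo API)."""
    # feat: (C, H, W) support features; mask: (H, W) binary foreground mask
    mask = mask.unsqueeze(0)                                 # (1, H, W)
    return (feat * mask).sum(dim=(1, 2)) / (mask.sum() + 1e-5)  # (C,)

def segment_by_distance(query_feat, protos):
    """Label each query pixel with its nearest prototype (cosine similarity)."""
    # query_feat: (C, H, W); protos: (K, C), one prototype per class
    q = F.normalize(query_feat, dim=0).flatten(1)            # (C, H*W) unit vectors
    p = F.normalize(protos, dim=1)                           # (K, C) unit prototypes
    sim = p @ q                                              # (K, H*W) similarities
    return sim.argmax(dim=0).view(query_feat.shape[1:])      # (H, W) predicted labels
```

Because prototypes live in the learned metric space, better intra-class compactness directly improves this nearest-prototype assignment.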
To get a local copy up and running, follow these simple steps.
- Python 3.6+
- PyTorch 1.9.0
- pytorch-metric-learning
- segmentation_models_pytorch
- torchvision 0.2.1+
- pycocotools
- sacred 0.7.5
- tqdm 4.32.2
- Download the source code
- Install requirements
pip install -r requirements.txt
- Prepare the Pascal-5i dataset
  - Download VOC2012 from the official website and put it under `./data/Pascal/`
  - Download SegmentationClassAug, SegmentationObjectAug, and ScribbleAugAuto from here and put them under `./data/Pascal/VOCdevkit/VOC2012/`
  - Download Segmentation from here and use it to replace `VOCdevkit/VOC2012/ImageSets/Segmentation`
- Prepare the MSCOCO dataset
  - Download COCO 2014 from the official website and put it under `./data/COCO/`
- Prepare the pretrained model
  - Download the pretrained model here and put it under `./`
- Train the model

  `sh experiments/train.sh`

- Test the model

  `sh experiments/test.sh`

- Visualize the results

  `sh experiments/vis.sh`
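The test scripts report segmentation quality through the metric functions in `util/metric.py`. A minimal mean-IoU computation, the standard segmentation metric (hypothetical helper name, not necessarily the repo's exact API), looks like:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes present in either mask."""
    # pred, target: integer label maps of the same shape
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both prediction and target
            ious.append(inter / union)
    return float(np.mean(ious))
```

For example, a prediction that matches the target exactly scores 1.0, while partial overlap lowers each class's intersection-over-union ratio.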
See the open issues for a list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See `LICENSE` for more information.
Project Link: https://github.com/xumengen/FSGM