SpaceView/Yolact_EfficientNet

This is a copy of https://github.com/dbolya/yolact

EfficientNet as backbone

The EfficientNet backbone implementation is from https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch

An introduction to these changes (in Chinese): https://spacevision.blog.csdn.net/article/details/120765185

Many changes were made to support the EfficientNet backbone.

To use the EfficientNet backbone, just change the following few lines:

# ----------------------- Default Config ----------------------- #
# Default config: comment out the base line and uncomment the
# EfficientNet line to switch backbones.
cfg = yolact_base_config.copy()
# cfg = yolact_EfficientNet_config.copy()

You can test your training results with the uploaded 720degree.mp4.
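
For example, using eval.py as described under Evaluation below (the weights file name here is a placeholder; substitute your own <config>_<epoch>_<iter>.pth checkpoint):

python eval.py --trained_model=weights/yolact_EfficientNet_<epoch>_<iter>.pth --score_threshold=0.15 --top_k=15 --video=720degree.mp4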

By the way, training is much slower, even slower than with Resnet50; I suspect some hyperparameters need to be tuned.

If you can achieve a much higher speed, please let me know.

Use a pretrained EfficientNet model

You can load pretrained weights from efficientnet-b0.pth.

If you want to use the pretrained weights, set 'path' to the correct location so that YOLACT can find it:

efficientnet_backbone = backbone_base.copy({
...
    # Set this to a nonexistent path if you don't want to use pretrained weights.
    'path': 'efficientnet-b0-yolact-abridged.pth',  
...
})

So far I haven't had time to test more configurations; I only provide a b0 example. The model loads successfully, but I did not get a successful training run out of it.

Judging from the training results, this preloaded weights file doesn't seem to bring any obvious benefit.

For the original code, the issue [Cannot download backbone weights] mentions that the original EfficientNet weights files, such as efficientnet-b0-355c32eb.pth, cannot be used directly.

My workaround: run YOLACT for a short while (I used 100 iterations) and save the resulting checkpoint, yolact_EfficientNet_0_100.pth. Then take the keys from this saved file and substitute them for the keys in the original efficientnet-b0-355c32eb.pth, producing a new efficientnet-b0-yolact weights file. That file can then be loaded directly to initialize the backbone weights.

For the details of the key replacement, see match_and_rename_efficientnet_dict_keys.py.

During the replacement I also generated two text files listing the keys; open and compare them to see exactly where the differences are: efficientnet-b0_keys.txt and yolact_EfficientNet_bacbone_keys.txt
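
Below is a minimal sketch of that key-renaming idea, assuming the backbone tensors enumerate in the same order in both checkpoints (verify against the two key files above). The authoritative logic is in match_and_rename_efficientnet_dict_keys.py.

import torch

# Checkpoint from a short YOLACT run: supplies the target key names.
yolact_ckpt = torch.load('yolact_EfficientNet_0_100.pth', map_location='cpu')
# Official EfficientNet-b0 checkpoint: supplies the pretrained tensors.
official_ckpt = torch.load('efficientnet-b0-355c32eb.pth', map_location='cpu')

# Keep only the backbone entries of the YOLACT checkpoint, in order.
backbone_keys = [k for k in yolact_ckpt if k.startswith('backbone.')]

# Pair keys positionally and carry each pretrained tensor over to its
# YOLACT name; the shape check guards against misaligned pairs (e.g.,
# classifier weights present in the official file but absent in YOLACT).
renamed = {}
for new_key, (old_key, tensor) in zip(backbone_keys, official_ckpt.items()):
    assert yolact_ckpt[new_key].shape == tensor.shape, (new_key, old_key)
    # If the loader expects unprefixed keys, strip the leading 'backbone.'
    # here with new_key.replace('backbone.', '', 1).
    renamed[new_key] = tensor

torch.save(renamed, 'efficientnet-b0-yolact.pth')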

VGG16 as backbone

Although the author said he had tried VGG16, I don't see how it is configured, and I didn't have any success with the author's configuration. So I had to configure VGG16 according to my own understanding.

You can use the VGG16 configuration by setting:

cfg = vgg16_config.copy()

Frankly, I don't see any reason to use this backbone; it is slow in both training and evaluation.

====================================================================

You Only Look At CoefficienTs

    ██╗   ██╗ ██████╗ ██╗      █████╗  ██████╗████████╗
    ╚██╗ ██╔╝██╔═══██╗██║     ██╔══██╗██╔════╝╚══██╔══╝
     ╚████╔╝ ██║   ██║██║     ███████║██║        ██║   
      ╚██╔╝  ██║   ██║██║     ██╔══██║██║        ██║   
       ██║   ╚██████╔╝███████╗██║  ██║╚██████╗   ██║   
       ╚═╝    ╚═════╝ ╚══════╝╚═╝  ╚═╝ ╚═════╝   ╚═╝ 

A simple, fully convolutional model for real-time instance segmentation. This is the code for our papers:

YOLACT++ (v1.2) released! (Changelog)

YOLACT++'s resnet50 model runs at 33.5 fps on a Titan Xp and achieves 34.1 mAP on COCO's test-dev (check out our journal paper here).

In order to use YOLACT++, make sure you compile the DCNv2 code. (See Installation)

For a real-time demo, check out our ICCV video.


Some examples from our YOLACT base model (33.5 fps on a Titan Xp and 29.8 mAP on COCO's test-dev):

[Example images]

Installation

  • Set up a Python3 environment.
  • Install Pytorch 1.0.1 (or higher) and TorchVision.
  • Install some other packages:
    # Cython needs to be installed before pycocotools
    pip install cython
    pip install opencv-python pillow pycocotools matplotlib 
  • Clone this repository and enter it:
    git clone https://github.com/dbolya/yolact.git
    cd yolact
  • If you'd like to train YOLACT, download the COCO dataset and the 2014/2017 annotations. Note that this script will take a while and dump 21 GB of files into ./data/coco.
    sh data/scripts/COCO.sh
  • If you'd like to evaluate YOLACT on test-dev, download test-dev with this script.
    sh data/scripts/COCO_test.sh
  • If you want to use YOLACT++, compile deformable convolutional layers (from DCNv2). Make sure you have the latest CUDA toolkit installed from NVidia's Website.
    cd external/DCNv2
    python setup.py build develop

Evaluation

Here are our YOLACT models (released on April 5th, 2019) along with their FPS on a Titan Xp and mAP on test-dev:

Image Size Backbone FPS mAP Weights
550 Resnet50-FPN 42.5 28.2 yolact_resnet50_54_800000.pth Mirror
550 Darknet53-FPN 40.0 28.7 yolact_darknet53_54_800000.pth Mirror
550 Resnet101-FPN 33.5 29.8 yolact_base_54_800000.pth Mirror
700 Resnet101-FPN 23.6 31.2 yolact_im700_54_800000.pth Mirror

YOLACT++ models (released on December 16th, 2019):

Image Size | Backbone      | FPS  | mAP  | Weights
550        | Resnet50-FPN  | 33.5 | 34.1 | yolact_plus_resnet50_54_800000.pth (Mirror)
550        | Resnet101-FPN | 27.3 | 34.6 | yolact_plus_base_54_800000.pth (Mirror)

To evaluate the model, put the corresponding weights file in the ./weights directory and run one of the following commands. The name of each config is everything before the numbers in the file name (e.g., yolact_base for yolact_base_54_800000.pth).

Quantitative Results on COCO

# Quantitatively evaluate a trained model on the entire validation set. Make sure you have COCO downloaded as above.
# This should get 29.92 validation mask mAP (last time I checked).
python eval.py --trained_model=weights/yolact_base_54_800000.pth

# Output a COCOEval json to submit to the website or to use the run_coco_eval.py script.
# This command will create './results/bbox_detections.json' and './results/mask_detections.json' for detection and instance segmentation respectively.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --output_coco_json

# You can run COCOEval on the files created in the previous command. The performance should match my implementation in eval.py.
python run_coco_eval.py

# To output a COCO json file for test-dev, make sure you have test-dev downloaded as above, then run:
python eval.py --trained_model=weights/yolact_base_54_800000.pth --output_coco_json --dataset=coco2017_testdev_dataset

Qualitative Results on COCO

# Display qualitative results on COCO. From here on I'll use a confidence threshold of 0.15.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --display

Benchmarking on COCO

# Run just the raw model on the first 1k images of the validation set
python eval.py --trained_model=weights/yolact_base_54_800000.pth --benchmark --max_images=1000

Images

# Display qualitative results on the specified image.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --image=my_image.png

# Process an image and save it to another file.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --image=input_image.png:output_image.png

# Process a whole folder of images.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --images=path/to/input/folder:path/to/output/folder

Video

# Display a video in real-time. "--video_multiframe" will process that many frames at once for improved performance.
# If you want, use "--display_fps" to draw the FPS directly on the frame.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --video_multiframe=4 --video=my_video.mp4

# Display a webcam feed in real-time. If you have multiple webcams pass the index of the webcam you want instead of 0.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --video_multiframe=4 --video=0

# Process a video and save it to another file. This uses the same pipeline as the ones above now, so it's fast!
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --video_multiframe=4 --video=input_video.mp4:output_video.mp4

As you can tell, eval.py can do a ton of stuff. Run it with --help to see everything it can do.

python eval.py --help

Training

By default, we train on COCO. Make sure to download the entire dataset using the commands above.

  • To train, grab an imagenet-pretrained model and put it in ./weights.
    • For Resnet101, download resnet101_reducedfc.pth from here.
    • For Resnet50, download resnet50-19c8e357.pth from here.
    • For Darknet53, download darknet53.pth from here.
  • Run one of the training commands below.
    • Note that you can press ctrl+c while training and it will save an *_interrupt.pth file at the current iteration.
    • All weights are saved in the ./weights directory by default with the file name <config>_<epoch>_<iter>.pth.
# Trains using the base config with a batch size of 8 (the default).
python train.py --config=yolact_base_config

# Trains yolact_base_config with a batch_size of 5. For the 550px models, 1 batch takes up around 1.5 gigs of VRAM, so specify accordingly.
python train.py --config=yolact_base_config --batch_size=5

# Resume training yolact_base with a specific weight file and start from the iteration specified in the weight file's name.
python train.py --config=yolact_base_config --resume=weights/yolact_base_10_32100.pth --start_iter=-1

# Use the help option to see a description of all available command line arguments
python train.py --help

Multi-GPU Support

YOLACT now supports multiple GPUs seamlessly during training:

  • Before running any of the scripts, run: export CUDA_VISIBLE_DEVICES=[gpus]
    • Where you should replace [gpus] with a comma separated list of the index of each GPU you want to use (e.g., 0,1,2,3).
    • You should still do this if only using 1 GPU.
    • You can check the indices of your GPUs with nvidia-smi.
  • Then, simply set the batch size to 8*num_gpus with the training commands above (see the example after this list). The training script will automatically scale the hyperparameters to the right values.
    • If you have memory to spare you can increase the batch size further, but keep it a multiple of the number of GPUs you're using.
    • If you want to control how many images are allocated to each GPU, you can use --batch_alloc=[alloc] where [alloc] is a comma-separated list containing the number of images on each GPU. This must sum to batch_size.
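
For example, to train on GPUs 0 and 1 with the correspondingly scaled batch size of 2*8 = 16:

export CUDA_VISIBLE_DEVICES=0,1
python train.py --config=yolact_base_config --batch_size=16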

Logging

YOLACT now logs training and validation information by default. You can disable this with --no_log. A guide on how to visualize these logs is coming soon, but for now you can look at LogVisualizer in utils/logger.py for help.

Pascal SBD

We also include a config for training on Pascal SBD annotations (for rapid experimentation or comparing with other methods). To train on Pascal SBD, proceed with the following steps:

  1. Download the dataset from here. It's the first link in the top "Overview" section (and the file is called benchmark.tgz).
  2. Extract the dataset somewhere. In the dataset there should be a folder called dataset/img. Create the directory ./data/sbd (where . is YOLACT's root) and copy dataset/img to ./data/sbd/img.
  3. Download the COCO-style annotations from here.
  4. Extract the annotations into ./data/sbd/.
  5. Now you can train using --config=yolact_resnet50_pascal_config (see the command below). Check that config to see how to extend it to other models.
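
For example, after completing the steps above:

python train.py --config=yolact_resnet50_pascal_config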

I will automate this all with a script soon, don't worry. Also, if you want the script I used to convert the annotations, I put it in ./scripts/convert_sbd.py, but you'll have to check how it works to be able to use it because I don't actually remember at this point.

If you want to verify our results, you can download our yolact_resnet50_pascal_config weights from here. This model should get 72.3 mask AP_50 and 56.2 mask AP_70. Note that the "all" AP isn't the same as the "vol" AP reported in other papers for Pascal (they use an average over thresholds from 0.1 to 0.9 in increments of 0.1 instead of what COCO uses).

Custom Datasets

You can also train on your own dataset by following these steps:

  • Create a COCO-style Object Detection JSON annotation file for your dataset. The specification for this can be found here. Note that we don't use some fields, so the following may be omitted:
    • info
    • licenses
    • Under image: license, flickr_url, coco_url, date_captured
    • categories (we use our own format for categories, see below)
  • Create a definition for your dataset under dataset_base in data/config.py (see the comments in dataset_base for an explanation of each field):
my_custom_dataset = dataset_base.copy({
    'name': 'My Dataset',

    'train_images': 'path_to_training_images',
    'train_info':   'path_to_training_annotation',

    'valid_images': 'path_to_validation_images',
    'valid_info':   'path_to_validation_annotation',

    'has_gt': True,
    'class_names': ('my_class_id_1', 'my_class_id_2', 'my_class_id_3', ...)
})
  • A couple things to note:
    • Class IDs in the annotation file should start at 1 and increase sequentially in the order of class_names. If this isn't the case for your annotation file (like in COCO), see the field label_map in dataset_base.
    • If you do not want to create a validation split, use the same image path and annotations file for validation. By default (see python train.py --help), train.py will output validation mAP for the first 5000 images in the dataset every 2 epochs.
  • Finally, in yolact_base_config in the same file, change the value for 'dataset' to 'my_custom_dataset' or whatever you named the config object above. Then you can use any of the training commands in the previous section.
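
A minimal sketch of that change, assuming the stock layout of data/config.py (only the two marked fields differ from the stock definition; as in the stock configs, num_classes is the number of classes plus one for background):

# In data/config.py:
yolact_base_config = coco_base_config.copy({
    'name': 'yolact_base',
    'dataset': my_custom_dataset,                           # was coco2017_dataset
    'num_classes': len(my_custom_dataset.class_names) + 1,  # +1 for background
    ...
})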

Creating a Custom Dataset from Scratch

See this nice post by @Amit12690 for tips on how to annotate a custom dataset and prepare it for use with YOLACT.

Citation

If you use YOLACT or this code base in your work, please cite

@inproceedings{yolact-iccv2019,
  author    = {Daniel Bolya and Chong Zhou and Fanyi Xiao and Yong Jae Lee},
  title     = {YOLACT: {Real-time} Instance Segmentation},
  booktitle = {ICCV},
  year      = {2019},
}

For YOLACT++, please cite

@misc{yolact-plus-arxiv2019,
  title         = {YOLACT++: Better Real-time Instance Segmentation},
  author        = {Daniel Bolya and Chong Zhou and Fanyi Xiao and Yong Jae Lee},
  year          = {2019},
  eprint        = {1912.06218},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}

Contact

For questions about our paper or code, please contact Daniel Bolya.
