# DIM (CVPR'2017)

```bibtex
@inproceedings{xu2017deep,
  title={Deep image matting},
  author={Xu, Ning and Price, Brian and Cohen, Scott and Huang, Thomas},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2970--2979},
  year={2017}
}
```

## Abstract

Image matting is a fundamental computer vision problem and has many applications. Previous algorithms perform poorly when an image has similar foreground and background colors or complicated textures. The main reason is that prior methods 1) only use low-level features and 2) lack high-level context. In this paper, we propose a novel deep learning based algorithm that can tackle both of these problems. Our deep model has two parts. The first part is a deep convolutional encoder-decoder network that takes an image and the corresponding trimap as inputs and predicts the alpha matte of the image. The second part is a small convolutional network that refines the alpha matte predictions of the first network to produce more accurate alpha values and sharper edges. In addition, we also create a large-scale image matting dataset including 49300 training images and 1000 testing images. We evaluate our algorithm on the image matting benchmark, our testing set, and a wide variety of real images. Experimental results clearly demonstrate the superiority of our algorithm over previous methods.
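The two-part design described above can be sketched as follows. This is a minimal, illustrative stand-in, not the actual DIM architecture (which uses a VGG-16-based encoder-decoder): the layer sizes and module names (`EncoderDecoderSketch`, `RefinementSketch`) are assumptions chosen only to show the data flow of image + trimap in, raw alpha out, then residual refinement.

```python
import torch
import torch.nn as nn

class EncoderDecoderSketch(nn.Module):
    """Toy stand-in for the first stage: predicts a raw alpha matte
    from the RGB image concatenated with its 1-channel trimap."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, trimap):
        # image: (N, 3, H, W), trimap: (N, 1, H, W) -> alpha: (N, 1, H, W)
        return self.body(torch.cat([image, trimap], dim=1))

class RefinementSketch(nn.Module):
    """Toy stand-in for the second stage: a small conv net that refines
    the raw alpha given the image, predicting a residual correction."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, image, raw_alpha):
        # residual refinement, clamped back into the valid alpha range
        residual = self.body(torch.cat([image, raw_alpha], dim=1))
        return (raw_alpha + residual).clamp(0.0, 1.0)

image = torch.rand(1, 3, 32, 32)
trimap = torch.rand(1, 1, 32, 32)
raw = EncoderDecoderSketch()(image, trimap)
refined = RefinementSketch()(image, raw)
```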

## Results

| Method         | SAD  | MSE   | GRAD | CONN | Download     |
| :------------- | :--: | :---: | :--: | :--: | :----------: |
| stage1 (paper) | 54.6 | 0.017 | 36.7 | 55.3 | -            |
| stage3 (paper) | 50.4 | 0.014 | 31.0 | 50.8 | -            |
| stage1 (ours)  | 53.8 | 0.017 | 32.7 | 54.5 | model \| log |
| stage2 (ours)  | 52.3 | 0.016 | 29.4 | 52.4 | model \| log |
| stage3 (ours)  | 50.6 | 0.015 | 29.0 | 50.7 | model \| log |

**Note**

- **stage1**: train the encoder-decoder part without the refinement part.
- **stage2**: fix the encoder-decoder part and train the refinement part.
- **stage3**: fine-tune the whole network.
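The three stages above amount to toggling which part of the model receives gradients. The sketch below shows one common way to express this in PyTorch via `requires_grad`; the `ModuleDict` keys and the `set_stage` helper are hypothetical names for illustration, not the repo's actual config interface.

```python
import torch.nn as nn

# Hypothetical two-part model mirroring the encoder-decoder + refinement split.
model = nn.ModuleDict({
    "encoder_decoder": nn.Conv2d(4, 1, 3, padding=1),
    "refinement": nn.Conv2d(4, 1, 3, padding=1),
})

def set_stage(model, stage):
    """stage 1: train encoder-decoder only; stage 2: freeze it and train
    the refinement part; stage 3: fine-tune the whole network."""
    train_ed = stage in (1, 3)
    train_ref = stage in (2, 3)
    for p in model["encoder_decoder"].parameters():
        p.requires_grad = train_ed
    for p in model["refinement"].parameters():
        p.requires_grad = train_ref

set_stage(model, 2)  # stage 2: only the refinement part is trainable
```

In practice one would also pass only the trainable parameters (or the whole model, relying on `requires_grad`) to the optimizer when switching stages.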

The model's performance is not stable during training, so the reported numbers are not taken from the last checkpoint. Instead, they are the best results among all validation runs performed during training.

The best performance achieved with different random seeds varies over a wide range, so you may need to run several experiments for each setting to reproduce the numbers above.