Learning Attraction Field Representation for Robust Line Segment Detection (accepted by CVPR 2019)

This is the official implementation of our CVPR 2019 paper.


We reformulate the problem of line segment detection (LSD) as a coupled region coloring problem. Based on this new formulation, we can address the problem of LSD with convolutional neural networks.
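Concretely, the attraction field assigns to every pixel the 2-D offset pointing to the closest point on its nearest line segment, turning the sparse set of segments into a dense map a network can regress. A minimal NumPy sketch of how such a map can be constructed (illustrative only, not the repository's implementation):

```python
import numpy as np

def attraction_field(h, w, segments):
    """Return an (h, w, 2) map of (dx, dy) offsets from each pixel to the
    closest point on its nearest line segment.

    segments: float array of shape (K, 4) holding (x1, y1, x2, y2) endpoints.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2).astype(np.float64)

    a = segments[:, :2][None]                 # (1, K, 2) segment start points
    b = segments[:, 2:][None]                 # (1, K, 2) segment end points
    ab = b - a
    denom = np.maximum((ab ** 2).sum(-1), 1e-12)
    # Parameter of the orthogonal projection, clamped so the closest
    # point stays on the segment rather than its supporting line.
    t = np.clip(((pixels - a) * ab).sum(-1) / denom, 0.0, 1.0)
    closest = a + t[..., None] * ab           # (HW, K, 2)
    offsets = closest - pixels                # vector from pixel to segment
    dists = np.linalg.norm(offsets, axis=-1)
    nearest = dists.argmin(axis=1)            # nearest segment per pixel
    field = offsets[np.arange(len(nearest)), nearest]
    return field.reshape(h, w, 2)
```

Each segment thereby "colors" the region of pixels attracted to it, which is the coupled region coloring view described above.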


F-Measure and FPS

| Methods | Wireframe Dataset | YorkUrban Dataset | FPS |
| --- | --- | --- | --- |
| LSD | 0.647 | 0.591 | 19.6 |
| MCMLSD | 0.566 | 0.564 | 0.2 |
| Linelet | 0.644 | 0.585 | 0.14 |
| Wireframe Parser | 0.728 | 0.627 | 2.24 |
| Ours (U-Net) | 0.752 | 0.639 | 10.3 |
| Ours (a-trous) | 0.773 | 0.646 | 6.6 |

Precision and Recall Curves


Check for installation instructions.

1. Data preparation

1.1. Downloading data

Please follow the links above to download the Wireframe and YorkUrban datasets. For the Wireframe dataset, we only need the archive that contains the images and line segment annotations for training and testing.

Once the files are downloaded, please unzip them into <AFM_root>/data/wireframe_raw and <AFM_root>/data/york_raw respectively. The structures of the wireframe_raw and york_raw folders are as follows:

    wireframe_raw/
    - pointlines/*.pkl
    - train.txt
    - test.txt

    york_raw/
    - filename0_rgb.png
    - filename0.mat
    - ...
    - filename{N}_rgb.png
    - filename{N}.mat
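As a rough illustration of how the wireframe_raw layout can be consumed: the split files list the annotation pickles, which can then be loaded one by one. The exact listing format inside train.txt and the schema of the .pkl files are assumptions here; check the Wireframe dataset's own documentation.

```python
import pickle
from pathlib import Path

def load_split(root, split="train"):
    """Yield the annotation objects listed in <root>/<split>.txt.

    Assumes each line of train.txt / test.txt names a pickle file under
    pointlines/ verbatim; the keys stored inside each .pkl follow the
    Wireframe dataset's own annotation format, which this sketch does
    not assume anything about.
    """
    root = Path(root)
    for name in (root / f"{split}.txt").read_text().split():
        with open(root / "pointlines" / name, "rb") as f:
            yield pickle.load(f)
```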

1.2. Data Pre-processing

Please run the following commands:

cd <AFM_root>/data/

2. Hyper-parameter configurations

We use YACS to manage the hyperparameters. Our configuration files for the U-Net (afm_unet.yaml) and the a-trous Residual U-Net (afm_atrous.yaml) are saved in the "<AFM_root>/experiments" folder.

In each yaml file, SAVE_DIR specifies where the network weights and experimental results are stored: the weights are saved in SAVE_DIR/weights and the results in SAVE_DIR/results/DATASET_name.

The TEST configuration controls how results are output during the testing phase. We currently provide two output modes, "display" and "save"; you can customize further output methods in modeling/output/.
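The defaults-plus-overrides pattern that YACS implements (a base configuration merged with a per-experiment yaml file) can be sketched with plain dicts. The keys below echo the SAVE_DIR and TEST options mentioned above; the actual defaults live in the repository's config package, not here.

```python
def merge_config(defaults, overrides):
    """Recursively merge override values into a copy of the defaults --
    the same pattern YACS applies when an experiment yaml is read.
    Keys absent from the defaults are rejected, mirroring YACS's
    strictness about unknown options."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if key not in merged:
            raise KeyError(f"unknown config key: {key}")
        if isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical defaults echoing the options described above.
DEFAULTS = {"SAVE_DIR": "experiments/unet", "TEST": {"OUTPUT_MODE": "save"}}
cfg = merge_config(DEFAULTS, {"TEST": {"OUTPUT_MODE": "display"}})
```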

3. Inference with pretrained models

The pretrained models for U-Net and atrous Residual U-Net can be downloaded from this link. Please place the weights into "<AFM_root>/experiments/unet/weight" and "<AFM_root>/experiments/atrous/weight" respectively.

  • For testing, please run the following command:

    python --config-file experiments/afm_atrous.yaml --gpu 0

4. Training

Please run the following command

    python --config-file experiments/afm_atrous.yaml --gpu 0

to train a network. To speed up training, our code saves the generated attraction field maps into <AFM_root>/data/wireframe/.cache the first time you run the training code.
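The caching step can be pictured as a generic compute-once helper: look for a saved array on disk, and only generate and store it on a miss. The function and file names here are illustrative, not the repository's actual cache logic.

```python
import numpy as np
from pathlib import Path

def cached(cache_dir, key, compute):
    """Load <cache_dir>/<key>.npy if it exists; otherwise call compute(),
    save the result for subsequent runs, and return it."""
    path = Path(cache_dir) / f"{key}.npy"
    if path.exists():
        return np.load(path)
    result = compute()
    path.parent.mkdir(parents=True, exist_ok=True)
    np.save(path, result)
    return result
```

On the first training run every attraction field map is generated and written out; later epochs and runs then read the .npy files instead of recomputing them.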

5. Citations

If you find our work useful in your research, please consider citing:

    @inproceedings{xue2019learning,
        title     = {Learning Attraction Field Representation for Robust Line Segment Detection},
        author    = {Nan Xue and Song Bai and Fudong Wang and Gui-Song Xia and Tianfu Wu and Liangpei Zhang},
        booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
        year      = {2019}
    }