DeepGCNs: Can GCNs Go as Deep as CNNs?

This repository contains the TensorFlow implementation of our ICCV 2019 (Oral) paper. In this work, we present new ways to successfully train very deep GCNs: we borrow concepts from CNNs, mainly residual/dense connections and dilated convolutions, and adapt them to GCN architectures. Through extensive experiments, we show the positive effect of these deep GCN frameworks.

[Project] [Paper] [Slides] [Tensorflow Code] [Pytorch Code]

Overview

We conduct extensive experiments to show how different components (#Layers, #Filters, #Nearest Neighbors, Dilation, etc.) affect DeepGCNs. We also provide ablation studies on different deep GCN variants (MRGCN, EdgeConv, GraphSAGE, and GIN).
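To make these components concrete, here is a minimal, self-contained NumPy sketch (not the repository's implementation) of a single graph-convolution layer combining a residual skip connection with dilated k-NN aggregation. The helper names dilated_knn and res_gcn_layer are hypothetical, and the max-aggregation is a simplification of the variants listed above:

import numpy as np

def dilated_knn(x, k=4, d=2):
    # Pairwise Euclidean distances between all points, shape (N, N).
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    # Sort neighbors by distance, drop self (column 0), keep k*d nearest.
    order = np.argsort(dist, axis=1)[:, 1:k * d + 1]
    # Dilation: keep every d-th neighbor, so k neighbors span a wider hood.
    return order[:, ::d]

def res_gcn_layer(x, w, k=4, d=2):
    idx = dilated_knn(x, k, d)      # (N, k) neighbor indices
    agg = x[idx].max(axis=1)        # (N, F): max over each dilated neighborhood
    out = np.maximum(agg @ w, 0.0)  # linear transform + ReLU
    return out + x                  # residual skip: x_{l+1} = F(x_l) + x_l

# Toy usage: 128 points with 8-dim features; square weight so the skip adds up.
pts = np.random.randn(128, 8).astype(np.float32)
w = 0.1 * np.random.randn(8, 8).astype(np.float32)
h = res_gcn_layer(pts, w, k=4, d=2)   # shape (128, 8)

Stacking such layers lets gradients flow through the identity path, which is what allows GCNs to go deep without vanishing-gradient problems.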

For further information and details, please contact Guohao Li and Matthias Müller.

Requirements

Conda Environment

To set up a conda environment with all necessary dependencies, run:

conda env create -f environment.yml
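The environment name is defined in the name field of environment.yml; assuming it is deepgcn, activate it before running any scripts:

conda activate deepgcn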

Getting Started

You will find detailed instructions on how to use our code for semantic segmentation of 3D point clouds in the sem_seg folder. Currently, we provide the following (a usage sketch follows the list):

  • Conda environment
  • Setup of S3DIS Dataset
  • Training code
  • Evaluation code
  • Several pretrained models
  • Visualization code
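As a quick orientation, a session might start as follows; the exact script names and options are documented inside sem_seg, so treat these commands as illustrative only (we assume an argparse-based train.py entry point, in which case --help prints the available options):

cd sem_seg
python train.py --help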

Citation

Please cite our paper if you find anything helpful:

@misc{li2019gcns,
    title={DeepGCNs: Can GCNs Go as Deep as CNNs?},
    author={Guohao Li and Matthias Müller and Ali Thabet and Bernard Ghanem},
    year={2019},
    eprint={1904.03751},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

License

MIT License

Acknowledgement

This code borrows heavily from PointNet and EdgeConv. We would also like to thank 3d-semantic-segmentation for the visualization code.
