In this work, we propose a novel generalized aggregation function suited for graph convolutions. We show that this function covers all commonly used aggregations, is fully differentiable, and can be learned in an end-to-end fashion. We also show that modifying current GCN skip connections and introducing a novel message normalization layer (MsgNorm) further enhances performance on several benchmarks. By combining the generalized aggregations, modified skip connections, and message normalization, we achieve state-of-the-art (SOTA) performance on four Open Graph Benchmark (OGB) datasets. [paper]

Overview

The definition of generalized message aggregation functions helps us find a family of differentiable, permutation-invariant aggregators. To include the Mean and Max aggregations in this function space, we propose two variants of generalized mean-max aggregation functions, SoftMax_Aggβ(·) and PowerMean_Aggp(·). Both can also be instantiated as a Min aggregator as β or p goes to −∞.
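As a concrete illustration, here is a minimal PyTorch sketch of the two aggregators over a dense stack of neighbor messages of shape (num_neighbors, channels). The function names and dense tensor layout are assumptions made for readability; the actual implementation operates on sparse graphs (e.g. with scatter operations).

```python
import torch

def softmax_agg(msgs: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    # SoftMax_Agg: softmax weights over the neighbor axis, per channel.
    # beta -> 0 recovers Mean, beta -> +inf recovers Max, beta -> -inf Min.
    weights = torch.softmax(beta * msgs, dim=0)
    return (weights * msgs).sum(dim=0)

def powermean_agg(msgs: torch.Tensor, p: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    # PowerMean_Agg: the p-th power mean over the neighbor axis.
    # p = 1 recovers Mean and p -> +inf approaches Max; messages are
    # clamped positive so the fractional power is well defined.
    msgs = msgs.clamp(min=eps)
    return msgs.pow(p).mean(dim=0).pow(1.0 / p)
```

Because β and p are plain scalars, both can be wrapped in torch.nn.Parameter and optimized together with the network weights, which is what makes the aggregation learnable end-to-end.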

DyResGEN

Learning curves of a 7-layer DyResGEN (DeeperGCN with dynamically learned aggregation parameters) with SoftMax_Aggβ(·) and MsgNorm.

Learning curves of a 7-layer DyResGEN with PowerMean_Aggp(·) and MsgNorm.
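For reference, below is a minimal sketch of message normalization following the paper's description of MsgNorm: the aggregated message is L2-normalized and then rescaled by the norm of the node's own features through a scaling factor s. The class and argument names are assumptions for illustration, not necessarily the repository's exact API.

```python
import torch
import torch.nn.functional as F

class MsgNorm(torch.nn.Module):
    def __init__(self, learn_msg_scale: bool = True):
        super().__init__()
        # s: the message scaling factor, learnable when learn_msg_scale is True
        self.msg_scale = torch.nn.Parameter(torch.ones(1), requires_grad=learn_msg_scale)

    def forward(self, x: torch.Tensor, msg: torch.Tensor) -> torch.Tensor:
        msg = F.normalize(msg, p=2, dim=1)         # unit-norm message per node
        x_norm = x.norm(p=2, dim=1, keepdim=True)  # each node's feature norm
        return msg * x_norm * self.msg_scale       # s * ||h_v|| * m_v / ||m_v||
```

The rescaled message is then combined with the node's features (e.g. summed and passed through an MLP) in the vertex update step.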

Results on OGB Datasets

| Dataset | Metric | Test score |
| --- | --- | --- |
| ogbn-products | Accuracy | 0.8098 ± 0.0020 |
| ogbn-proteins | ROC-AUC | 0.8580 ± 0.0017 |
| ogbn-arxiv | Accuracy | 0.7192 ± 0.0016 |
| ogbg-molhiv | ROC-AUC | 0.7858 ± 0.0117 |
| ogbg-molpcba | AP | 0.2745 ± 0.0025 |
| ogbg-ppa | Accuracy | 0.7712 ± 0.0071 |

Requirements

Install the environment by running:

```bash
source deepgcn_env_install.sh
```

Citation

Please cite our paper if you find anything helpful:

```bibtex
@misc{li2020deepergcn,
    title={DeeperGCN: All You Need to Train Deeper GCNs},
    author={Guohao Li and Chenxin Xiong and Ali Thabet and Bernard Ghanem},
    year={2020},
    eprint={2006.07739},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```