A collection of SSL pretext tasks used in GNNs

A categorization and summarization of state-of-the-art Self-supervised Learning (SSL) on Graph Neural Networks (GNNs).

For more details, please refer to our book chapter (link coming soon). If you find this useful and use it in your research, please cite our work as follows for the time being:

@inproceedings{wang2021gnnssl,
  author = {Wang, Yu and Jin, Wei and Derr, Tyler},
  title  = {Graph Neural Networks: Self-supervised Learning},
  year   = {2021}
}

Contents

Categorization

Pretext tasks are constructed by leveraging different types of supervision information coming from different components of graphs. Based on the component that generates the supervision information, the pretext tasks prevalent in the literature are categorized into node-level, graph-level, and node-graph-level tasks. Node-level and graph-level pretext tasks can each leverage three types of information: graph structure, node features, or a hybrid of the two, where the hybrid setting may further incorporate information from the known training labels. The following figure summarizes this categorization as a tree, where each leaf node represents a specific type of pretext task, together with the corresponding references.

[Figure: categorization tree of SSL pretext tasks on GNNs]

Summarization

Node-level

For node-level pretext tasks, methods have been developed that use easily accessible data to generate pseudo labels for each node or relationships for each pair of nodes. The GNNs are then trained to predict the generated pseudo labels or to preserve the original node relationships in the learned node embeddings.

Node-structure-based

Different nodes have different structural properties in the graph topology, which can be measured by node degree, centrality, node partition, etc. Structure-based pretext tasks at the node level therefore align the node embeddings extracted from the GNNs with these structural properties, ensuring that this information is preserved while the GNNs learn the node embeddings; a minimal sketch of one such task follows.
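Below is a minimal PyTorch sketch of a degree-prediction pretext task: each node's pseudo label is its degree, and a regression head on top of the node embeddings is trained to predict it. The one-layer GCN-style encoder, the random toy graph, and all hyperparameters are illustrative assumptions, not a specific published model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
num_nodes, feat_dim, hid_dim = 100, 16, 32

# Toy undirected graph and node features (placeholders for real data).
adj = (torch.rand(num_nodes, num_nodes) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(0.0)                      # no self-edges in the toy graph
target = adj.sum(dim=1, keepdim=True)        # pseudo labels: node degrees
adj.fill_diagonal_(1.0)                      # add self-loops for propagation

# Symmetrically normalized adjacency D^{-1/2} (A + I) D^{-1/2}.
d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
adj_norm = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

x = torch.randn(num_nodes, feat_dim)
encoder = nn.Linear(feat_dim, hid_dim)       # weights of a one-layer GCN
head = nn.Linear(hid_dim, 1)                 # regression head for the pretext task
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=0.01)

for epoch in range(100):
    h = torch.relu(adj_norm @ encoder(x))    # propagate, transform, activate
    loss = nn.functional.mse_loss(head(h), target)  # align embeddings with degrees
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The same skeleton applies to other structural pseudo labels (centrality, partition IDs, etc.) by swapping the target and, for categorical targets, the loss.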

Node-feature-based

Node features are another important source of information that can be leveraged to provide extra supervision, for example by masking the features of some nodes and training the GNN to reconstruct them.
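As an illustration, the following sketch masks the features of a random subset of nodes and trains the encoder, through a reconstruction head, to recover them from the surrounding graph. It follows the common attribute-masking idea in spirit, reusing the toy setup from the degree-prediction sketch; it does not claim to match any single paper's formulation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
num_nodes, feat_dim, hid_dim = 100, 16, 32

# Same toy graph construction as in the degree-prediction sketch.
adj = (torch.rand(num_nodes, num_nodes) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(1.0)
d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
adj_norm = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
x = torch.randn(num_nodes, feat_dim)

encoder = nn.Linear(feat_dim, hid_dim)
decoder = nn.Linear(hid_dim, feat_dim)       # feature-reconstruction head
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=0.01)

for epoch in range(100):
    mask = torch.rand(num_nodes) < 0.15      # hide ~15% of the nodes' features
    x_in = x.clone()
    x_in[mask] = 0.0
    h = torch.relu(adj_norm @ encoder(x_in))
    # Supervision comes purely from the node features: reconstruct the
    # masked features from the embeddings of the surrounding graph.
    loss = nn.functional.mse_loss(decoder(h)[mask], x[mask])
    opt.zero_grad()
    loss.backward()
    opt.step()
```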

Node-hybrid

Instead of employing only the topology or only the feature information as extra supervision, some pretext tasks combine the two into a hybrid form of supervision, or even utilize information from the known training labels.
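One way to sketch such a hybrid task: train the encoder so that the similarity of two nodes' embeddings regresses the cosine similarity of their raw features, with graph structure entering through message passing and node features providing the pseudo labels. The setup below is an illustrative toy, not a specific published method.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
num_nodes, feat_dim, hid_dim, num_pairs = 100, 16, 32, 256

# Same toy graph construction as in the sketches above.
adj = (torch.rand(num_nodes, num_nodes) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(1.0)
d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
adj_norm = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
x = torch.randn(num_nodes, feat_dim)

encoder = nn.Linear(feat_dim, hid_dim)
opt = torch.optim.Adam(encoder.parameters(), lr=0.01)

for epoch in range(100):
    # Structure enters through message passing; features enter through the target.
    h = torch.relu(adj_norm @ encoder(x))
    i = torch.randint(num_nodes, (num_pairs,))
    j = torch.randint(num_nodes, (num_pairs,))
    target = nn.functional.cosine_similarity(x[i], x[j], dim=1)  # feature-based pseudo label
    pred = nn.functional.cosine_similarity(h[i], h[j], dim=1)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```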

Graph-level

Having presented the node-level SSL pretext tasks, in this section we focus on graph-level SSL pretext tasks, where the node embeddings coming from the GNNs are desired to encode information about graph-level properties.

Graph-structure-based

Graph-feature-based

Typically, graphs do not come with graph-level feature information; here, graph-level features refer to the graph embeddings obtained by applying a pooling layer over all node embeddings from the GNNs, as in the sketch below.
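A minimal sketch of this pooling step, assuming the common GNN-library convention that nodes from a batch of graphs are stacked row-wise and a `batch` vector records each node's graph; the shapes and the mean-pooling choice are illustrative.

```python
import torch

# Node embeddings from a GNN for a batch of two graphs, stacked row-wise;
# `batch` assigns each node to its graph.
h = torch.randn(7, 32)
batch = torch.tensor([0, 0, 0, 1, 1, 1, 1])

num_graphs = int(batch.max()) + 1
g = torch.zeros(num_graphs, h.size(1)).index_add_(0, batch, h)  # per-graph sums
g = g / torch.bincount(batch).unsqueeze(1)                      # mean pooling
print(g.shape)  # torch.Size([2, 32]) -- one embedding per graph
```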

Graph-hybrid

Node-graph-level

All of the above pretext tasks are designed based on either node-level or graph-level supervision. However, a final line of research combines these two sources of supervision to design pretext tasks, which we summarize in this section.
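A well-known instance of this design is Deep Graph Infomax (DGI), which contrasts node embeddings against a pooled graph-level summary. The sketch below follows that style with illustrative simplifications: a linear one-layer encoder, corruption by shuffling node features, and a bilinear discriminator, all on the same toy graph as the earlier sketches.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
num_nodes, feat_dim, hid_dim = 100, 16, 32

# Same toy graph construction as in the node-level sketches.
adj = (torch.rand(num_nodes, num_nodes) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(1.0)
d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
adj_norm = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
x = torch.randn(num_nodes, feat_dim)

encoder = nn.Linear(feat_dim, hid_dim)
W = nn.Parameter(torch.empty(hid_dim, hid_dim))  # bilinear discriminator
nn.init.xavier_uniform_(W)
opt = torch.optim.Adam(list(encoder.parameters()) + [W], lr=0.01)
bce = nn.BCEWithLogitsLoss()

for epoch in range(100):
    h = torch.relu(adj_norm @ encoder(x))            # node embeddings (positives)
    x_shuffled = x[torch.randperm(num_nodes)]        # corruption: shuffle features
    h_neg = torch.relu(adj_norm @ encoder(x_shuffled))
    s = torch.sigmoid(h.mean(dim=0))                 # graph-level summary vector
    pos = h @ W @ s                                  # scores for true (node, graph) pairs
    neg = h_neg @ W @ s                              # scores for corrupted pairs
    loss = bce(pos, torch.ones(num_nodes)) + bce(neg, torch.zeros(num_nodes))
    opt.zero_grad()
    loss.backward()
    opt.step()
```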

Applications and other work

References and Acknowledgements

For an up-to-date list of SSL GNN papers, please refer to awesome-self-supervised-gnn.
