indritnallbani/Res-V-GAE

Res(V)GAE: Going Deeper with Residual Connections

In this paper, we study the effects of adding residual connections to graph autoencoders with multiple graph convolutional layers and propose Residual (Variational) Graph Autoencoder (Res(V)GAE), a deep (variational) graph autoencoder model with multiple residual connections. We show that residual connections improve the average precision of the graph autoencoders when we increase the number of graph convolutional layers. Experimental results suggest that our proposed model with residual connections outperforms the models without residual connections for the link prediction task.

Our contribution is twofold: first, we study the effectiveness of adding residual connections to deep graph models; second, we introduce our own deep learning model, Res-VGAE. We report link prediction results for models with one to eight layers and show improved AP and AUC scores compared with similar models without residual connections. The datasets used are Cora, Citeseer, and Pubmed.

Model Architecture

Model architecture of Res(V)GAE. Residual connections start after the first Hidden Layer (HL), since the input and output sizes of layers with residual connections must match. The encoder takes the adjacency matrix A and the feature matrix X as inputs and outputs the node embeddings Z. The decoder takes the embedding matrix Z as input and outputs the reconstructed adjacency matrix Â. The blue blocks indicate the graph convolutional layers that embed the node feature vectors into 32-dimensional hidden representations. Similarly, the yellow blocks are the graph convolutional layers that embed the hidden layer features into the 16-dimensional output matrix Z. The upper and lower branches of the encoder represent the variational graph autoencoder and graph autoencoder architectures, respectively.
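The forward pass described above can be sketched in plain NumPy as follows. This is illustrative only, not the repository's implementation; the function names, the ReLU activation, and the exact layer shapes (input -> 32 -> ... -> 32 -> 16) are assumptions based on the description.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops, as in standard GCNs:
    # A_norm = D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, X, W, residual=False):
    # One graph convolution: ReLU(A_norm X W), optionally with a
    # residual connection (input and output sizes must match).
    H = np.maximum(A_norm @ X @ W, 0.0)
    if residual:
        H = H + X
    return H

def encode(A, X, weights):
    # The first hidden layer changes dimensionality (features -> 32),
    # so residual connections start only after it.
    A_norm = normalize_adj(A)
    H = gcn_layer(A_norm, X, weights[0])
    for W in weights[1:-1]:
        H = gcn_layer(A_norm, H, W, residual=True)  # 32 -> 32 residual blocks
    # Output embedding layer (32 -> 16), no activation.
    return A_norm @ H @ weights[-1]

def decode(Z):
    # Inner-product decoder: A_rec = sigmoid(Z Z^T)
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))
```

The sketch shows the non-variational (GAE) branch; the variational branch would instead produce mean and log-variance embeddings and sample Z from them.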


Results

We report average precision (AP) and area under the ROC curve (AUC) scores averaged over 10 runs with random train, validation, and test splits of the same sizes; all models are trained for 200 epochs. The validation and test sets contain 5% and 10% of the total edges, respectively. The embedding dimension of the node features is 32 for the hidden layers and 16 for the output layer. The models are optimized with the Adam optimizer and a learning rate of 0.01.
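The 5% / 10% edge split described above can be sketched as follows. This is an illustrative helper under assumed conventions (undirected graph, each edge counted once via the upper triangle), not the repository's own splitting code.

```python
import numpy as np

def split_edges(A, val_frac=0.05, test_frac=0.10, seed=0):
    # Randomly split the undirected edges of adjacency matrix A into
    # train / validation / test sets, following the 5% / 10% proportions
    # used in the experiments. Each undirected edge is counted once by
    # taking only the upper triangle of A.
    rng = np.random.default_rng(seed)
    rows, cols = np.triu_indices_from(A, k=1)
    edges = np.stack([rows, cols], axis=1)[A[rows, cols] > 0]
    perm = rng.permutation(len(edges))
    n_val = int(len(edges) * val_frac)
    n_test = int(len(edges) * test_frac)
    val = edges[perm[:n_val]]
    test = edges[perm[n_val:n_val + n_test]]
    train = edges[perm[n_val + n_test:]]
    return train, val, test
```

In practice, link prediction evaluation also samples an equal number of non-edges as negatives for the validation and test sets, which this sketch omits.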

Experiments indicate that all proposed shallow models achieve similar average precision scores: with a single layer, all models embed the node features in a very similar way. The scores diverge as the networks get deeper. With eight layers, models with residual connections achieve higher average precision than models without them.
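The AP and AUC metrics used above can be computed from predicted link scores as sketched below. These are standard rank-based formulations (equivalent to scikit-learn's `average_precision_score` and `roc_auc_score` for distinct scores), written here for illustration rather than taken from the repository.

```python
import numpy as np

def average_precision(y_true, scores):
    # AP: mean of precision@k taken at the rank of each true positive,
    # with candidates sorted by descending score.
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    y = y[np.argsort(-s)]
    precision_at_k = np.cumsum(y) / (np.arange(len(y)) + 1)
    return float((precision_at_k * y).sum() / y.sum())

def roc_auc(y_true, scores):
    # Rank-based AUC (Mann-Whitney U): the probability that a random
    # positive is scored above a random negative. Assumes distinct
    # scores for simplicity (no tie correction).
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    ranks = np.empty(len(s), dtype=float)
    ranks[np.argsort(s)] = np.arange(1, len(s) + 1)
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return float((ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))
```

For link prediction, `y_true` marks held-out edges (1) and sampled non-edges (0), and `scores` are the corresponding entries of the reconstructed adjacency matrix.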

[Table: AP and AUC scores for models of increasing depth on Cora, Citeseer, and Pubmed]
