Paper here --> https://arxiv.org/abs/1609.02907
GCN (Kipf & Welling, 2017) is a foundational paper in modern graph neural networks. Its objective is to generalize convolution to graphs. Several earlier methods performed graph convolutions via the spectral decomposition of the graph Laplacian, but these suffer from high computational cost (the eigendecomposition is expensive), and the slightest change in the graph alters the decomposition, disrupting both learning and prediction. GCN simplifies these spectral methods into an architecture for graph convolution that is cheaper, easier to implement, and more robust to changes in graph structure.
See this Distill article for the motivation behind graph convolutions: https://distill.pub/2021/understanding-gnns/
The GCN architecture boils down to a very simple chain of matrix multiplications; here is an example with a two-layer GCN:
$ Z=f(X, A)=\operatorname{softmax}\left(\hat{A} \operatorname{ReLU}\left(\hat{A} X W^{(0)}\right) W^{(1)}\right) $
where $ \hat{A}=\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} $, $ \tilde{A}=A+I_N $ is the adjacency matrix with added self-loops, $ \tilde{D} $ is its degree matrix ($ \tilde{D}_{ii}=\sum_j \tilde{A}_{ij} $), and $ W^{(0)}, W^{(1)} $ are the learnable weight matrices of the two layers.
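To make the formula concrete, here is a minimal PyTorch sketch of that two-layer forward pass (the class name `TwoLayerGCN`, the bias-free linear layers, and the dense-matrix normalization are my illustrative choices, not code from the paper or from the GCN folder):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerGCN(nn.Module):
    """Z = softmax(A_hat . ReLU(A_hat . X . W0) . W1), as in the equation above."""

    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        # W^(0) and W^(1); bias-free to match the formula exactly
        self.W0 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.W1 = nn.Linear(hidden_dim, num_classes, bias=False)

    @staticmethod
    def normalize_adjacency(A):
        # A_hat = D~^{-1/2} (A + I) D~^{-1/2}: add self-loops, then
        # symmetrically normalize by the resulting degree matrix D~
        A_tilde = A + torch.eye(A.size(0))
        d = A_tilde.sum(dim=1)
        D_inv_sqrt = torch.diag(d.pow(-0.5))
        return D_inv_sqrt @ A_tilde @ D_inv_sqrt

    def forward(self, X, A_hat):
        H = F.relu(A_hat @ self.W0(X))               # first propagation + ReLU
        return F.softmax(A_hat @ self.W1(H), dim=1)  # per-node class probabilities
```

Note that $ \hat{A} $ depends only on the graph, so it is computed once up front rather than inside every forward pass.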
In the GCN folder, I reimplement a simple two-layer GCN from scratch in PyTorch on the Zachary Karate Club dataset: https://en.wikipedia.org/wiki/Zachary%27s_karate_club
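For illustration, here is a hedged sketch of how the model above could be run on that graph, using networkx's built-in karate-club loader and identity features as node inputs (a common choice since the dataset has no node features; the hidden size and two-class output are assumptions):

```python
import networkx as nx
import torch

G = nx.karate_club_graph()                            # the 34-node club graph
A = torch.tensor(nx.to_numpy_array(G), dtype=torch.float32)
X = torch.eye(A.size(0))                              # one-hot identity features

model = TwoLayerGCN(in_dim=34, hidden_dim=16, num_classes=2)
A_hat = TwoLayerGCN.normalize_adjacency(A)
Z = model(X, A_hat)                                   # shape: (34, 2)
```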