
[English] [日本語]

This page is under construction...

Mesh Convolutional Network

An implementation of a geometry-aware mesh convolutional network, based on "Semi-Supervised Classification with Graph Convolutional Networks" [ICLR 2017].

Environment

torch 1.7.0
torch-geometric 1.7.1 
scipy 1.6.2
numpy 1.19.2

Usage

import torch.nn as nn

from util.layer import MeshConv

class MeshNet(nn.Module):
    def __init__(self, mesh):
        super(MeshNet, self).__init__()
        # Five MeshConv blocks, each followed by batch normalization and
        # LeakyReLU, then a linear head mapping 16 channels to a 3D position.
        self.model = nn.Sequential(
            MeshConv(6, 32, mesh),
            nn.BatchNorm1d(32),
            nn.LeakyReLU(),
            MeshConv(32, 128, mesh),
            nn.BatchNorm1d(128),
            nn.LeakyReLU(),
            MeshConv(128, 128, mesh),
            nn.BatchNorm1d(128),
            nn.LeakyReLU(),
            MeshConv(128, 32, mesh),
            nn.BatchNorm1d(32),
            nn.LeakyReLU(),
            MeshConv(32, 16, mesh),
            nn.BatchNorm1d(16),
            nn.LeakyReLU(),
            nn.Linear(16, 3),
        )

    def forward(self, x):
        # x: (n_verts, in_channels); returns (n_verts, 3).
        out = self.model(x)
        return out
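A minimal usage sketch follows. The `mesh` argument and the exact feature layout are assumptions here: this README does not show how the mesh object is loaded, so it is marked as coming from elsewhere, and the 6-dimensional input matches the experiment described below.

import torch

# Sketch only: `mesh` must be whatever object util.layer.MeshConv expects;
# its construction is not shown in this README, so assume it was loaded
# elsewhere (e.g., from an .obj file).
n_verts = 1000                   # placeholder vertex count
x = torch.randn(n_verts, 6)      # one 6-dimensional feature per vertex

net = MeshNet(mesh)              # `mesh`: hypothetical, loaded elsewhere
out = net(x)                     # (n_verts, 3), e.g., restored vertex positions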

Review of GCNConv [ICLR 2017]

Graph Convolutional Layer

$$ \begin{align} f(A,X;W) =& \sigma((I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}}) X W)\\ % =& \sigma((2I_N - \hat{L}) X W)\\ \rightarrow& \sigma(\hat{D}^{-\frac{1}{2}} \hat{A} \hat{D}^{-\frac{1}{2}} X W) \end{align} $$

Variable

  • $A \in \{0, 1\}^{n \times n}$: Adjacency matrix
  • $D \in \mathbb{R}^{n \times n}$: Degree matrix
  • $L \in \mathbb{R}^{n \times n}$: Graph Laplacian matrix
  • $L^{sym} = D^{-\frac{1}{2}} L D^{-\frac{1}{2}} = I_N - D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$: Symmetrically normalized graph Laplacian
  • $\hat{L} = \frac{2}{\lambda_{max}} L^{sym} - I_N$: Scaled graph Laplacian
  • $X \in \mathbb{R}^{n \times d}$: Vertex feature matrix (input to each layer)
  • $W \in \mathbb{R}^{d \times d^{\prime}}$: Learnable weight matrix
  • $\sigma$: Activation function (e.g., ReLU, softmax)

Intuitive understanding

Renormalization

$\hat{D}^{-\frac{1}{2}} \hat{A} \hat{D}^{-\frac{1}{2}} X$

$\hat{A} = I + A$

$\hat{D}_{ii} = \sum_{j} \hat{A}_{ij}$
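As a concrete illustration, the renormalization trick fits in a few lines of dense NumPy. This is a sketch for a tiny hand-built graph, not code from this repository; a practical implementation would use sparse matrices.

import numpy as np

# Adjacency matrix of a 3-vertex path graph 0-1-2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

A_hat = np.eye(3) + A                        # \hat{A} = I + A (add self-loops)
d_hat = A_hat.sum(axis=1)                    # \hat{D}_ii = sum_j \hat{A}_ij
D_inv_sqrt = np.diag(1.0 / np.sqrt(d_hat))

# Renormalized propagation operator \hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2}.
P = D_inv_sqrt @ A_hat @ D_inv_sqrt

# One propagation step: each output row mixes a vertex's own feature
# with its neighbors' features, weighted by degree.
X = np.random.randn(3, 6)
print(P @ X)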

Extension to Mesh Convolution

Replace the graph Laplacian, which encodes only the connectivity of the mesh, with the mesh Laplacian, which also encodes its geometry (a sketch follows the variable list below).

Variable

  • $L \in \mathbb{R}^{n \times n}$: Mesh Laplacian matrix
  • $D_{ii} = L_{ii}$: Degree matrix (real-valued, unlike the integer degrees of a plain graph)
  • $A = D - L$: Adjacency matrix (real-valued edge weights)
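The split above suggests the following sketch for turning a mesh Laplacian into the renormalized propagation operator. Which Laplacian the repository uses is not stated here; the cotangent Laplacian is one common choice, and positive edge weights are assumed so that the square roots are well defined.

import numpy as np
import scipy.sparse as sp

def renormalized_operator(L):
    """Build \hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} from a mesh Laplacian L.

    Assumes L_ii = sum_j w_ij and L_ij = -w_ij with positive weights w_ij
    (e.g., a cotangent Laplacian on a well-shaped mesh -- an assumption,
    not something this README specifies).
    """
    L = sp.csr_matrix(L)
    D = sp.diags(L.diagonal())            # D_ii = L_ii
    A = D - L                             # continuous adjacency, A_ij = w_ij
    A_hat = sp.eye(L.shape[0]) + A        # \hat{A} = I + A
    d_hat = np.asarray(A_hat.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d_hat))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt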

Renormalization

$\hat{A} = I + A$

$\hat{D}_{ii} = \sum_{j} \hat{A}_{ij}$

Mesh Convolution

$$ \begin{align} f(A,X;W) =& \sigma((I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}}) X W)\\ % =& \sigma((2I_N - \hat{L}) X W)\\ \rightarrow& \sigma(\hat{D}^{-\frac{1}{2}} \hat{A} \hat{D}^{-\frac{1}{2}} X W) \end{align} $$
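Putting the pieces together, one mesh-convolution layer is a single matrix product per forward pass. The class below is an illustrative stand-in for `util.layer.MeshConv`, whose real signature and internals may differ; the activation $\sigma$ is applied outside the layer, as in the MeshNet example above.

import torch
import torch.nn as nn

class SimpleMeshConv(nn.Module):
    """Computes \hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} X W (activation left to the caller).

    `prop` is a precomputed (n, n) tensor holding the renormalized
    operator; it depends only on the mesh, so it can be built once
    (e.g., with renormalized_operator above) and reused.
    """
    def __init__(self, in_channels, out_channels, prop):
        super().__init__()
        self.register_buffer("prop", prop)  # fixed, mesh-dependent operator
        self.linear = nn.Linear(in_channels, out_channels, bias=False)

    def forward(self, x):                   # x: (n_verts, in_channels)
        return self.prop @ self.linear(x)   # transform features, then propagate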

Experiment: Mesh Restoration

Settings

Assign a 6-dimensional feature to each vertex and train the network to restore the original mesh from a smoothed one; compare the performance of GCNConv and MeshConv. A training sketch follows the settings below.

  • Learning rate: 0.001
  • Epoch: 1000
  • Metric $\epsilon$: MSE loss of vertex positions
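Under these settings the training loop might look like the sketch below. `smoothed_feats` (the 6-dimensional per-vertex input from the smoothed mesh) and `gt_pos` (ground-truth vertex positions) are hypothetical names, as is the way `mesh` is obtained.

import torch
import torch.nn as nn

net = MeshNet(mesh)                      # `mesh`: hypothetical, loaded elsewhere
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
criterion = nn.MSELoss()                 # epsilon: MSE of vertex positions

for epoch in range(1000):
    optimizer.zero_grad()
    pred_pos = net(smoothed_feats)       # (n_verts, 3) restored positions
    loss = criterion(pred_pos, gt_pos)
    loss.backward()
    optimizer.step()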

Results

  • MeshConv achieves a lower restoration error than GCNConv on both meshes.
  • MeshConv restores the bumpy-sphere (bottom row below) more accurately than GCNConv.

Restoration error $\epsilon$ (the input and ground-truth columns of the original table were images):

Mesh           GCNConv    MeshConv
Dodecahedron   0.008221   0.007452
Bumpy-sphere   0.09511    0.05705

Loss transition

(Loss-transition plots for Dodecahedron and Bumpy-sphere.)
