Paper: Developing a Graph Convolution-Based Analysis Pipeline for Multi-Modal Neuroimage Data: An Application to Parkinson's Disease #477

Merged
merged 36 commits on Jul 3, 2019
Changes from 1 commit
Commits
36 commits
8059554
working copy of first draft paper
xtianmcd May 13, 2019
68bfe9a
Rename papers/neuro_analy_pipeline_scipy2019.rst to papers/christian_…
xtianmcd May 13, 2019
6ca5483
figures 1 and 3
xtianmcd May 13, 2019
5ebf1d9
resize fig 3
xtianmcd May 13, 2019
d7a1a65
Update neuro_analy_pipeline_scipy2019.rst
xtianmcd May 13, 2019
a64e703
sizing is incorrect
xtianmcd May 13, 2019
3e48bd5
resize fig 3
xtianmcd May 13, 2019
710c776
Update neuro_analy_pipeline_scipy2019.rst
xtianmcd May 13, 2019
99c7907
initial draft
xtianmcd May 23, 2019
a82fc9a
fixes re non-build
xtianmcd May 23, 2019
dcd2de2
fixes re non-build
xtianmcd May 23, 2019
f7c0758
Merge branch '2019' of https://github.com/xtianmcd/scipy_proceedings …
xtianmcd May 23, 2019
e177406
test build
xtianmcd May 23, 2019
7ec4247
fix build warning
xtianmcd May 23, 2019
b6d98a4
fix footnote warning (attempt)
xtianmcd May 23, 2019
68f5159
fix warning re footnotes (attempt)
xtianmcd May 23, 2019
8dd3052
attempt to debug bib file
xtianmcd May 23, 2019
59c1657
fix re non-build
xtianmcd May 23, 2019
468a90a
added raw latex re bibfile
xtianmcd May 23, 2019
1e93d00
escape hyphen in bibfile
xtianmcd May 23, 2019
d631215
remove hyphen in bibfile
xtianmcd May 23, 2019
b150fad
attempt to move reference above appendix A
xtianmcd May 23, 2019
257145d
reorganization and revision re: background/ related works/ methods se…
xtianmcd Jun 3, 2019
6bdbeb9
correction of issue in in-text math (\textbf{W_{a}} needed to be \tex…
xtianmcd Jun 3, 2019
4fc0590
revisions, results, discussion, conclusion
xtianmcd Jun 10, 2019
18073b3
moved footnotes
xtianmcd Jun 10, 2019
ae9ad82
removed erroneous sentence
xtianmcd Jun 10, 2019
cafa760
change AUC scores from % to decimal; correct name of kernel used in S…
xtianmcd Jun 10, 2019
9811670
fixed malformed table
xtianmcd Jun 13, 2019
4bc549d
made suggested revisions
xtianmcd Jun 24, 2019
ee1afd5
removed appendices
xtianmcd Jun 29, 2019
c0cebb7
minor change
xtianmcd Jun 29, 2019
7a841c3
distributed footnotes
xtianmcd Jun 30, 2019
36bb604
removed raw latex line
xtianmcd Jul 1, 2019
a53c7ec
shortened GCN background section to accommodate longer References sec…
xtianmcd Jul 1, 2019
33bdd28
smaller image to reduce paper length
xtianmcd Jul 1, 2019
correction of issue in in-text math (\textbf{W_{a}} needed to be \textbf{W}_{a}), which had not been an issue previously...
xtianmcd committed Jun 3, 2019
commit 6bdbeb9ed6b2f62f2d358f9f5a4dd9ea9c7c4b00
@@ -147,23 +147,29 @@ Graph Attention Networks
Recent development of attention-based mechanisms allows for a weighting of each vertex based on its individual contribution during learning, thus facilitating whole-graph classifications.
In order to convert the task from classifying each node to classifying the whole graph, the features on each vertex must be pooled to generate a single feature vector for each input. The *self-attention* mechanism, widely used to compute a concise representation of a signal sequence, has been used to effectively compute the importance of graph vertices in a neighborhood :cite:`VCCRLB2018`. This allows for a weighted sum of the vertices' features during pooling.

- :cite:`VCCRLB2018` use a single-layer feedforward neural network as an attention mechanism :math:`a` to compute *attention coefficients e* across pairs of vertices in a graph. For a given vertex :math:`v_{i}`, the attention mechanism attends over its first-order neighbors :math:`v_{j}`; :math:`e_{ij} = a(\textbf{W_{a}}h_{i}, \textbf{W_{a}}h_{j})`, where :math:`h_{i}` and :math:`h_{j}` are the features on vertices :math:`v_{i}` and :math:`v_{j}`, and :math:`\textbf{W_{a}}` is a shared weight matrix applied to each vertex's features. :math:`e_{ij}` is normalized via the softmax function to compute :math:`a_{ij}`: :math:`a_{ij} = softmax(e_{ij}) = exp(e_{ij}) / \sum_{k \in \mathcal{N}_{i}} exp(e_{ik})`, where :math:`\mathcal{N}_{i}` is the neighborhood of vertex :math:`v_{i}`. The new features at :math:`v_{i}` are obtained via linear combination of the original features and the normalized attention coefficients, wrapped in a nonlinearity :math:`\sigma`: :math:`h_{i}' = \sigma(\sum_{j \in \mathcal{N}_{i}} a_{ij} \textbf{W_{a}}h_{j})`. “Multi-head” attention can be used, yielding :math:`K` independent attention mechanisms that are concatenated (or averaged for the final layer). This helps to stabilize the self-attention learning process.
+ :cite:`VCCRLB2018` use a single-layer feedforward neural network as an attention mechanism :math:`a` to compute *attention coefficients e* across pairs of vertices in a graph. For a given vertex :math:`v_{i}`, the attention mechanism attends over its first-order neighbors :math:`v_{j}`:

.. math::
-    h_{i} = ||_{k=1}^{K} \sigma(\sum_{j \in \mathcal{N}_{i}} a_{ij}^{k} \textbf{W_{a}}^{k} h_{j}),
+    e_{ij} = a( \textbf{W}_{a}h_{i}, \textbf{W}_{a}h_{j}),
+ where :math:`h_{i}` and :math:`h_{j}` are the features on vertices :math:`v_{i}` and :math:`v_{j}`, and :math:`\textbf{W}_{a}` is a shared weight matrix applied to each vertex's features. :math:`e_{ij}` is normalized via the softmax function to compute :math:`a_{ij}`: :math:`a_{ij} = softmax(e_{ij}) = exp(e_{ij}) / \sum_{k \in \mathcal{N}_{i}} exp(e_{ik})`, where :math:`\mathcal{N}_{i}` is the neighborhood of vertex :math:`v_{i}`. The new features at :math:`v_{i}` are obtained via linear combination of the original features and the normalized attention coefficients, wrapped in a nonlinearity :math:`\sigma`: :math:`h_{i}' = \sigma(\sum_{j \in \mathcal{N}_{i}} a_{ij} \textbf{W}_{a}h_{j})`. “Multi-head” attention can be used, yielding :math:`K` independent attention mechanisms that are concatenated (or averaged for the final layer). This helps to stabilize the self-attention learning process.

+ .. math::
+    h_{i} = ||_{k=1}^{K} \sigma(\sum_{j \in \mathcal{N}_{i}} a_{ij}^{k} \textbf{W}_{a}^{k} h_{j}),
or

.. math::
-    h_{final} = \sigma(\frac{1}{K} \sum_{k=1}^{K} \sum_{j \in \mathcal{N}_{i}} a_{ij}^{k}\textbf{W_{a}}^{k} h_{j}).
+    h_{final} = \sigma(\frac{1}{K} \sum_{k=1}^{K} \sum_{j \in \mathcal{N}_{i}} a_{ij}^{k}\textbf{W}_{a}^{k} h_{j}).
We employ a PyTorch implementation [15]_ of :cite:`VCCRLB2018`'s :code:`GAT` class to construct a graph attention network, learning attention coefficients as

.. math::
-    a_{ij} = \frac{exp(LeakyReLU(a^{T}[\textbf{W_{a}}h_{i}||\textbf{W_{a}}h_{j}]))}{\sum_{k \in \mathcal{N}_{i}} exp(LeakyReLU(a^{T}[\textbf{W_{a}}h_{i}||\textbf{W_{a}}h_{k}]))},
+    a_{ij} = \frac{exp(LeakyReLU(a^{T}[\textbf{W}_{a}h_{i}||\textbf{W}_{a}h_{j}]))}{\sum_{k \in \mathcal{N}_{i}} exp(LeakyReLU(a^{T}[\textbf{W}_{a}h_{i}||\textbf{W}_{a}h_{k}]))},
where :math:`||` is concatenation.
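To make the attention computation above concrete, the following is a minimal single-head sketch in PyTorch. It is only an illustration of the equations in this section, not the :code:`GAT` implementation cited as [15]_; the class name :code:`SingleHeadGraphAttention`, its parameters, and the assumption of a dense binary adjacency matrix with self-loops are hypothetical choices made for the example.

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SingleHeadGraphAttention(nn.Module):
        """Illustrative single-head graph attention layer (sketch only).

        Computes e_ij = LeakyReLU(a^T [W_a h_i || W_a h_j]), normalizes with a
        softmax over each vertex's neighborhood N_i, and returns
        h_i' = sigma(sum_j a_ij W_a h_j).
        """

        def __init__(self, in_features, out_features, negative_slope=0.2):
            super().__init__()
            self.W_a = nn.Linear(in_features, out_features, bias=False)  # shared weight matrix W_a
            self.a = nn.Parameter(torch.empty(2 * out_features, 1))      # attention vector a
            nn.init.xavier_uniform_(self.a)
            self.leaky_relu = nn.LeakyReLU(negative_slope)

        def forward(self, h, adj):
            # h: (N, in_features) vertex features; adj: (N, N) binary adjacency with self-loops
            Wh = self.W_a(h)                                  # (N, out_features)
            N = Wh.size(0)
            # All pairwise concatenations [W_a h_i || W_a h_j] -> (N, N, 2 * out_features)
            Wh_i = Wh.unsqueeze(1).expand(N, N, -1)
            Wh_j = Wh.unsqueeze(0).expand(N, N, -1)
            e = self.leaky_relu(torch.cat([Wh_i, Wh_j], dim=-1) @ self.a).squeeze(-1)  # e_ij
            # Restrict the softmax to each vertex's neighborhood by masking non-edges
            e = e.masked_fill(adj == 0, float("-inf"))
            attn = F.softmax(e, dim=1)                        # a_ij
            return torch.sigmoid(attn @ Wh)                   # h_i' (sigmoid as the nonlinearity sigma)

Multi-head attention, as in the equations above, would instantiate :math:`K` such layers and concatenate their outputs (or average them in the final layer).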
