PRUNE: Preserving Proximity and Global Ranking for Network Embedding



PRUNE is an unsupervised generative approach for network embedding.

PRUNE satisfies four design properties: scalability, asymmetry, unity, and simplicity.

The approach uses a multi-task Siamese neural network to connect node embeddings with our objective: preserving both the global node ranking and the local proximity of nodes.

A deeper analysis of the proposed architecture and objective can be found in the paper:

PRUNE: Preserving Proximity and Global Ranking for Network Embedding
Yi-An Lai+, Chin-Chi Hsu+, Wen-Hao Chen, Ming-Han Feng, and Shou-De Lin
Advances in Neural Information Processing Systems (NIPS), 2017
+: These authors contributed equally to this paper.

This repo contains the reference implementation of PRUNE.



Run PRUNE on the sample graph:

python src/ --inputgraph sample/graph.edgelist


Check the optional arguments, such as the learning rate, number of epochs, and GPU usage, with:

python src/ --help


The supported graph format is an edgelist:

node_from node_to

The input graph is treated as directed.
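For illustration, the edgelist format above can be parsed with a few lines of plain Python. This is a minimal sketch, not part of PRUNE's code; the node IDs used in the example are hypothetical:

```python
# Parse a whitespace-separated edgelist into directed (source, target) pairs.
# Each non-empty line is expected to contain "node_from node_to".
def load_edgelist(lines):
    edges = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        src, dst = line.split()[:2]
        edges.append((int(src), int(dst)))
    return edges

sample = ["0 1", "1 2", "2 0"]
print(load_edgelist(sample))  # [(0, 1), (1, 2), (2, 0)]
```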


The output is a comma-separated table of embeddings, where the k-th row holds the k-th node's embedding:

node_0  embed_dim1, embed_dim2, ...
node_1  embed_dim1, embed_dim2, ...
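The output table can be read back into a dictionary mapping each node to its embedding vector. The sketch below assumes the layout shown above (node label, then comma-separated values); the labels and numbers are hypothetical:

```python
# Parse the embedding table into {node_label: [float, ...]}.
def load_embeddings(lines):
    table = {}
    for line in lines:
        if not line.strip():
            continue  # skip blank lines
        node, values = line.split(None, 1)  # split label from the value list
        table[node] = [float(v) for v in values.split(",")]
    return table

rows = ["node_0  0.1, 0.2, 0.3", "node_1  0.4, 0.5, 0.6"]
emb = load_embeddings(rows)
print(emb["node_1"])  # [0.4, 0.5, 0.6]
```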


Install all dependencies:

pip install -r requirements.txt

This implementation is built on TensorFlow 1.1.0. If you are using macOS or encounter other problems, see the detailed TensorFlow installation guide at:


If you find PRUNE useful in your research, please consider citing the paper:

PRUNE: Preserving Proximity and Global Ranking for Network Embedding, NIPS 2017.


If you have any questions, please contact us: Yi-An Lai ( or Chin-Chi Hsu (