
GraphVite - graph embedding at high speed and large scale


GraphVite is a general graph embedding engine, dedicated to high-speed and large-scale embedding learning in various applications.

GraphVite provides complete training and evaluation pipelines for 3 applications: node embedding, knowledge graph embedding, and visualization of graphs and high-dimensional data. It also includes 9 popular models, along with their benchmarks on a number of standard datasets.

[Gallery: Node Embedding · Knowledge Graph Embedding · Graph & High-dimensional Data Visualization]
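Node embedding methods such as DeepWalk are built on truncated random walks over the graph, which are then fed to a skip-gram model. As a rough illustration of the first step, here is a minimal, self-contained sketch of walk generation (this is not GraphVite's implementation; the graph and parameters are made up for the example):

```python
import random

def random_walks(adj, walk_length=5, walks_per_node=2, seed=0):
    """Generate truncated random walks (DeepWalk-style) from every node.

    adj: dict mapping node -> list of neighbor nodes.
    Returns a list of walks; each walk is a list of nodes.
    """
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break  # dead end: stop this walk early
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Tiny triangle graph for illustration
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
walks = random_walks(adj)
```

In practice, generating and consuming billions of such walk samples is the bottleneck, which is what GraphVite's CPU-GPU hybrid design accelerates.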

Here is a summary of GraphVite's training time compared with the best open-source implementations for the 3 applications. All timings were measured on a server with 24 CPU threads and 4 V100 GPUs.

Node embedding on Youtube dataset.

| Model    | Existing Implementation | GraphVite | Speedup |
|----------|-------------------------|-----------|---------|
| DeepWalk | 1.64 hrs (CPU parallel) | 1.19 mins | 82.9x   |
| LINE     | 1.39 hrs (CPU parallel) | 1.17 mins | 71.4x   |
| node2vec | 24.4 hrs (CPU parallel) | 4.39 mins | 334x    |

Knowledge graph embedding on FB15k dataset.

| Model  | Existing Implementation | GraphVite | Speedup |
|--------|-------------------------|-----------|---------|
| TransE | 1.31 hrs (1 GPU)        | 14.8 mins | 5.30x   |
| RotatE | 3.69 hrs (1 GPU)        | 27.0 mins | 8.22x   |

High-dimensional data visualization on MNIST dataset.

| Model    | Existing Implementation | GraphVite | Speedup |
|----------|-------------------------|-----------|---------|
| LargeVis | 15.3 mins (CPU parallel)| 15.1 s    | 60.8x   |
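The speedup column is simply the ratio of wall-clock times. For example, for DeepWalk (small differences from the table come from rounding in the reported times):

```python
def speedup(baseline_minutes, graphvite_minutes):
    """Ratio of baseline training time to GraphVite training time."""
    return baseline_minutes / graphvite_minutes

# DeepWalk: 1.64 hrs on parallel CPU vs 1.19 mins on 4 GPUs
print(round(speedup(1.64 * 60, 1.19), 1))  # ~82.7 (table reports 82.9x from unrounded measurements)
```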

Requirements

Generally, GraphVite works on any Linux distribution with CUDA >= 9.2.

The library is compatible with Python 2.7 and 3.6/3.7.

Installation

From Conda

GraphVite can be installed through conda with a single command.

conda install -c milagraph graphvite cudatoolkit=x.x

where x.x is your CUDA version, e.g. 9.2 or 10.0.

If you only need embedding training without evaluation, you can use the following alternative with minimal dependencies.

conda install -c milagraph graphvite-mini cudatoolkit=x.x

From Source

Before installation, make sure you have conda installed.

git clone https://github.com/DeepGraphLearning/graphvite
cd graphvite
conda install -y --file conda/requirements.txt
mkdir build
cd build && cmake .. && make && cd -
cd python && python setup.py install && cd -

Quick Start

Here is a quick-start example of the node embedding application.

graphvite baseline quick start

Typically, the example takes no more than 1 minute. You will obtain some output like

Batch id: 6000
loss = 0.371641

macro-F1@20%: 0.236794
micro-F1@20%: 0.388110
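The macro-F1 and micro-F1 scores above come from node classification with 20% of the nodes labeled for training. As a reminder of what the two averaging schemes mean, here is a self-contained sketch (single-label case with made-up data; this is not GraphVite's evaluator):

```python
from collections import Counter

def f1_scores(y_true, y_pred, labels):
    """Compute macro- and micro-averaged F1 over a set of labels."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    # Macro: compute F1 per label, then average over labels
    per_label = []
    for l in labels:
        prec = tp[l] / (tp[l] + fp[l]) if tp[l] + fp[l] else 0.0
        rec = tp[l] / (tp[l] + fn[l]) if tp[l] + fn[l] else 0.0
        per_label.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    macro = sum(per_label) / len(labels)
    # Micro: pool counts over all labels, then compute a single F1
    TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
    prec = TP / (TP + FP) if TP + FP else 0.0
    rec = TP / (TP + FN) if TP + FN else 0.0
    micro = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return macro, micro
```

Macro-F1 weights every label equally, so it is more sensitive to performance on rare labels; micro-F1 weights every prediction equally.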

Baseline Benchmark

To reproduce a baseline benchmark, you only need to specify the keywords of the experiment, e.g. the model and the dataset.

graphvite baseline [keyword ...] [--no-eval] [--gpu n] [--cpu m]

You may also set the number of GPUs and the number of CPU threads per GPU with --gpu and --cpu.

Use graphvite list to get a list of available baselines.

High-dimensional Data Visualization

You can visualize your high-dimensional vectors with a simple command line in GraphVite.

graphvite visualize [file] [--label label_file] [--save save_file] [--perplexity n] [--3d]

The input file can be either a numpy dump or a text file. For the save file, we recommend the png format, though pdf is also supported.
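Both accepted input formats can be produced with NumPy (assuming NumPy is installed; the filenames and data below are placeholders for illustration):

```python
import numpy as np

# 1000 samples of 64-dimensional vectors (random data for illustration)
vectors = np.random.randn(1000, 64).astype(np.float32)

# numpy dump format
np.save("vectors.npy", vectors)

# plain text format: one whitespace-separated vector per line
np.savetxt("vectors.txt", vectors)
```

Either file could then be passed to the command above, e.g. `graphvite visualize vectors.npy --save vectors.png`.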

Contributing

We welcome all contributions, from bug fixes to new features. Please let us know if you have any suggestions for our library.

Development Team

GraphVite is developed by MilaGraph, led by Prof. Jian Tang.

Authors of this project are Zhaocheng Zhu, Shizhen Xu, Meng Qu and Jian Tang. Contributors include Kunpeng Wang and Zhijian Duan.

Citation

If you find GraphVite useful for your research or development, please cite the following paper.

@inproceedings{zhu2019graphvite,
    title={GraphVite: A High-Performance CPU-GPU Hybrid System for Node Embedding},
    author={Zhu, Zhaocheng and Xu, Shizhen and Qu, Meng and Tang, Jian},
    booktitle={The World Wide Web Conference},
    pages={2494--2504},
    year={2019},
    organization={ACM}
}

Acknowledgements

We would like to thank Compute Canada for providing the GPU servers. We especially thank Wenbin Hou for useful discussions on C++ and GPU programming techniques.
