graph-bench

JB Research License

Benchmark suite for a performance study of various graph analysis frameworks for CPU and GPU computations.

Tools description

Name | Brief | Platform | Technology | Source Page
Spla | Generalized sparse linear algebra for multi-GPU computation | GPU | OpenCL | link
GraphBLAST | High-performance linear algebra-based graph primitives on GPUs | GPU | CUDA | link
Gunrock | High-performance graph primitives on GPUs | GPU | CUDA | link
LaGraph | Collection of graph algorithms for the SuiteSparse:GraphBLAS library | CPU | OpenMP | link

Dataset description

Name | Vertices | Edges | Max Degree | Download
coAuthorsCiteseer | 227.3K | 1.6M | 1372 | link
coPapersDBLP | 540.4K | 30.4M | 3299 | link
hollywood-2009 | 1.1M | 113.8M | 11467 | link
roadNet-CA | 1.9M | 5.5M | 12 | link
com-Orkut | 3M | 234M | 33313 | link
cit-Patents | 3.7M | 16.5M | 793 | link
rgg_n_2_22_s0 | 4.1M | 60.7M | 36 | link
soc-LiveJournal | 4.8M | 68.9M | 20333 | link
indochina-2004 | 7.5M | 194.1M | 256425 | link
rgg_n_2_23_s0 | 8.3M | 127M | 40 | link

Manually running benchmarks

1. How to get source code

Download the benchmark repository source code.

git clone https://github.com/kirillgarbar/graph-bench.git
cd graph-bench

Within the repository folder, initialize the git submodules to get the source code of all tools. It might take a while.

git submodule update --init --recursive

You can also point the submodule at your own GraphBLAS-sharp repository and branch.

git config -f .gitmodules submodule.deps/graphblas-sharp.url https://github.com/<USERNAME>/GraphBLAS-sharp.git
git config -f .gitmodules submodule.deps/graphblas-sharp.branch <BRANCH_NAME>
git submodule sync
git submodule update --init --recursive --remote deps/graphblas-sharp/

2. How to build tools

2.1 Spla

Build the bundled Spla library.

python3 scripts/build_spla.py

2.2 Gunrock

Build the bundled Gunrock library.

python3 scripts/build_gunrock.py

2.3 GraphBLAST

Build the bundled GraphBLAST library.

python3 scripts/build_graphblast.py

2.4 LaGraph

Build the bundled SuiteSparse and LaGraph libraries.

python3 scripts/build_lagraph.py

2.5 GraphBLAS-sharp

Build the bundled GraphBLAS-sharp library.

cd deps/graphblas-sharp/ && \
    dotnet tool restore && \
    dotnet build -c Release && \
    cd ../../

3. How to download data

3.1 Manual copying

Download each graph archive individually and extract it into the dataset folder. Alternatively, download all graphs as a single archive from Google Drive.
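For example, a single graph can be fetched and unpacked like this (a minimal sketch assuming the archives are .tar.gz files from the SuiteSparse Matrix Collection and that the benchmarks look for graphs under a dataset folder in the repository root; the URL is illustrative, use the link entries in the table above for the actual downloads).

mkdir -p dataset
wget https://suitesparse-collection-website.herokuapp.com/MM/DIMACS10/coAuthorsCiteseer.tar.gz
tar -xzf coAuthorsCiteseer.tar.gz -C dataset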

3.2 Downloading with script

Graphs listed in the scripts/matrices.txt file can be downloaded automatically using the following script.

python3 scripts/download_matrices.py

4. How to run benchmarks

4.1 GraphBLAS-sharp

The dataset folder, with all nested directories, will be copied to the corresponding GraphBLAS-sharp directory, so make sure all graphs are placed in the correct folders. All configs are located in the scripts/configs folder.

config_graphblas-sharp.txt selects the device and work-group size. targets_graphblas-sharp.txt contains all benchmarks that will be performed. The targets directory contains files with the graph names used by each benchmark.
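Before launching, it can be useful to inspect these files to verify the device settings and the benchmark/graph lists (the paths follow the description above; the exact file formats are defined by the repository itself).

ls scripts/configs
cat scripts/configs/config_graphblas-sharp.txt
cat scripts/configs/targets_graphblas-sharp.txt
ls scripts/configs/targets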

Run all listed benchmarks.

python3 scripts/benchmark_graphblas-sharp.py

Results will be stored in the artifacts directory.

4.2 Other libraries

Run performance measurements for all algorithms, graphs, and tools.

python3 scripts/benchmark.py

Run performance measurements for a particular tool.

python3 scripts/benchmark.py --tool=[all, spla, lagraph, gunrock, graphblast]
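For example, to measure only Spla:

python3 scripts/benchmark.py --tool=spla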

Run performance measurements for a particular algorithm.

python3 scripts/benchmark.py --algo=[all, bfs, sssp, tc]
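For example, to run only BFS, or BFS for a single tool (combining --tool and --algo is assumed to be supported; check the help output to confirm):

python3 scripts/benchmark.py --algo=bfs
python3 scripts/benchmark.py --tool=gunrock --algo=bfs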

See help for more options.

python3 scripts/benchmark.py -h

Running benchmarks using workflows

This project supports automated benchmarking using GitHub Actions.

1. Getting source code

Since you'll need to commit config files and run benchmarks using your own Actions runner, a fork of this repository is required. Once you have your own repository, follow the instructions above to point the submodule at your own GraphBLAS-sharp repository and commit these changes. You'll also need to host your own Actions runner on a remote machine.

2. Uploading dataset

Once your runner is hosted, the dataset can be downloaded on the remote machine using the script or copied into the dataset folder manually.

3. Benchmarking

The benchmarking workflow can be started by pushing a commit with the configs, or restarted manually from the Actions menu. Results are uploaded to GitHub as an artifact in the action's summary.
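If you prefer the command line, the GitHub CLI can also trigger a run (a hypothetical sketch: the workflow file name below is an assumption, check .github/workflows in your fork for the actual name).

gh workflow run benchmark.yml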

License

This project is licensed under the MIT License. The license text can be found in the license file.

Acknowledgments

This is a research project of the Programming Languages and Tools Laboratory at JetBrains Research. Laboratory website: link.
