
Graph Neural Network inference accelerator for Ultrascale+ FPGAs designed for multi-precision inference on large graphs.

[AGILE accelerator image]

AMPLE: Accelerated Message Passing Logic Engine

An FPGA accelerator for Graph Neural Networks following the Message Passing Mechanism.
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. Overview
  2. Getting Started
  3. Usage
  4. Contributing

Overview

In recent years, Graph Neural Networks (GNNs) have attracted great attention due to their performance on non-Euclidean data. Custom hardware acceleration is particularly beneficial for GNNs, given the irregular memory access patterns that result from the sparse structure of graphs. Despite the relative success of hardware approaches to accelerating GNN inference on FPGA devices, previous works are limited to small graphs with up to 20k nodes, such as Cora, Citeseer and Pubmed. Since the computational overhead of GNN inference grows with graph size, existing accelerators are unable to process medium- to large-scale graphs.

AMPLE is an FPGA accelerator aimed at enabling GNN inference on large graphs through a range of hardware optimisations:

  • An event-driven programming flow, which reduces pipeline gaps caused by the non-uniform distribution of node degrees.
  • A multi-precision dataflow architecture, enabling quantized GNN inference in hardware at node granularity.
  • An efficient prefetcher unit to support the large-graph use case.

Evaluation on the set of Planetoid graphs, containing up to 19,717 nodes, shows up to 2.8x speed-up over GPU counterparts, and up to 6.6x over CPU.
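To make the ideas above concrete, here is an illustrative software sketch (not the hardware implementation, and not part of this repository) of one GCN-style message passing step with per-node quantization, mirroring the node-granularity multi-precision idea. All function and variable names are hypothetical; it uses pure Python with no external dependencies.

```python
def quantize(x, bits):
    """Uniformly quantize a value onto a signed fixed-point grid (illustrative)."""
    scale = (1 << (bits - 1)) - 1
    q = max(-scale, min(scale, round(x * scale)))
    return q / scale

def message_passing(features, edges, bits_per_node):
    """One aggregation step: each node averages its in-neighbours' scalar
    features, then quantizes the result at its own precision."""
    n = len(features)
    neighbours = {v: [] for v in range(n)}
    for src, dst in edges:
        neighbours[dst].append(src)
    out = []
    for v in range(n):
        nbrs = neighbours[v] or [v]  # isolated nodes keep their own feature
        agg = sum(features[u] for u in nbrs) / len(nbrs)
        out.append(quantize(agg, bits_per_node[v]))
    return out

# Toy graph: node 1 has a high in-degree, so its messages dominate the
# workload -- the kind of degree skew the event-driven flow targets.
feats = [0.5, -0.25, 0.8, 0.1]
edges = [(0, 1), (2, 1), (3, 1), (1, 0)]
bits = [8, 4, 8, 8]  # node 1 runs at lower precision than the rest
print(message_passing(feats, edges, bits))
```

The per-node `bits` list stands in for the accelerator's ability to assign a different arithmetic precision to each node's dataflow path.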

(back to top)

Getting Started

Follow these instructions to set up your work area. The following steps assume you have Vivado 2019.2 and ModelSim 2019.2 installed.

  1. Start by cloning the repository.

     ```sh
     git clone https://github.com/pgimenes/agile.git
     ```

  2. Set the `WORKAREA` environment variable.

     ```sh
     cd agile
     export WORKAREA=$(pwd)
     ```

  3. If you don't have conda installed yet, download the installation file for your platform from the Anaconda archive and execute it with all default settings. For example:

     ```sh
     wget https://repo.anaconda.com/archive/Anaconda3-2023.09-0-Linux-x86_64.sh
     chmod +x Anaconda3-2023.09-0-Linux-x86_64.sh
     ./Anaconda3-2023.09-0-Linux-x86_64.sh -b
     ```

  4. Create a conda environment from the provided yaml file and install the pip dependencies.

     ```sh
     conda env create -f environment.yml
     conda activate agile
     pip install -r $WORKAREA/requirements.txt
     ```

     Note: a common error is that conda does not update the path to use the environment's version of python and pip. Check this by running `which pip` and ensuring it points to a path within your Anaconda installation.

  5. Run the build script to update submodules, build the register banks and the Vivado build project. This will prompt you for the Airhdl password associated with the project; contact a project contributor for access.

     ```sh
     source $WORKAREA/scripts/build.sh
     ```

  6. Generate the simulation payloads. For example, for the KarateClub dataset:

     ```sh
     python3 $WORKAREA/scripts/initialize.py --karate --gcn --payloads --random
     ```

  7. Build the testbench.

     ```sh
     cd $WORKAREA/hw/sim
     make build
     ```

  8. Run the simulation.

     ```sh
     make sim GUI=1
     ```

(back to top)

Contributing

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)
