2 changes: 1 addition & 1 deletion gnn/cluster_gcn/tensorflow2/README.md
Cluster graph convolutional networks for node classification, using cluster sampling.

Run our Cluster GCN training on arXiv dataset on Paperspace.
<br>
[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3UYkV6d)

| Framework | domain | Model | Datasets | Tasks| Training| Inference | Reference |
|-------------|-|------|-------|-------|-------|---|---|
80 changes: 59 additions & 21 deletions gnn/tgn/pytorch/README.md
# Temporal Graph Networks

Temporal graph networks for link prediction in dynamic graphs, based on [`examples/tgn.py`](https://github.com/rusty1s/pytorch_geometric/blob/master/examples/tgn.py) from PyTorch-Geometric, optimised for Graphcore's IPU.

Run our TGN on Paperspace.
<br>
[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3uUI2nt)

| Framework | domain | Model | Datasets | Tasks| Training| Inference | Reference |
|-------------|-|------|-------|-------|-------|---|---|
| Pytorch | GNNs | TGN | JODIE | Link prediction | ✅ | ❌ | [Temporal Graph Networks for Deep Learning on Dynamic Graphs](https://arxiv.org/abs/2006.10637v3) |


## Instructions summary

1. Install and enable the Poplar SDK (see Poplar SDK setup)

2. Install the system and Python requirements (see Environment setup)


## Poplar SDK setup
To check if your Poplar SDK has already been enabled, run:
```bash
echo $POPLAR_SDK_ENABLED
```

If no path is printed, follow these steps:
1. Navigate to your Poplar SDK root directory

2. Enable the Poplar SDK with:
```bash
cd poplar-<OS version>-<SDK version>-<hash>
. enable.sh
```

3. Additionally, enable PopART with:
```bash
cd popart-<OS version>-<SDK version>-<hash>
. enable.sh
```

More detailed instructions on setting up your environment are available in the [Poplar quick start guide](https://docs.graphcore.ai/projects/graphcloud-poplar-quick-start/en/latest/).
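For scripting, the same `POPLAR_SDK_ENABLED` check can be done from Python — a minimal illustrative sketch, not part of the SDK itself:

```python
import os

# enable.sh sets $POPLAR_SDK_ENABLED to the enabled SDK path;
# an empty or unset value means the SDK has not been enabled in this shell.
sdk_path = os.environ.get("POPLAR_SDK_ENABLED", "")
if sdk_path:
    print(f"Poplar SDK enabled at: {sdk_path}")
else:
    print("Poplar SDK not enabled -- source enable.sh first")
```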


## Environment setup
To prepare your environment, follow these steps:

1. Create and activate a Python3 virtual environment:
```bash
python3 -m venv <venv name>
source <venv path>/bin/activate
```
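If you want to confirm the virtualenv is active before installing anything, one quick check (an illustrative snippet, not required by the example) is:

```python
import sys

# Inside an activated virtualenv, sys.prefix points at the venv
# while sys.base_prefix still points at the base installation.
in_venv = sys.prefix != sys.base_prefix
print("virtualenv active" if in_venv else "virtualenv NOT active")
```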

2. Navigate to the Poplar SDK root directory

3. Install the PopTorch (Pytorch) wheel:
```bash
cd <poplar sdk root dir>
pip3 install poptorch...x86_64.whl
```

4. Navigate to this example's root directory

5. Install the Python requirements:
```bash
pip3 install -r requirements.txt
```

The following flags can be used to adjust the behaviour of `train.py`:

- `--data`: directory to load/save the data (default: `data/JODIE`)
- `-t`, `--target`: device to run on (choices: {ipu, cpu}, default: ipu)
- `-d`, `--dtype`: floating-point format (default: float32)
- `-e`, `--epochs`: number of epochs to train for (default: 50)
- `--lr`: learning rate (default: 0.0001)
- `--dropout`: dropout rate in the attention module (default: 0.1)
- `--optimizer`: optimizer (choices: {SGD, Adam}, default: Adam)
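As a sketch, the flags above correspond to an argument parser along these lines (hypothetical definition mirroring the list; the real `train.py` may declare them differently):

```python
import argparse

# Illustrative parser matching the documented train.py flags.
parser = argparse.ArgumentParser(description="TGN training options (sketch)")
parser.add_argument("--data", default="data/JODIE",
                    help="directory to load/save the data")
parser.add_argument("-t", "--target", choices=["ipu", "cpu"], default="ipu",
                    help="device to run on")
parser.add_argument("-d", "--dtype", default="float32",
                    help="floating-point format")
parser.add_argument("-e", "--epochs", type=int, default=50,
                    help="number of epochs to train for")
parser.add_argument("--lr", type=float, default=0.0001, help="learning rate")
parser.add_argument("--dropout", type=float, default=0.1,
                    help="dropout rate in the attention module")
parser.add_argument("--optimizer", choices=["SGD", "Adam"], default="Adam",
                    help="optimizer")

# Parsing an empty argument list yields the documented defaults.
args = parser.parse_args([])
print(args.target, args.epochs, args.optimizer)
```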
## Running and benchmarking

To run a tested and optimised configuration and to reproduce the performance shown on our [performance results page](https://www.graphcore.ai/performance-results), use the `examples_utils` module (installed automatically as part of the environment setup) to run one or more benchmarks. The benchmarks are provided in the `benchmarks.yml` file in this example's root directory.

For example:
Expand All @@ -51,4 +89,4 @@ For more information on using the examples-utils benchmarking module, please ref
### License
This application is licensed under the MIT license, see the LICENSE file at the top-level of this repository.

This directory includes derived work from the PyTorch Geometric repository, https://github.com/pyg-team/pytorch_geometric by Matthias Fey and Jiaxuan You, published under the MIT license
4 changes: 2 additions & 2 deletions nlp/bert/pytorch/README.md
Bidirectional Encoder Representations from Transformers for NLP pre-training and fine-tuning.

Run our BERT-L Fine-tuning on SQuAD dataset on Paperspace.
<br>
[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3WiyZIC)

| Framework | domain | Model | Datasets | Tasks| Training| Inference | Reference |
|-------------|-|------|-------|-------|-------|---|---|
```bash
pip3 install poptorch...x86_64.whl
```

```bash
sudo apt install $(< required_apt_packages.txt)
```

6. Install the Python requirements:
```bash
pip3 install -r requirements.txt
```
4 changes: 4 additions & 0 deletions vision/vit/pytorch/README.md
# ViT (Vision Transformer)
Vision Transformer for image recognition, optimised for Graphcore's IPU. Based on the models provided by the [`transformers`](https://github.com/huggingface/transformers) library and from [jeonsworld](https://github.com/jeonsworld/ViT-pytorch).

Run our ViT on Paperspace.
<br>
[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3uTF5Uj)

| Framework | domain | Model | Datasets | Tasks| Training| Inference | Reference |
|-------------|-|------|-------|-------|-------|---|-------|
| Pytorch | Vision | ViT | ImageNet LSVRC 2012, CIFAR-10 | Image recognition | ✅ | ✅ | [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) |