diff --git a/gnn/cluster_gcn/tensorflow2/README.md b/gnn/cluster_gcn/tensorflow2/README.md
index 984adb908..b4950deb3 100644
--- a/gnn/cluster_gcn/tensorflow2/README.md
+++ b/gnn/cluster_gcn/tensorflow2/README.md
@@ -3,7 +3,7 @@ Cluster graph convolutional networks for node classification, using cluster samp
 Run our Cluster GCN training on arXiv dataset on Paperspace.
-[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://console.paperspace.com/github/gradient-ai/Graphcore-Tensorflow2?machine=Free-IPU-POD16&container=graphcore%2Ftensorflow-jupyter%3A2-amd-2.6.0-ubuntu-20.04-20220804&file=%2Fget-started%2Frun_cluster_gcn_notebook.ipynb)
+[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3UYkV6d)
 
 | Framework | domain | Model | Datasets | Tasks| Training| Inference | Reference |
 |-------------|-|------|-------|-------|-------|---|---|
diff --git a/gnn/tgn/pytorch/README.md b/gnn/tgn/pytorch/README.md
index 03d4da076..269f73edb 100644
--- a/gnn/tgn/pytorch/README.md
+++ b/gnn/tgn/pytorch/README.md
@@ -1,36 +1,74 @@
 # Temporal Graph Networks
-This directory contains a PyTorch implementation of [Temporal Graph Networks](https://arxiv.org/abs/2006.10637) to train on IPU.
-This implementation is based on [`examples/tgn.py`](https://github.com/rusty1s/pytorch_geometric/blob/master/examples/tgn.py) from PyTorch-Geometric.
+Temporal graph networks for link prediction in dynamic graphs, based on [`examples/tgn.py`](https://github.com/rusty1s/pytorch_geometric/blob/master/examples/tgn.py) from PyTorch-Geometric, optimised for Graphcore's IPU.
 
-## Running on IPU
+Run our TGN on Paperspace.
+
+[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3uUI2nt)
 
-### Setting up the environment
-Install the Poplar SDK following the [Getting Started](https://docs.graphcore.ai/en/latest/getting-started.html) guide for the IPU system.
-Source the `enable.sh` scripts for Poplar and PopART and activate a Python virtualenv with PopTorch installed.
+| Framework | domain | Model | Datasets | Tasks| Training| Inference | Reference |
+|-------------|-|------|-------|-------|-------|---|---|
+| Pytorch | GNNs | TGN | JODIE | Link prediction | ✅ | ❌ | [Temporal Graph Networks for Deep Learning on Dynamic Graphs](https://arxiv.org/abs/2006.10637v3) |
 
-Now install the dependencies of the TGN model:
+
+## Instructions summary
+
+1. Install and enable the Poplar SDK (see Poplar SDK setup)
+
+2. Install the system and Python requirements (see Environment setup)
+
+
+## Poplar SDK setup
+To check if your Poplar SDK has already been enabled, run:
+```bash
+echo $POPLAR_SDK_ENABLED
+```
+
+If no path is printed, then follow these steps:
+1. Navigate to your Poplar SDK root directory
+
+2. Enable the Poplar SDK with:
+```bash
+cd poplar-<OS version>-<SDK version>
+. enable.sh
+```
+
+3. Additionally, enable PopART with:
+```bash
+cd popart-<OS version>-<SDK version>
+. enable.sh
+```
+
+More detailed instructions on setting up your environment are available in the [Poplar quick start guide](https://docs.graphcore.ai/projects/graphcloud-poplar-quick-start/en/latest/).
+
+
+## Environment setup
+To prepare your environment, follow these steps:
+
+1. Create and activate a Python3 virtual environment:
 ```bash
-pip install -r requirements.txt
+python3 -m venv <venv name>
+source <venv path>/bin/activate
 ```
 
-### Train the model
-To train the model run
+2. Navigate to the Poplar SDK root directory
+
+3. Install the PopTorch (PyTorch) wheel:
+```bash
+cd <poplar sdk root dir>
+pip3 install poptorch...x86_64.whl
+```
+
+4. Navigate to this example's root directory
+
+5. Install the Python requirements:
 ```bash
-python train.py
+pip3 install -r requirements.txt
 ```
 
-The following flags can be used to adjust the behaviour of `train.py`
---data: directory to load/save the data (default: data/JODIE)
--t, --target: device to run on (choices: {ipu, cpu}, default: ipu)
--d, --dtype: floating point format (default: float32)
--e, --epochs: number of epochs to train for (default: 50)
---lr: learning rate (default: 0.0001)
---dropout: dropout rate in the attention module (default: 0.1)
---optimizer, Optimizer (choices: {SGD, Adam}, default: Adam)
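Since the flag reference above is being removed, a short usage line may be worth keeping in the new README. A hypothetical invocation built from the removed flag list (assuming `train.py` still accepts these argument names and value choices) would be:

```shell
# Hypothetical example: flag names and defaults taken from the removed list
# above; requires an IPU system with the Poplar SDK enabled.
python3 train.py --target ipu --dtype float16 --epochs 10 --lr 0.0001 --optimizer Adam
```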
+## Running and benchmarking
 
-### Running and benchmarking
 To run a tested and optimised configuration and to reproduce the performance shown on our [performance results page](https://www.graphcore.ai/performance-results), use the `examples_utils` module (installed automatically as part of the environment setup) to run one or more benchmarks. The benchmarks are provided in the `benchmarks.yml` file in this example's root directory.
 
 For example:
@@ -51,4 +89,4 @@ For more information on using the examples-utils benchmarking module, please ref
 ### License
 This application is licensed under the MIT license, see the LICENSE file at the top-level of this repository.
 
-This directory includes derived work from the PyTorch Geometric repository, https://github.com/pyg-team/pytorch_geometric by Matthias Fey and Jiaxuan You, published under the MIT license
+This directory includes derived work from the PyTorch Geometric repository, https://github.com/pyg-team/pytorch_geometric by Matthias Fey and Jiaxuan You, published under the MIT license
\ No newline at end of file
diff --git a/nlp/bert/pytorch/README.md b/nlp/bert/pytorch/README.md
index 603532120..d2e3d9243 100644
--- a/nlp/bert/pytorch/README.md
+++ b/nlp/bert/pytorch/README.md
@@ -3,7 +3,7 @@ Bidirectional Encoder Representations from Transformers for NLP pre-training and
 Run our BERT-L Fine-tuning on SQuAD dataset on Paperspace.
-[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://bash.paperspace.com/github/gradient-ai/Graphcore-PyTorch?machine=Free-IPU-POD16&container=graphcore%2Fpytorch-jupyter%3A2.6.0-ubuntu-20.04-20220804&file=%2Fget-started%2FFine-tuning-BERT.ipynb)
+[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3WiyZIC)
 
 | Framework | domain | Model | Datasets | Tasks| Training| Inference | Reference |
 |-------------|-|------|-------|-------|-------|---|---|
@@ -67,7 +67,7 @@ pip3 install poptorch...x86_64.whl
 sudo apt install $(< required_apt_packages.txt)
 ```
 
-5. Install the Python requirements:
+6. Install the Python requirements:
 ```bash
 pip3 install -r requirements.txt
 ```
diff --git a/vision/vit/pytorch/README.md b/vision/vit/pytorch/README.md
index efc60b950..3d7c3a169 100644
--- a/vision/vit/pytorch/README.md
+++ b/vision/vit/pytorch/README.md
@@ -1,6 +1,10 @@
 # ViT (Vision Transformer)
 Vision Transformer for image recognition, optimised for Graphcore's IPU. Based on the models provided by the [`transformers`](https://github.com/huggingface/transformers) library and from [jeonsworld](https://github.com/jeonsworld/ViT-pytorch)
+Run our ViT on Paperspace.
+
+[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://ipu.dev/3uTF5Uj)
+
 | Framework | domain | Model | Datasets | Tasks| Training| Inference | Reference |
 |-------------|-|------|-------|-------|-------|---|-------|
 | Pytorch | Vision | ViT | ImageNet LSVRC 2012, CIFAR-10 | Image recognition | ✅ | ✅ | [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) |
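The "Running and benchmarking" section of the TGN README points readers at the `examples_utils` module and `benchmarks.yml`. A sketch of the invocation (assuming the `benchmark` entry point provided by Graphcore's examples-utils package) is:

```shell
# Hypothetical sketch: run from the example's root directory, where
# benchmarks.yml lives, with examples-utils installed in the active venv.
python3 -m examples_utils benchmark --spec benchmarks.yml
```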