# Graphcore Feature Examples

The code examples demonstrate features that will enable you to make the most of the IPU. They are part of the Developer resources provided by Graphcore: https://www.graphcore.ai/developer.

Each of the examples contains its own README file with full instructions.

## Poplar

Exchange data between host and IPU efficiently:

Demonstrate advanced features of Poplar:

* Advanced Features: An example demonstrating several advanced features of Poplar, including saving and restoring Poplar executables, moving I/O into separate Poplar programs, and using our PopLibs framework.

## TensorFlow 2

Debugging and analysis:

Use estimators:

* IPUEstimator: An example showing how to use the IPUEstimator to train and evaluate a simple CNN.
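
As a rough illustration of the pattern this example follows, here is a minimal training sketch. The `ipu.ipu_run_config` and `ipu.ipu_estimator` module paths and the `ipu_options` argument are assumptions based on Graphcore's TensorFlow port and may differ between SDK versions; the tiny `model_fn`, `input_fn` and random data are placeholders standing in for the example's real CNN and dataset, so refer to the example's own README for the definitive code.

```python
import tensorflow as tf
from tensorflow.python import ipu


def model_fn(features, labels, mode):
    # Tiny CNN stand-in for the example's model.
    x = tf.keras.layers.Conv2D(8, 3, activation="relu")(features)
    x = tf.keras.layers.Flatten()(x)
    logits = tf.keras.layers.Dense(10)(x)
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
    optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.01)
    train_op = optimizer.minimize(loss, tf.compat.v1.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)


def input_fn():
    # Random stand-in data in place of a real image dataset.
    images = tf.random.uniform([16, 28, 28, 1])
    labels = tf.random.uniform([16], maxval=10, dtype=tf.int32)
    return tf.data.Dataset.from_tensors((images, labels)).repeat()


# IPU system configuration; argument names are assumptions, see the hedge above.
ipu_options = ipu.config.IPUConfig()
ipu_options.auto_select_ipus = 1

config = ipu.ipu_run_config.RunConfig(
    ipu_run_config=ipu.ipu_run_config.IPURunConfig(
        iterations_per_loop=100, ipu_options=ipu_options))

estimator = ipu.ipu_estimator.IPUEstimator(config=config, model_fn=model_fn)
estimator.train(input_fn=input_fn, steps=1000)
```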

Specific layers:

* Embeddings: An example of a model with an embedding layer and an LSTM, trained on the IPU to predict the sentiment of an IMDB review; a minimal sketch of this pattern appears after this list.

* Recomputation Checkpoints: An example demonstrating checkpointing of intermediate values to reduce peak live memory in a simple Keras LSTM model.
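
For the Embeddings item above, the general pattern for running a Keras model on the IPU is to configure the IPU system and then build, compile and fit the model inside an `IPUStrategy` scope. The sketch below is a minimal illustration of that pattern, not the example's actual model or hyperparameters; the layer sizes, `steps_per_execution` value and sequence length are illustrative choices.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.python import ipu

# Configure the IPU system and attach to a single IPU.
config = ipu.config.IPUConfig()
config.auto_select_ipus = 1
config.configure_ipu_system()

# Tokenised IMDB reviews, padded to a fixed length (static shapes suit the IPU).
(x_train, y_train), _ = keras.datasets.imdb.load_data(num_words=20000)
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=80)
dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
           .batch(32, drop_remainder=True)
           .repeat())

# Build, compile and train the model inside the IPUStrategy scope.
strategy = ipu.ipu_strategy.IPUStrategy()
with strategy.scope():
    model = keras.Sequential([
        keras.layers.Embedding(20000, 128),
        keras.layers.LSTM(64),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(loss="binary_crossentropy", optimizer="adam",
                  metrics=["accuracy"], steps_per_execution=16)
    model.fit(dataset, steps_per_epoch=768, epochs=2)
```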

Efficiently use multiple IPUs and handle large models:

* Distributed Training and Inference: An example showing how to prepare a TensorFlow 2 application for distributed training and inference using the PopDist API, and how to launch it with the PopRun distributed launcher.
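
The distributed examples are launched with `poprun`, which starts several host processes (instances) and sets environment variables that the `popdist` package reads at runtime. Below is a minimal sketch of that runtime side; the `popdist` function names are assumptions based on how the package is typically used in Graphcore examples, and the flag values in the launch command are illustrative only.

```python
# Launch from the shell, e.g.:
#   poprun --num-instances 2 --num-replicas 4 python3 train.py
# poprun sets environment variables that popdist reads at runtime.
import popdist

if popdist.isPopdistEnvSet():
    num_instances = popdist.getNumInstances()      # host processes launched by poprun
    num_replicas = popdist.getNumTotalReplicas()   # model replicas across all instances
    instance_index = popdist.getInstanceIndex()    # index of this process
else:
    # Fall back to a single, non-distributed process when run without poprun.
    num_instances, num_replicas, instance_index = 1, 1, 0

# Each instance typically shards the dataset by instance_index and scales
# hyperparameters (for example the learning rate) by num_replicas.
print(f"instance {instance_index}/{num_instances}, total replicas {num_replicas}")
```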

## PyTorch

Efficiently use multiple IPUs and handle large models:

* Distributed Training: An example showing how to prepare a PyTorch application for distributed training and inference using the PopDist library, and how to launch it with the PopRun distributed launcher.
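
A minimal sketch of the PopTorch side of that workflow, assuming the script is launched with poprun: `popdist.poptorch.Options()` is assumed to return a `poptorch.Options` object pre-configured from the poprun environment, and the tiny model, loss and random data below are placeholders rather than the example's actual model.

```python
import torch
import poptorch
import popdist.poptorch

# Run with poprun, e.g.:
#   poprun --num-instances 2 --num-replicas 4 python3 train.py


class TrainingModel(torch.nn.Module):
    """Tiny placeholder model that returns (output, loss) as PopTorch expects."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(16, 4), torch.nn.LogSoftmax(dim=1))
        self.loss = torch.nn.NLLLoss()

    def forward(self, x, labels):
        out = self.net(x)
        return out, self.loss(out, labels)


# Assumed to be a poptorch.Options pre-configured from the poprun environment
# (replication factor, instance index, and so on).
opts = popdist.poptorch.Options()

model = TrainingModel()
optimizer = poptorch.optim.SGD(model.parameters(), lr=0.01)
training_model = poptorch.trainingModel(model, opts, optimizer=optimizer)

# One training step on random placeholder data.
x = torch.randn(8, 16)
labels = torch.randint(0, 4, (8,))
out, loss = training_model(x, labels)
```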

Define custom operators:

* Using Custom Operators: An example showing how to create a PopART custom operator, make it available to PopTorch, and use it in a model.
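
A hedged sketch of how a PopART custom op is typically called from PopTorch: the shared-library path, the op name "LeakyRelu" and the domain "com.acme" are hypothetical placeholders that must match whatever the compiled PopART library actually registers, and the `poptorch.custom_op` argument order shown here is an assumption based on the PopTorch API rather than a copy of the example's code.

```python
import ctypes
import torch
import poptorch

# Load the compiled PopART custom-op library (hypothetical path).
ctypes.cdll.LoadLibrary("./build/custom_ops.so")


class ModelWithCustomOp(torch.nn.Module):
    def forward(self, x):
        # poptorch.custom_op maps to the op registered in the PopART library:
        # inputs, op name, domain, domain version, and example outputs used
        # only to infer output shapes/dtypes at compile time.
        out = poptorch.custom_op([x],
                                 "LeakyRelu",
                                 "com.acme",
                                 1,
                                 example_outputs=[x])
        return out[0]


model = poptorch.inferenceModel(ModelWithCustomOp())
print(model(torch.randn(4)))
```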

Specific layers:

* Octave Convolutions: An example showing how to use Octave Convolutions in PopTorch training and inference models.
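
As background on the technique itself (not the repository's implementation), an octave convolution keeps a "high frequency" set of channels at full resolution and a "low frequency" set at half resolution, mixing them through four convolution paths (Chen et al., 2019). A minimal plain-PyTorch sketch of that idea, which a PopTorch example would then wrap with `poptorch.trainingModel` or `poptorch.inferenceModel`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class OctaveConv2d(nn.Module):
    """Conceptual octave convolution: a fraction alpha of channels is kept at
    half resolution (low frequency), the rest at full resolution (high
    frequency), with four convolution paths mixing the two."""

    def __init__(self, in_ch, out_ch, kernel_size=3, alpha=0.5, padding=1):
        super().__init__()
        in_lo, out_lo = int(alpha * in_ch), int(alpha * out_ch)
        in_hi, out_hi = in_ch - in_lo, out_ch - out_lo
        self.conv_hh = nn.Conv2d(in_hi, out_hi, kernel_size, padding=padding)
        self.conv_hl = nn.Conv2d(in_hi, out_lo, kernel_size, padding=padding)
        self.conv_lh = nn.Conv2d(in_lo, out_hi, kernel_size, padding=padding)
        self.conv_ll = nn.Conv2d(in_lo, out_lo, kernel_size, padding=padding)

    def forward(self, x_hi, x_lo):
        # High-frequency output: high->high plus upsampled low->high.
        y_hi = self.conv_hh(x_hi) + F.interpolate(
            self.conv_lh(x_lo), scale_factor=2, mode="nearest")
        # Low-frequency output: downsampled high->low plus low->low.
        y_lo = self.conv_hl(F.avg_pool2d(x_hi, 2)) + self.conv_ll(x_lo)
        return y_hi, y_lo


# Usage: high-frequency maps at full resolution, low-frequency at half.
oct_conv = OctaveConv2d(16, 32, alpha=0.5)
y_hi, y_lo = oct_conv(torch.randn(1, 8, 32, 32), torch.randn(1, 8, 16, 16))
```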