Unit-Scaled Maximal Update Parametrization (u-μP)

A library for unit scaling in PyTorch, based on the paper u-μP: The Unit-Scaled Maximal Update Parametrization and the earlier paper Unit Scaling: Out-of-the-Box Low-Precision Training.

Documentation can be found at https://graphcore-research.github.io/unit-scaling and an example notebook at examples/demo.ipynb.

Note: The library is currently in its beta release. Some features have yet to be implemented and occasional bugs may be present. We're keen to help users with any problems they encounter.

Installation

To install the unit-scaling library, run:

pip install unit-scaling

or for a local editable install (i.e. one which uses the files in this repo), run:

pip install -e .

Development

For development in this repository, we recommend using the provided Docker container. The image can be built and entered interactively using:

docker build -t unit-scaling-dev:latest .
docker run -it --rm --user developer:developer -v $(pwd):/home/developer/unit-scaling unit-scaling-dev:latest
# To use git within the container, add `-v ~/.ssh:/home/developer/.ssh:ro -v ~/.gitconfig:/home/developer/.gitconfig:ro`.

For VS Code users, this repo also contains a .devcontainer.json file, which enables the container to be used as a full-featured IDE (see the Dev Containers docs for details on how to use this feature).

Key development functionality is contained within the ./dev script. This includes running unit tests, linting, formatting, documentation generation and more. Run ./dev --help for the available options. Running ./dev without arguments is equivalent to using the --ci option, which runs all of the available dev checks; this is also what runs in GitHub CI.

We encourage pull requests from the community. Please reach out to us with any questions about contributing.

What is u-μP?

u-μP inserts scaling factors into the model to make activations, gradients and weights unit-scaled (RMS ≈ 1) at initialisation, and into optimiser learning rates to keep updates stable as models are scaled in width and depth. This results in hyperparameter transfer from small to large models and easy support for low-precision training.
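
As a minimal sketch of the drop-in usage style (module and function names here follow the library's demo notebook and API reference; treat this as illustrative rather than authoritative):

import torch
import torch.nn as nn
import unit_scaling as uu
import unit_scaling.functional as U

# A feed-forward block built from unit-scaled drop-in replacements for
# torch.nn modules: scaling factors inside uu.Linear and U.gelu keep
# activations, gradients and weights at RMS ≈ 1 at initialisation.
class FFN(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.up = uu.Linear(d, 4 * d)
        self.down = uu.Linear(4 * d, d)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(U.gelu(self.up(x)))

x = torch.randn(32, 256)   # unit-scaled input
print(FFN(256)(x).std())   # ≈ 1 at initialisation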

For a quick intro, see examples/demo.ipynb; for more depth, see the paper and the library documentation.

What is unit scaling?

For a demonstration of the library and an overview of how it works, see Out-of-the-Box FP8 Training (a notebook showing how to unit-scale the nanoGPT model).

For a more in-depth explanation, consult our paper Unit Scaling: Out-of-the-Box Low-Precision Training.

And for a practical introduction to using the library, see our User Guide.
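
For a flavour of this in practice, the library can also unit-scale an existing model via a graph transform rather than by rewriting it module by module. A minimal sketch, assuming the unit_scale transform from unit_scaling.transforms as used in the nanoGPT notebook (see the documentation for supported models and caveats):

import torch
import torch.nn as nn
from unit_scaling.transforms import unit_scale

# Start from an ordinary, un-scaled PyTorch model...
model = nn.Sequential(nn.Linear(256, 1024), nn.GELU(), nn.Linear(1024, 256))

# ...and let the transform trace the graph and swap in unit-scaled ops.
model = unit_scale(model)

out = model(torch.randn(32, 256))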

License

Copyright (c) 2023 Graphcore Ltd. Licensed under the Apache 2.0 License.

See NOTICE.md for further details.