Releases: horovod/horovod

Process sets, XLA support, improved GPU backend

06 Oct 17:52
66ad6d5

Added

  • Added process sets to concurrently run collective operations on subsets of Horovod processes in TensorFlow, PyTorch, and MXNet (see the sketch after this list). (#2839, #3042, #3043, #3054, #3083, #3090)

  • Added XLA support for Allreduce via tf.function(jit_compile=True). (#3053)

  • Added fused buffer scaling and unpack/pack kernels on GPU. (#2973)

  • Added support for NCCL on CUDA 11.4. (#3182)

  • Added fp16 compression for MXNet. (#2987)

  • Added terminate_on_nan flag to Spark Lightning estimator. (#3088)

  • Added barrier() API to torch module to support simple synchronization among ranks and to achieve parity with PyTorch DDP and similar frameworks. (#3139)

  • Added params for customizing the TensorBoard callback. (#3153)

  • Added hvd.cross_rank() for keras. (#3008)
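
Below is a minimal sketch of how the new process set API and the XLA-compiled allreduce are typically used, assuming four ranks and the TensorFlow frontend; the variable names are placeholders and the exact signatures should be checked against the Horovod docs.

import tensorflow as tf
import horovod.tensorflow as hvd

# Register two disjoint process sets at init time (assumes exactly 4 ranks).
even_set = hvd.ProcessSet([0, 2])
odd_set = hvd.ProcessSet([1, 3])
hvd.init(process_sets=[even_set, odd_set])

# Collectives accept a process_set argument and only involve those ranks.
my_set = even_set if hvd.rank() % 2 == 0 else odd_set
partial_sum = hvd.allreduce(tf.constant([1.0]), op=hvd.Sum, process_set=my_set)

# XLA-compiled allreduce via tf.function(jit_compile=True) (#3053); this may
# additionally require enabling Horovod's XLA ops, see the XLA docs.
@tf.function(jit_compile=True)
def jit_allreduce(x):
    return hvd.allreduce(x)

averaged = jit_allreduce(tf.constant([float(hvd.rank())]))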

Changed

  • Implemented more asynchronous dependency handling on GPU. (#2963)

  • Ray: RayExecutor will now use the current placement group instead of always creating a new one. (#3134)

  • Lightning: turned off shuffling for validation dataset. (#2974)

  • Extended hvd.join() to return the last rank that joined. (#3097)

Removed

  • Spark/Keras: removed bare Keras support. (#3191)

Fixed

  • Fixed Horovod develop/editable install mode and incremental builds. (#3074)

  • Estimator/Lightning: use the Lightning DataModule. (#3084)

  • Fixed the Horovod Spark StringType and NumPy type mapping issue. (#3146)

  • Fixed bug in Lightning Profiler on Ray. (#3122)

  • Fixed torch op lazy release to prevent OOM in elastic training. (#3110)

  • Lightning: Fixed usage of the checkpoint callback. (#3186)

  • Fixed MPICH support to use Intel MPI's implementation. (#3148)

  • Fixed race condition in PyTorch async dataloader. (#3120)

  • Keras: Fixed learning rate scheduler. (#3142, #3135)

Remote filesystem support, estimator fixes

10 Jun 15:57
93a2f25

Added

  • Estimator: added support for loading data from S3, GCS, ADLS, and other remote filesystems (see the sketch after this list). (#2927)

  • Estimator: added custom Spark data loader interface. (#2938)

  • LightningEstimator: added support for supplying a logger and an associated parameter to control the logging frequency. (#2926)

  • Estimator: added check to ensure all ranks have the same device type. (#2942)
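
A brief sketch of pointing an estimator at a remote filesystem through the Store abstraction; the bucket URL, model, and column names below are placeholders.

from horovod.spark.common.store import Store
from horovod.spark.keras import KerasEstimator

# Store.create picks a backend (S3, GCS, ADLS, HDFS, local, ...) from the URL scheme.
store = Store.create('s3://my-bucket/horovod-runs')  # placeholder bucket

keras_estimator = KerasEstimator(
    num_proc=4,
    store=store,              # intermediate data and checkpoints go to the remote store
    model=model,              # placeholder: a compiled tf.keras model
    loss='mse',
    feature_cols=['features'],
    label_cols=['label'])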

Changed

  • Changed behavior so that TensorBoardLogger is now used only as a fallback when no logger is supplied. (#2926)

  • Ray: disabled capturing child tasks in placement group. (#2920)

Fixed

  • Fixed hvd.tensorflow.keras.Compression, accidentally removed in v0.22.0. (#2945)

  • TorchEstimator: fixed usage of validation_steps in place of validation_steps_per_epoch. (#2918)

  • TensorFlow: fixed C++ API for TF v2.6.0. (#2932)

  • PyTorch: fixed sparse_allreduce_async for PyTorch v1.10.0. (#2965)

PyTorch Lightning Estimator, Nsight profiling, PyTorch 1.9 support

19 May 15:17
3ff9480

Added

  • Added a pytorch_lightning Spark estimator which enables training pytorch_lightning models. (#2713)

  • Added NVTX tracing hooks for profiling with Nsight Systems. (#2723)

  • Added a generic num_workers API for RayExecutor. (#2870)

  • Added support for Ray Client without code changes. (#2882)

  • Added an in-memory cache option for the Keras Estimator. (#2896)

  • Added FP16 support for GPU tensors in MXNet. (#2915)

  • Added response caching for allgather operations. (#2872)

  • Estimator: added petastorm reader_pool_type to the constructor. (#2903)

Changed

  • Changed alltoall to return the received splits as a second return value if non-uniform splits are sent (see the example after this list). (#2631)

  • Changed RayExecutor to use Ray Placement Groups for worker colocation. (#2824)

  • Changed the in-memory dataloader usage for the Torch Estimator to work with the petastorm v0.11.0 release. (#2896)
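
For the alltoall change noted above, a small PyTorch illustration of the new return signature when splits are passed; the split sizes are arbitrary placeholders.

import torch
import horovod.torch as hvd

hvd.init()

# Send a different number of elements to each worker; splits has one entry per rank.
splits = torch.tensor([i + 1 for i in range(hvd.size())], dtype=torch.int32)
tensor = torch.arange(float(int(splits.sum())))

# With splits provided, alltoall now also returns the splits it received.
received, received_splits = hvd.alltoall(tensor, splits=splits)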

Fixed

  • Changed RayExecutor to use Ray node ID to enable multi-container:single-host setups. (#2883)

  • Support sparse gradients aggregation in TF1 Keras. (#2879)

  • Respect global_step parameter for LegacyOptimizers when aggregating gradients. (#2879)

  • Fixed compatibility with PyTorch 1.9.0. (#2829)

Local Gradient Aggregation, Grouped Allreduce

23 Nov 23:57
7d71874

Detailed Changes

Added

  • Added support for backward_passes_per_step > 1 for TF Keras graph mode. (#2346)

  • Added support for backward_passes_per_step > 1 for TF Keras eager execution. (#2371)

  • Added support for backward_passes_per_step > 1 for TF LegacyOptimizer in graph mode. (#2401)

  • Added grouped allreduce to enable more efficient tensor fusion and deterministic training (see the sketch after this list). (#2453)

  • Added support for specifying op and compression in horovod.tensorflow.keras.allreduce(). (#2423)

  • Added support for batched D2D memcopy kernel on GPU. (#2435)

  • Added schema inference in Spark Estimator without sampling. (#2373)

  • Added Store.create("dbfs:/") mapping to DBFSLocalStore("/dbfs/..."). (#2376)
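
A short sketch of the two headline features, local gradient aggregation via backward_passes_per_step and grouped allreduce, using the TensorFlow frontends; the tensor shapes and step counts are placeholders.

import tensorflow as tf
import horovod.tensorflow as hvd
import horovod.tensorflow.keras as hvd_keras

hvd.init()

# Local gradient aggregation: accumulate 4 micro-batches locally before each allreduce.
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd_keras.DistributedOptimizer(opt, backward_passes_per_step=4)

# Grouped allreduce: reduce a list of tensors as a single, deterministically fused group.
tensors = [tf.random.uniform([8]), tf.random.uniform([16])]
reduced = hvd.grouped_allreduce(tensors)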

Changed

  • Changed Keras callbacks to require parameter initial_lr of LearningRateScheduleCallback and LearningRateWarmupCallback. (#2459)

  • Changed default cycle time from 5ms to 1ms and fusion threshold from 64MB to 128MB. (#2468)

Fixed

  • Fixed support for TensorFlow v2.4.0. (#2381)

  • Fixed averaging with the CUDA half2 implementation for one-element half buffers. (#2375)

  • Fixed HOROVOD_THREAD_AFFINITY when using oneCCL. (#2350)

  • Added timeout to SSH check in horovodrun to prevent hanging. (#2448)

  • Added HOROVOD_GLOO_TIMEOUT_SECONDS value to error messages. (#2436)

  • Fixed race condition in dynamic timeline API. (#2341)

  • Fixed --log-hide-timestamp to apply to driver logs with Gloo. (#2388)

Elastic Horovod on Ray

01 Oct 14:56
b3c4d81

Detailed Changes

Added

  • Added Elastic Ray integration. (#2291)

Changed

  • Removed dependency on SSH access for Ray. (#2275)

Hotfix: build without MXNet installed

26 Sep 02:58
cef4393

Detailed Changes

Fixed

  • Fixed building Horovod when MXNet is not installed and HOROVOD_WITHOUT_MXNET is not set. (#2334)

Bugfixes, Databricks Runtime support for Estimators, ElasticSampler

25 Sep 19:38
4099c2b

Detailed Changes

Added

  • Added Databricks storage DBFSLocalStore and support for GPU-aware scheduling to horovod.spark Estimator. (#2234)

  • Added ElasticSampler and PyTorch Elastic ImageNet example. (#2297)

  • Added ability to dynamically start and stop timeline programmatically. (#2215)

  • Added support for Gloo on macOS. (#2254)

  • Exposed name argument to TensorFlow allreduce operation. (#2325)

  • Added option to strip outer name scope from Horovod ops in TensorFlow. (#2328)

Fixed

  • Fixed usage of VERBOSE=1 when setting custom MAKEFLAGS. (#2239)

  • Fixed bugs in Keras Elastic Callback classes. (#2289)

  • Fixed RelWithDebInfo build and made it the default with -O3 optimizations. (#2305)

  • Fixed usage of tf.cond in TensorFlow alltoall gradient. (#2327)

  • Fixed allreduce averaging for TF IndexedSlices in ROCm path. (#2279)

  • Included stdexcept to handle certain compilers/frameworks that don't already include it. (#2238)

  • Fixed Debug builds by setting compiler options based on CMake build type. (#2263)

  • Skipped launching zero-sized send/recvs for NCCLAlltoall. (#2273)

  • Fixed missing run in TF Keras elastic mode. (#2272)

  • Fixed loss function in TensorFlow2 elastic synthetic benchmark. (#2265)

  • Fixed usage of HOROVOD_MIXED_INSTALL env var in alltoall tests. (#2266)

  • Removed keras requirement from Ray example. (#2262)

Elastic Horovod, Ray integration, All-to-All, Gradient Predivide, CMake build system

04 Sep 00:34
396c131

Elastic Horovod API + Spark Auto-Scaling (#1849, #1956)

Elastic training enables Horovod to scale up and down the number of workers dynamically at runtime, without requiring a restart or resuming from checkpoints saved to durable storage. With elastic training, workers can come and go from the Horovod job without interrupting the training process.

Support for auto-scaling can be added to any existing Horovod script with just a few modifications:

  1. Decorate retryable functions with @hvd.elastic.run.
  2. Track state that needs to be kept in sync across workers in a hvd.elastic.State object.
  3. Perform all Horovod collective operations (allreduce, allgather, broadcast, etc.) inside the retryable functions.

Here's an example for PyTorch:

import torch
import torch.nn.functional as F
import torch.optim as optim
import horovod.torch as hvd

hvd.init()
torch.cuda.set_device(hvd.local_rank())

# Placeholders: supply your own model, dataset, base learning rate, and args.epochs.
model = ...
dataset = ...
lr = 0.01

@hvd.elastic.run
def train(state):
    for state.epoch in range(state.epoch, args.epochs + 1):
        dataset.set_epoch(state.epoch)
        dataset.set_batch_idx(state.batch_idx)
        for state.batch_idx, (data, target) in enumerate(dataset):
            state.optimizer.zero_grad()
            output = state.model(data)
            loss = F.nll_loss(output, target)
            loss.backward()
            state.optimizer.step()
            state.commit()

optimizer = optim.SGD(model.parameters(), lr * hvd.size())
optimizer = hvd.DistributedOptimizer(optimizer)

def on_state_reset():
    # adjust learning rate on reset
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr * hvd.size()

state = hvd.elastic.TorchState(model, optimizer, epoch=1, batch_idx=0)
state.register_reset_callbacks([on_state_reset])
train(state)

Run using horovodrun by specifying the minimum and maximum number of worker processes, as well as a "host discovery script" that will be used to find available workers to add at runtime:

$ horovodrun -np 8 --min-np 4 --max-np 12 --host-discovery-script discover_hosts.sh python train.py

Elastic Horovod is supported natively with Spark auto-scaling using the hvd.spark.run_elastic API.

For more details, see Elastic Horovod.

Horovod on Ray (#2218)

Ray is a distributed execution framework that makes it easy to provision and scale distributed applications, and can now be used to execute Horovod jobs without needing to coordinate the workers by hand:

import ray
import horovod.torch as hvd
from horovod.ray import RayExecutor

# Start the Ray cluster or attach to an existing Ray cluster
ray.init()

# Start num_hosts * num_slots actors on the cluster (num_hosts and num_slots
# are placeholders). RayExecutor.create_settings builds the settings object.
setting = RayExecutor.create_settings(timeout_s=30)
executor = RayExecutor(
    setting, num_hosts=num_hosts, num_slots=num_slots, use_gpu=True)

# Launch the Ray actors on each machine
# This will launch `num_slots` actors on each machine
executor.start()

# Using the stateless `run` method, a function can take in any args or kwargs
def train_fn():
    hvd.init()
    # Train the model on each worker here
    ...

# Execute the function on all workers at once
results = executor.run(train_fn)

executor.shutdown()

Horovod now also integrates with Ray Tune to scale up your hyperparameter search jobs. Check out the example here.

For more details, see Horovod on Ray.

All-to-All Operation (#2143)

The all-to-all collective can be described as a combination of a scatter and gather, where each worker will scatter a tensor to each worker, while also gathering scattered data from other workers. This type of collective communication can arise in model-parallel training strategies.

The hvd.alltoall function takes the form hvd.alltoall(tensor, splits=None),
where tensor is a multi-dimensional tensor of data to be scattered and splits is an optional 1D tensor of integers with length equal to the number of workers, describing how to split and distribute tensor. splits is applied along the first dimension of tensor. If splits is not provided, an equal split is assumed, where the first dimension is divided evenly by the number of workers.

The implementation supports TensorFlow, PyTorch, and MXNet using the MPI backend, the CUDA-aware MPI backend via HOROVOD_GPU_ALLTOALL=MPI, and the NCCL backend via HOROVOD_GPU_ALLTOALL=NCCL / HOROVOD_GPU_OPERATIONS=NCCL.
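
A minimal TensorFlow sketch of the call form described above; the splits semantics are the same for the PyTorch and MXNet frontends.

import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()

# Each worker contributes hvd.size() rows filled with its own rank. With splits
# omitted, the first dimension is divided evenly, so after the exchange every
# worker holds one row from each peer (rows valued 0, 1, ..., size-1).
tensor = tf.fill([hvd.size(), 4], float(hvd.rank()))
gathered = hvd.alltoall(tensor)

# A splits tensor such as [1, 2, ...] would instead send 1 row to rank 0,
# 2 rows to rank 1, and so on.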

Gradient Predivide Factor (#1949)

We've added a gradient_predivide_factor parameter in the DistributedOptimizer, the purpose of which is to enable splitting the averaging before and after the allreduce. This can be useful in managing the numerical range for mixed precision computations.

The gradient_predivide_factor is applied as follows:

        If op == Average, gradient_predivide_factor splits the averaging
        before and after the sum. Gradients are scaled by
        1.0 / gradient_predivide_factor before the sum and
        gradient_predivide_factor / size after the sum. 

To facilitate this, additional arguments (prescale_factor and postscale_factor) have been added to the basic hvd.allreduce functions, enabling the definition of multiplicative factors to scale the tensors before and after the allreduce, respectively. For efficiency, the pre- and post-scaling is implemented in the Horovod backend on the fused tensor buffer rather than through framework-level operations. For GPU, this required a CUDA kernel implementation to scale the GPU buffer, which in turn required adding compilation of CUDA code to the existing build infrastructure.

As an additional general benefit of these changes, gradient averaging in the optimizer can now be carried out within the Horovod backend on the fused tensor buffer using the postscale_factor argument, rather than on a tensor-by-tensor basis at the framework level, decreasing the overhead of each allreduce call.
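
A short PyTorch sketch of these knobs; the factor values and model are arbitrary placeholders.

import torch
import horovod.torch as hvd

hvd.init()

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Split the averaging around the sum: gradients are scaled by 1/2 before the
# allreduce and by 2/size afterwards.
opt = hvd.DistributedOptimizer(
    opt, named_parameters=model.named_parameters(),
    gradient_predivide_factor=2.0)

# The same machinery is exposed directly on allreduce:
summed = hvd.allreduce(torch.ones(4), op=hvd.Sum,
                       prescale_factor=0.5, postscale_factor=2.0)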

CMake Build System (#2009)

CMake, previously used to compile the optional Gloo controller, is now required to install Horovod. This change introduces a number of exciting benefits for Horovod developers and users:

  • Much faster installation times through a parallel task build
  • Incremental builds (almost instantaneous build when developing and making small changes at a time)
  • Separation of the build configuration phase from the build phase (less overhead for repeated builds)
  • Reuse of the find_package modules provided by CMake for MPI, CUDA, etc. to better handle a range of environment configurations
  • Libraries can be built outside of the Python build process (no longer requiring setup.py)
  • Flexibility for the build system (make, ninja, IDEs, etc.)

Detailed Changes

Added

  • Added bare-metal elastic mode implementation to enable auto-scaling and fault tolerance. (#1849)

  • Added Elastic Horovod support for Spark auto-scaling. (#1956)

  • Added All-to-All operation for TensorFlow, PyTorch, and MXNet. (#2143)

  • Added support for gradient_predivide_factor and averaging in Horovod backend. (#1949)

  • Added NCCL implementation of the allgather operation. (#1952)

  • Added HOROVOD_GPU_OPERATIONS installation variable to simplify enabling NCCL support for all GPU operations. (#1960)

  • Added TensorFlow implementation of SyncBatchNormalization layer. (#2075)

  • Added hvd.is_initialized() method. (#2020)

  • Added hvd.allgather_object function for TensorFlow, PyTorch, and MXNet (see the sketch after this list). (#2166)

  • Added hvd.broadcast_object function for MXNet. (#2122)

  • Added label_shapes parameter to KerasEstimator and TorchEstimator. (#2140)

  • Added optional modelCheckPoint callback to KerasEstimator params. (#2124)

  • Added ssh_identity_file argument to horovodrun. (#2201)

  • Added support for horovodrun on kubeflow/mpi-job. (#2199)

  • Added Ray integration. (#2218)
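
For the new object collectives above, a tiny sketch with the PyTorch frontend; any picklable Python object works.

import horovod.torch as hvd

hvd.init()

# Gather an arbitrary picklable object from every rank; returns a list ordered by rank.
stats = {'rank': hvd.rank(), 'samples_seen': 1000 + hvd.rank()}
all_stats = hvd.allgather_object(stats)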

Changed

  • Moved horovod.run.runner.run to horovod.run. (#2099)

  • HOROVOD_THREAD_AFFINITY accepts multiple values, one for every Horovod rank. (#2131)

  • Migrated the build system for native libraries to CMake. (#2009)

Deprecated

  • HOROVOD_CCL_BGT_AFFINITY is deprecated. Use HOROVOD_THREAD_AFFINITY instead. (#2131)

Removed

  • Dropped support for Python 2. (#1954)

  • Dropped support for TensorFlow < 1.15. (#2169)

  • Dropped support for PyTorch < 1.2. (#2086)

Read more

Hotfix for adding PYTHONPATH to mpirun env

24 Jun 16:44

Fixed

  • Added PYTHONPATH to mpirun env. (#2038)

Hotfix for sync batch norm in PyTorch 1.5, mixed precision in TensorFlow 2.2

28 May 22:28

Fixed

  • Fixed Sync Batch Norm when using PyTorch 1.5. (#1980)
  • Fixed compatibility with mixed precision Keras policy in TensorFlow 2.2. (#1992)