
Feature Request: add an additional argument to auto_optim to allow for gradient accumulation #2168

@sandylaker

Description


When we use the horovod backend and perform gradient accumulation, we get the following error:
AssertionError: Gradients were computed more than backward_passes_per_step times before call to step(). Increase backward_passes_per_step to accumulate gradients locally.

Thus, we need a way to override the default value of the backward_passes_per_step argument of horovod.DistributedOptimizer to enable gradient accumulation in the distributed setting. To do so, we can add this argument to ignite.distributed.auto_optim, as sketched below.
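
A minimal sketch of the proposed usage, assuming auto_optim gains a backward_passes_per_step argument that is forwarded to horovod.DistributedOptimizer when the horovod backend is active. The argument name mirrors horovod's; the model, data, and hyperparameters are purely illustrative, and the argument does not exist in auto_optim today (it is what this issue requests):

```python
import torch
import torch.nn as nn
import ignite.distributed as idist

# Synthetic stand-in data so the sketch is self-contained.
data_loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(16)]
accumulation_steps = 4

model = idist.auto_model(nn.Linear(10, 2))
criterion = nn.CrossEntropyLoss()

# Proposed: auto_optim forwards backward_passes_per_step to
# horovod.DistributedOptimizer when the horovod backend is active.
optimizer = idist.auto_optim(
    torch.optim.SGD(model.parameters(), lr=0.1),
    backward_passes_per_step=accumulation_steps,
)

for i, (x, y) in enumerate(data_loader):
    loss = criterion(model(x), y) / accumulation_steps
    loss.backward()
    # step() is called only every accumulation_steps batches; horovod
    # raises the AssertionError quoted above unless
    # backward_passes_per_step matches the number of backward() calls
    # accumulated between consecutive step() calls.
    if (i + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

With the default backward_passes_per_step=1, the second backward() call in this loop already triggers the assertion, which is why the value has to be configurable from auto_optim rather than hard-coded.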
