Synchronize gradients in manual optimization with DDPStrategy(static_graph=True) (#21251)
* fix: synchronize gradients in manual optimization with `DDPStrategy(static_graph=True)`. Ensures gradients are reduced correctly across ranks when manual optimization is combined with DDP and `static_graph` is enabled (see the sketch below).
* Add a regression test covering all combinations of manual/automatic optimization and `static_graph`.
* Initialize the `_pl_static_graph_delay_done` attribute properly.
* Update CHANGELOG.
---------
Co-authored-by: Nicki Skafte Detlefsen <skaftenicki@gmail.com>
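For context, here is a minimal sketch of the configuration this commit fixes: manual optimization together with `DDPStrategy(static_graph=True)`. The module and data are hypothetical placeholders, not code from the PR; only the strategy configuration and the manual-optimization calls reflect the affected setup.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

import lightning.pytorch as pl
from lightning.pytorch.strategies import DDPStrategy


class ManualOptModel(pl.LightningModule):
    """Hypothetical module using manual optimization."""

    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # manual optimization
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        opt.zero_grad()
        loss = self.layer(batch[0]).sum()
        # Before the fix, gradients produced here were not reduced
        # across ranks when static_graph=True was set on the strategy.
        self.manual_backward(loss)
        opt.step()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


if __name__ == "__main__":
    data = DataLoader(TensorDataset(torch.randn(64, 32)), batch_size=8)
    trainer = pl.Trainer(
        accelerator="cpu",
        devices=2,
        strategy=DDPStrategy(static_graph=True),  # the affected configuration
        max_epochs=1,
        logger=False,
        enable_checkpointing=False,
    )
    trainer.fit(ManualOptModel(), data)
```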
src/lightning/pytorch/CHANGELOG.md (4 additions, 0 deletions)
@@ -61,6 +61,10 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

 - Fixed how `ThroughputMonitor` calculated training time ([#21291](https://github.com/Lightning-AI/pytorch-lightning/pull/21291))


+
+- Fixed synchronization of gradients in manual optimization with `DDPStrategy(static_graph=True)` ([#21251](https://github.com/Lightning-AI/pytorch-lightning/pull/21251))