Releases: mapillary/inplace_abn

Feature parity with Pytorch's BatchNormNd, code clean-up

03 Sep 14:23

This release brings ABN, InPlaceABN and InPlaceABNSync to feature parity with recent versions of Pytorch's BatchNormNd layers:

  • Add a track_running_stats parameter to enable/disable computation of running statistics independently of the layer's training state
  • Add a num_batches_tracked buffer, and allow passing momentum=None to track running statistics with a cumulative moving average instead of an exponential moving average (see the sketch after this list)
  • As a side effect, parameters from standard BatchNorm layers can now be loaded without work-arounds. Note, however, that if the loaded parameters contain negative weight elements, the output will differ from standard BatchNorm's
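
A minimal sketch of the new options (assuming an installation from PyPI; other constructor arguments keep their usual defaults):

```python
import torch
from inplace_abn import InPlaceABN

# Always normalize with per-batch statistics and never update running stats,
# mirroring BatchNorm2d(..., track_running_stats=False).
abn_batch_stats = InPlaceABN(64, track_running_stats=False)

# With momentum=None, running stats are tracked with a cumulative moving
# average over all batches seen so far (counted by num_batches_tracked)
# instead of an exponential moving average.
abn_cma = InPlaceABN(64, momentum=None)

y = abn_cma(torch.randn(8, 64, 32, 32))
```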

Additional changes:

  • Fix the backward pass in eval mode, which was not properly accounting for the activation function
  • Refactor code to follow more sensible formatting standards
  • Add type annotations
  • Improve docstrings
  • Update installation instructions, pointing to the PyPI package

Pytorch 1.5.0+ compatibility

22 Apr 13:53
v1.0.12

Changes for Pytorch 1.5.0

Pytorch 1.4.0+ compatibility

27 Jan 14:05
v1.0.11

Compatibility with Pytorch 1.4, fix AT_CHECK warnings

Optimized backward pass using less temporary memory

08 Jan 11:19

This release contains an improved implementation of the backward-pass fix introduced in v1.0.9, which uses less temporary memory at no additional computational cost.

Avoid overwriting input tensors during backward pass

07 Jan 14:39

In previous versions, both the input/output tensor y and the gradient tensor dy were overwritten during the backward pass. This caused issues with some network topologies, producing wrong gradients.

To fix this issue, a pair of temporary tensors is now created during the backward pass to hold the results of intermediate computations. This change increases the amount of temporary memory required, meaning that in cases where GPU memory utilization was already very close to the limit, OOM errors might now occur. An alternative, more complex fix is also possible at the expense of additional computational cost. We are evaluating the impact of these changes and will provide updates in a future release.
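
As a purely illustrative sketch (not taken from the library's code), this is the kind of topology that could trigger the bug: the ABN output is consumed by two branches, so its gradient is accumulated from two backward paths, and overwriting y or dy while processing one path can corrupt the contribution of the other.

```python
import torch
from inplace_abn import InPlaceABN

abn = InPlaceABN(16)
branch_a = torch.nn.Conv2d(16, 16, 3, padding=1)
branch_b = torch.nn.Conv2d(16, 16, 3, padding=1)

x = torch.randn(2, 16, 8, 8, requires_grad=True)

# y feeds two branches; during backward, gradients from both are summed,
# so modifying y or dy in place while one branch is processed is unsafe.
y = abn(x)
loss = (branch_a(y) + branch_b(y)).sum()
loss.backward()
```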

Pytorch 1.3.0+ compatibility

22 Nov 14:52
v1.0.8

Update README to reference the latest release

Bugfix: compatibility with CUDA 10.0

04 Sep 16:45

This release fixes a compatibility issue with CUDA 10.0 that caused compilation errors in some cases.

Bugfix: compilation on systems without a CUDA enabled device

23 Aug 08:30

At compile time, when determining whether to enable CUDA support, we now base the decision on the installed Pytorch build (see the sketch after this list):

  • If a CUDA-enabled Pytorch is detected, we attempt to compile CUDA support
  • If a CPU-only Pytorch is detected, we disable CUDA support
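
A simplified, hypothetical sketch of this kind of detection logic in a setup.py (the extension name and source file names are illustrative, not the project's actual layout):

```python
import torch
from torch.utils.cpp_extension import CppExtension, CUDAExtension

def make_extension():
    # torch.version.cuda is None for CPU-only Pytorch builds
    if torch.version.cuda is not None:
        return CUDAExtension(
            name="inplace_abn._backend",
            sources=["src/inplace_abn.cpp", "src/inplace_abn_cuda.cu"],
            define_macros=[("WITH_CUDA", None)],
        )
    return CppExtension(
        name="inplace_abn._backend",
        sources=["src/inplace_abn.cpp"],
    )
```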

CPU-only support

20 Aug 14:54

InPlace-ABN can now be compiled and used without CUDA. Note that Synchronized InPlace-ABN (InPlaceABNSync) is still only supported with a CUDA-enabled Pytorch build.
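
For instance, a forward pass on a CPU-only build now works (a minimal sketch):

```python
import torch
from inplace_abn import InPlaceABN

# Runs without CUDA; InPlaceABNSync, by contrast, still requires a
# CUDA-enabled Pytorch build.
abn = InPlaceABN(32)
y = abn(torch.randn(4, 32, 16, 16))
```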

Compatibility with BN state dicts from Pytorch v1.0.0 and later

14 Aug 08:29

State dicts from standard BatchNorm layers trained with Pytorch v1.0.0 or newer can now be properly loaded by ABN, InPlaceABN and InPlaceABNSync.
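
For example (a sketch assuming the 2D case; the 1D and 3D variants should behave the same way):

```python
import torch
from inplace_abn import InPlaceABN

bn = torch.nn.BatchNorm2d(64)
abn = InPlaceABN(64)

# State dicts saved with Pytorch >= 1.0.0 include a num_batches_tracked
# entry; these can now be loaded directly.
abn.load_state_dict(bn.state_dict())
```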