
Releases: ivannz/cplxmodule

Keeping up with modern software

14 Jun 13:05

In the 2022.06 major release we increase the minimal versions to python>=3.7 and pytorch>=1.8. Although modern torch now natively supports complex dtypes, no transition to them as the new backend has been made, and we currently still use the split representation with CR-calculus on top (see the discussions in issues #2 and #21).
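
For context, here is a plain-PyTorch sketch of what the split representation entails (an illustration of the idea, not the package's code): every complex quantity is stored as a pair of real tensors, and ordinary real-valued autograd over such expressions amounts to CR (Wirtinger) calculus.

```python
import torch

# split representation: a complex quantity is kept as a pair of real tensors
zr = torch.randn(3, 4, requires_grad=True)
zi = torch.randn(3, 4, requires_grad=True)
wr, wi = torch.randn(3, 4), torch.randn(3, 4)

# the complex product expressed through real-valued ops:
# (a + i b)(c + i d) = (ac - bd) + i (ad + bc)
pr = zr * wr - zi * wi
pi = zr * wi + zi * wr

# real-valued autograd through such expressions is exactly
# CR- (Wirtinger) calculus on the underlying complex function
(pr.sum() + pi.sum()).backward()
print(zr.grad.shape, zi.grad.shape)
```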

The following features have been added:

  • FIX: having a dunder-version in the root of the package is the standard that should be upheld (issue #24)
  • FIX: set the minimal python to 3.7 as pointed out in issue #24
  • FIX: upgraded .utils.spectrum to new native torch complex backend (torch>=1.8)
  • FIX: ensured ONNX support in PR #14
  • ENH: implemented modulus-based max-pooling, requested in issue #17 (a plain-torch sketch of the idea follows this list)
  • FIX: made .Cplx instances deepcopy-able, fixing issue #18
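
Since the pooling item above is easier to grasp from code, here is a plain-torch sketch of the idea behind modulus-based max-pooling; the helper name and signature are made up for illustration and do not reflect the library's actual API.

```python
import torch
import torch.nn.functional as F

def modulus_max_pool2d(zr, zi, kernel_size, stride=None):
    # hypothetical helper (not the library's API): in each pooling window,
    # keep the complex entry with the largest modulus
    _, idx = F.max_pool2d(torch.sqrt(zr * zr + zi * zi),
                          kernel_size, stride, return_indices=True)
    flat = idx.flatten(2)
    take = lambda t: t.flatten(2).gather(2, flat).view_as(idx)
    return take(zr), take(zi)

zr, zi = torch.randn(1, 3, 8, 8), torch.randn(1, 3, 8, 8)
pr, pi = modulus_max_pool2d(zr, zi, kernel_size=2)
print(pr.shape, pi.shape)  # (1, 3, 4, 4) each
```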

The following cosmetic or repo-level modifications have been made:

  • DOC: improved docs for .nn.ModReLU, indicating the sign deviation from the original paper that proposed it (issue #22); a short reference sketch follows this list
  • DOC: added a basic in-repo TOC to the main README docs
  • COSMIT: adopted Black as the code style of choice and introduced pre-commit hooks for developers
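
For reference, a hedged sketch of modReLU as it is usually written (retain the phase, threshold the modulus); the sign of the bias term is exactly where conventions diverge, which is what the doc fix above clarifies. This is an illustration, not the package's code.

```python
import torch

def modrelu(zr, zi, bias, eps=1e-8):
    # modReLU in the original paper's convention: relu(|z| + b) * z / |z|;
    # some implementations flip the sign of the bias, hence the doc note
    mod = torch.sqrt(zr * zr + zi * zi)
    scale = torch.relu(mod + bias) / (mod + eps)
    return zr * scale, zi * scale
```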

Complex-valued Neural Networks and Variational Dropout

10 Jan 21:26

This is a nominal major release, as it increases the minimal pytorch version from 1.4 to 1.7.

The following features have been added:

  • experimental ONNX support (PR #14)

The version has been bumped from 2020 to 2021 to reflect the new year.

Complex-valued Neural Networks and Variational Dropout

16 Aug 22:35

This is a minor mid-month release.

The following features were added:

  • Complex transposed convolutions (#8), squeeze/unsqueeze methods for Cplx (#7), and support for the view and view_as methods of Cplx (#6), by Hendrik Schröter
  • Tensor-to-Cplx converter layers for the special torch format of complex tensors (the last dim is exactly 2), see torch.fft; a minimal conversion sketch follows this list
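
Below is a minimal sketch of the conversion the new converter layers handle (plain torch, illustration only; the package's layer names may differ): the legacy torch complex format keeps real and imaginary parts interleaved along a trailing dimension of size 2.

```python
import torch

# legacy torch complex format: a real tensor whose last dim is exactly 2
x = torch.randn(5, 16, 2)            # (..., 2) = (real, imag)

# to the split pair used by Cplx, and back
zr, zi = x.unbind(dim=-1)
x_back = torch.stack([zr, zi], dim=-1)
assert torch.equal(x, x_back)
```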

The following bugs were fixed:

  • Shape mismatch in nn.init.cplx_trabelsi_independent_, which prevented it from working properly (#11)

Complex-valued Neural Networks and Variational Dropout

09 Jul 21:42

This is a minor release that adds support for 3d real- and complex-valued convolutions and Variational Dropout for them.

Complex-valued networks and Bayesian sparsification methods

27 May 12:03

This release includes a fix that makes masked layers work in multi-GPU settings and an update to sparsity accounting.

Complex-valued networks and Bayesian sparsification methods

12 Mar 15:58

An extension for torch that adds basic building blocks for complex-valued neural networks with batch normalization and weight initialization. It provides an implementation of real- and complex-valued Bayesian sparsification techniques: Variational Dropout and Automatic Relevance Determination. Finally, it contains a fully functional package for real- and complex-valued maskable layers.
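
To convey the gist of the sparsification side, below is a conceptual real-valued variational-dropout linear layer in plain PyTorch. It is a sketch of the general technique (local reparameterization, the Molchanov et al. (2017) KL approximation, and log-alpha thresholding), not cplxmodule's implementation; all names in it are illustrative.

```python
import torch
import torch.nn.functional as F
from torch import nn

class LinearVD(nn.Module):
    """Conceptual variational-dropout linear layer (illustration only)."""

    def __init__(self, in_features, out_features, threshold=3.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.log_sigma2 = nn.Parameter(torch.full((out_features, in_features), -10.0))
        self.threshold = threshold  # prune weights with log-alpha above this

    @property
    def log_alpha(self):
        # alpha = sigma^2 / theta^2 is the per-weight dropout rate
        return self.log_sigma2 - torch.log(self.weight ** 2 + 1e-12)

    def forward(self, x):
        if self.training:
            # local reparameterization: sample the pre-activations directly
            mu = F.linear(x, self.weight)
            var = F.linear(x ** 2, self.log_sigma2.exp())
            return mu + var.clamp(min=1e-12).sqrt() * torch.randn_like(mu)
        # at evaluation time, zero out weights with a high dropout rate
        return F.linear(x, self.weight * (self.log_alpha < self.threshold))

    def kl_penalty(self):
        # Molchanov et al. (2017) approximation of the negative KL term
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        la = self.log_alpha
        neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * F.softplus(-la) - k1
        return -neg_kl.sum()

layer = LinearVD(16, 8)
loss = layer(torch.randn(4, 16)).pow(2).mean() + 1e-3 * layer.kl_penalty()
loss.backward()
```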