Releases: e3nn/e3nn

2022-12-12

Added

  • Spherical harmonics up to L = 12
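
A minimal sketch of evaluating the new maximum degree (the normalization value shown is one of the library's standard options):

```python
import torch
from e3nn import o3

# Real spherical harmonics of degree l = 12 for a batch of directions.
x = torch.randn(10, 3)
sh = o3.spherical_harmonics(12, x, normalize=True, normalization="component")
print(sh.shape)  # torch.Size([10, 25]); 2*12 + 1 = 25 components
```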

Fixed

  • TensorProduct.visualize now works even if the TP is on the GPU (see the sketch after this list).
  • GitHub Actions only push to Coveralls if the corresponding token is set in the repository's GitHub secrets.
  • BatchNorm
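
A minimal sketch of the visualize fix, assuming a CUDA device and matplotlib are available:

```python
import torch
from e3nn import o3

tp = o3.FullyConnectedTensorProduct("2x0e + 1x1o", "0e + 1o", "2x0e + 1x1o")
if torch.cuda.is_available():
    tp = tp.cuda()
tp.visualize()  # previously raised for modules living on the GPU
```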

[0.5.0] - 2022-04-13

Added

  • Sparse Voxel Convolution
  • Clebsch-Gordan coefficients are computed via a change of basis from the complex to the real basis (see #341).
  • o3, nn and io are accessible directly through e3nn, e.g. e3nn.o3.rand_axis_angle (see the sketch after this list).
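
A short sketch of the new top-level access:

```python
import e3nn

# Submodules hang off the package itself; no `from e3nn import o3` needed.
axis, angle = e3nn.o3.rand_axis_angle(5)
print(axis.shape, angle.shape)  # torch.Size([5, 3]) torch.Size([5])
```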

Changed

  • The code is no longer tested against torch==1.8.0; it is now only tested against torch>=1.10.0

Fixed

  • wigner_3j now always returns a contiguous copy regardless of dtype or device
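
A quick check of the wigner_3j fix; the dtype keyword is assumed from the API of this release:

```python
import torch
from e3nn import o3

# 3j symbols coupling l1=1, l2=1 -> l3=2.
w = o3.wigner_3j(1, 1, 2, dtype=torch.float64)
assert w.is_contiguous()
print(w.shape)  # torch.Size([3, 3, 5])
```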

[0.4.4] - 2021-12-15

Fixed

  • Removed CartesianTensor._rtp; the ReducedTensorProduct is now recomputed every time. Users can keep a ReducedTensorProduct around themselves to avoid rebuilding it on each call (see the sketch after this list).
  • *equivariance_error no longer keeps around unneeded autograd graphs
  • CartesianTensor builds ReducedTensorProduct with the correct device/dtype when called without one
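
A sketch of keeping the ReducedTensorProducts around; the reduced_tensor_products helper and the rtp argument are assumptions based on the CartesianTensor API:

```python
import torch
from e3nn.io import CartesianTensor

ct = CartesianTensor("ij=ji")  # symmetric rank-2 tensor, e.g. a stress tensor

x = torch.randn(100, 3, 3)
x = (x + x.transpose(-1, -2)) / 2  # enforce the declared symmetry

# Build the ReducedTensorProducts once and pass it in explicitly,
# since CartesianTensor no longer caches it in _rtp.
rtp = ct.reduced_tensor_products(x)
irr = ct.from_cartesian(x, rtp)
```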

Added

  • Created a module of reflected imports allowing a nicer syntax for creating irreps, e.g. from e3nn.o3.irreps import l3o # same as Irreps("3o")
  • Add uvu<v mode for TensorProduct, which computes only the upper-triangular part of the uv terms (see the sketch after this list).
  • (beta) TensorSquare: computes x ⊗ x and decomposes it.
  • *equivariance_error now tells you which arguments had which error
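
A sketch of the new mode; the output multiplicity mul*(mul-1)/2 is an assumption, counting the strict upper triangle of the 4x4 channel grid:

```python
from e3nn import o3

# 'uvu<v' pairs channel u of the first input with channel v of the second,
# keeping only u < v: for mul = 4 that gives 4*3/2 = 6 output channels.
tp = o3.TensorProduct(
    "4x1e", "4x1e", "6x0e",
    [(0, 0, 0, "uvu<v", True)],  # (i_in1, i_in2, i_out, mode, has_weight)
)
```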

Changed

  • Dropped support for Python 3.6; set python_requires='>=3.7' in setup
  • Slightly optimized ReducedTensorProduct: the linear system is solved only once per irrep instead of 2L+1 times.
  • Do not scale line width by path_weight in TensorProduct.visualize
  • *equivariance_error now casts its inputs to float64 by default, regardless of the dtype used for the calculation itself

[0.4.3] - 2021-11-18

Fixed

  • ReducedTensorProduct: replaced the QR decomposition with an orthonormalization of the projector X.T @ X.
    This keeps ReducedTensorProduct deterministic because both the projectors and orthonormalize are deterministic.
    The output of orthonormalize also appears (luckily) to be highly sparse.

[0.4.2] - 2021-11-08

This release, together with the release of opt-einsum-fx 0.1.4, aims to fix the slow instantiation of TensorProduct.
The two main changes that improved the instantiation time are

  • Turning off the compilation of TensorProduct.right by default
  • Replacing the actual computation of torch.einsum and torch.tensordot with a prediction of their output shapes in the tracer that opt-einsum-fx uses to collect tensor shapes

Added

  • irrep_normalization and path_normalization options for TensorProduct
  • compile_right flag for TensorProduct (see the sketch after this list)
  • New global flag jit_script_fx to optionally turn off torch.jit.script compilation of the generated fx code
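
A sketch combining the new options; exposing the global flag via e3nn.set_optimization_defaults is an assumption:

```python
import e3nn
from e3nn import o3

# Globally opt out of torch.jit.script-ing the generated fx code.
e3nn.set_optimization_defaults(jit_script_fx=False)

tp = o3.TensorProduct(
    "4x1e", "4x1e", "4x0e",
    [(0, 0, 0, "uvu", True)],
    irrep_normalization="component",  # or "norm" / "none"
    path_normalization="element",     # or "path" / "none"
    compile_right=True,  # .right is no longer compiled by default
)
```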

[0.4.1] - 2021-10-29

Added

  • Add to_cartesian() to CartesianTensor
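
A round-trip sketch with the new method:

```python
import torch
from e3nn.io import CartesianTensor

ct = CartesianTensor("ij=ji")
x = torch.randn(3, 3)
x = (x + x.T) / 2  # from_cartesian expects the declared symmetry

irr = ct.from_cartesian(x)  # Cartesian -> irreps basis
y = ct.to_cartesian(irr)    # new: irreps basis -> Cartesian
assert torch.allclose(x, y, atol=1e-5)
```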

Fixed

  • Make it work with PyTorch 1.10.0

[0.4.0] - 2021-10-05

Changed

  • Breaking change: the normalization constants for TensorProduct and Linear changed. Now Linear("2x0e + 7x0e", "0e") is equivalent to Linear("9x0e", "0e"). Models with inhomogeneous multiplicities will be affected by this change! (See the sketch below.)
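
A sketch of the claimed equivalence; copying the flat .weight between the two layers is an illustration and assumes both expose nine weights in the same input order:

```python
import torch
from e3nn import o3

lin_a = o3.Linear("2x0e + 7x0e", "0e")
lin_b = o3.Linear("9x0e", "0e")

# Under the new normalization both layers scale their nine scalar inputs
# identically, so splitting a multiplicity no longer changes the statistics.
with torch.no_grad():
    lin_b.weight.copy_(lin_a.weight)

x = torch.randn(10, 9)
assert torch.allclose(lin_a(x), lin_b(x))
```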

Fixed

  • Removed profiler.record_function calls that caused trouble with TorchScript
  • The homemade implementation of radius_graph was ignoring the argument r_max

[0.3.5] - 2021-08-27

Fixed

  • Extract uses CodeGenMixin to avoid strange recursion errors during training
  • Add missing call to normalize in axis_angle_to_quaternion
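
A sketch of why the normalize call matters; without it, a non-unit axis yielded a non-unit quaternion:

```python
import math
import torch
from e3nn import o3

axis = torch.tensor([0.0, 0.0, 2.0])  # deliberately not unit length
angle = torch.tensor(math.pi / 2)

q = o3.axis_angle_to_quaternion(axis, angle)
print(q.norm())  # ~1.0 now that the axis is normalized internally
```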

[0.3.4] - 2021-08-20

Fixed

  • ReducedTensorProducts: normalization and filter_ir_mid were not properly propagated through the recursive calls; this bug has no effect if the default values were used
  • Use torch.linalg.eigh instead of the deprecated torch.symeig

Added

  • (dev only) Pre-commit hooks that run pylint and flake8. These catch some common mistakes/style issues.
  • Classes to do the SO(3) grid transform (not fast) and an activation function using it (see the sketch after this list)
  • Add f_in and f_out to o3.Linear
  • PBC guide in the docs
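
A sketch of the two additions together; the SO3Activation argument names are assumed from the class introduced here:

```python
import torch
from e3nn import o3, nn

# Grid activation on SO(3): go to a grid of rotations, apply the
# nonlinearity pointwise, and come back (equivariant, but not fast).
act = nn.SO3Activation(lmax_in=3, lmax_out=3, act=torch.tanh, resolution=20)

# f_in/f_out add an extra feature axis to o3.Linear.
lin = o3.Linear("0e + 1o", "0e + 1o", f_in=8, f_out=8)
```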

[0.3.3] - 2021-06-21

Changed

  • FullyConnectedNet is now a torch.nn.Sequential
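
A sketch of what the change buys; the layer sizes are arbitrary:

```python
import torch
from e3nn import nn

fc = nn.FullyConnectedNet([16, 64, 16], act=torch.relu)

# Being a torch.nn.Sequential, the net can now be indexed and iterated.
print(fc[0])
out = fc(torch.randn(10, 16))
```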

Fixed

  • BatchNorm was not equivariant for pseudo-scalars

Added

  • biases argument to o3.Linear (see the sketch after this list)
  • nn.models.v2106: MessagePassing takes a sequence of irreps
  • nn.models.v2106: Convolution inspired by Batch Normalization Biases Residual Blocks Towards the Identity Function in Deep Networks
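
A sketch of the new argument; biases are applied only to scalar (0e) outputs, which is what keeps the layer equivariant:

```python
from e3nn import o3

# biases=True adds a learned bias to the 0e outputs only;
# biasing l > 0 outputs would break equivariance.
lin = o3.Linear("2x0e + 1x1o", "2x0e + 1x1o", biases=True)
```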