Releases: e3nn/e3nn
2022-12-12
Added
- L=12 spherical harmonics
Fixed
- `TensorProduct.visualize` now works even if the `TensorProduct` is on the GPU.
- GitHub Actions only trigger a push to Coveralls if the corresponding token is set in GitHub secrets.
- Batchnorm
[0.5.0] - 2022-04-13
Added
- Sparse Voxel Convolution
- Clebsch-Gordan coefficients are computed via a change of basis from the complex to real basis. (see #341)
- `o3`, `nn` and `io` are accessible through `e3nn`. For instance `e3nn.o3.rand_axis_angle`.
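The change of basis from complex to real spherical harmonics mentioned above can be illustrated for `l = 1`. This is a minimal plain-Python sketch of the idea, not e3nn's internal code; the matrix `U` and the helper `mat_mul_dagger` are illustrative names.

```python
# Hypothetical sketch: the unitary change of basis U from complex
# spherical harmonics (columns: m = -1, 0, +1) to real ones
# (rows: m = -1, 0, +1) for l = 1.  Real Clebsch-Gordan coefficients
# then follow from the complex ones by conjugating with U.
s = 2 ** -0.5
U = [
    [1j * s, 0, 1j * s],   # real m = -1
    [0,      1, 0],        # real m =  0
    [s,      0, -s],       # real m = +1
]

def mat_mul_dagger(A):
    """Return A @ A^H (A times its conjugate transpose)."""
    n = len(A)
    return [[sum(A[i][k] * A[j][k].conjugate() for k in range(n))
             for j in range(n)] for i in range(n)]

# U is unitary: U @ U^H must be the identity.
P = mat_mul_dagger(U)
for i in range(3):
    for j in range(3):
        assert abs(P[i][j] - (1 if i == j else 0)) < 1e-12
```

Because `U` is unitary, transforming both sides of the complex Clebsch-Gordan relation by it preserves orthonormality, which is what makes the real coefficients well defined.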
Changed
- Since the code is no longer tested against `torch==1.8.0`, it is only tested against `torch>=1.10.0`.
Fixed
- `wigner_3j` now always returns a contiguous copy regardless of dtype or device.
[0.4.4] - 2021-12-15
Fixed
- Remove `CartesianTensor._rtp`. Instead recompute the `ReducedTensorProduct` every time. The user can save the `ReducedTensorProduct` to avoid creating it each time.
- `equivariance_error` no longer keeps around unneeded autograd graphs.
- `CartesianTensor` builds `ReducedTensorProduct` with the correct device/dtype when called without one.
Added
- Created a module for reflected imports allowing for nice syntax for creating irreps, e.g. `from e3nn.o3.irreps import l3o  # same as Irreps("3o")`
- Add `uvu<v` mode for `TensorProduct`. Compute only the upper triangular part of the `uv` terms.
- (beta) `TensorSquare`. Computes `x \otimes x` and decomposes it.
- `equivariance_error` now tells you which arguments had which error.
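The `uvu<v` mode above restricts the weight paths to the upper triangle of the `(u, v)` index pairs. A plain-Python sketch of the path enumeration (the `paths` helper is hypothetical, not e3nn API) shows how this cuts the path count from `mul**2` to `mul * (mul - 1) / 2`:

```python
from itertools import product

def paths(mode, mul):
    """Hypothetical helper: enumerate (u, v) weight-index pairs for a
    TensorProduct-style instruction with both inputs of multiplicity `mul`."""
    if mode == "uvuv":
        return [(u, v) for u, v in product(range(mul), repeat=2)]
    if mode == "uvu<v":
        # keep only the upper triangular part of the uv terms
        return [(u, v) for u, v in product(range(mul), repeat=2) if u < v]
    raise ValueError(mode)

mul = 4
assert len(paths("uvuv", mul)) == mul * mul              # all 16 pairs
assert len(paths("uvu<v", mul)) == mul * (mul - 1) // 2  # 6 pairs with u < v
```

This is useful when the product is symmetric in its two inputs, so the `u > v` terms would be redundant.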
Changed
- Give up support of Python 3.6; set `python_requires='>=3.7'` in setup.
- Optimize `ReducedTensorProduct` a little: solve the linear system only once per irrep instead of 2L+1 times.
- Do not scale line width by `path_weight` in `TensorProduct.visualize`.
- `equivariance_error` now transforms its inputs to float64 by default, regardless of the dtype used for the calculation itself.
[0.4.3] - 2021-11-18
Fixed
- `ReducedTensorProduct`: replace the QR decomposition with `orthonormalize` applied to the projector `X.T @ X`. This keeps `ReducedTensorProduct` deterministic because the projectors and `orthonormalize` are both deterministic. The output of `orthonormalize` also appears to be highly sparse (luckily).
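The idea behind the fix above: a QR decomposition can differ between runs and backends (sign and pivoting ambiguities), while a Gram-Schmidt pass over the rows of a fixed projector, in a fixed order, always yields the same basis. A minimal sketch, assuming a plain Gram-Schmidt (`orthonormalize_rows` is an illustrative stand-in; e3nn's `orthonormalize` differs in detail):

```python
def orthonormalize_rows(rows, eps=1e-9):
    """Deterministic Gram-Schmidt over a list of row vectors,
    dropping rows that become numerically zero."""
    basis = []
    for r in rows:
        r = list(r)
        for b in basis:
            c = sum(x * y for x, y in zip(r, b))
            r = [x - c * y for x, y in zip(r, b)]
        norm = sum(x * x for x in r) ** 0.5
        if norm > eps:
            basis.append([x / norm for x in r])
    return basis

# Example projector onto the xy-plane: X.T @ X where X has orthonormal
# rows [1,0,0] and [0,1,0].  Its rows span that same plane, so
# orthonormalizing them deterministically recovers a rank-2 basis.
P = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0]]
B = orthonormalize_rows(P)
assert len(B) == 2
```

Because the input (the projector) and the procedure (fixed-order Gram-Schmidt) are both deterministic, the resulting basis is reproducible across runs.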
[0.4.2] - 2021-11-08
This release, coupled with the release of `opt-einsum-fx==0.1.4`, aims to fix slowness in the instantiation of `TensorProduct`.
The two main changes that improved the instantiation time are:
- Turning off the compilation of `TensorProduct.right` by default
- Replacing the actual computation of `torch.einsum` and `torch.tensordot` with a prediction of their output shapes in the tracer used by `opt-einsum-fx` to collect tensor shapes
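The second change works because, to collect shapes, the tracer never needs the actual tensor values: the output shape of an einsum is fully determined by the equation and the input shapes. A plain-Python sketch of that idea (the `einsum_output_shape` helper is illustrative, not opt-einsum-fx's actual code; it only handles explicit equations without ellipsis):

```python
def einsum_output_shape(equation, *shapes):
    """Predict the output shape of an einsum without computing it.
    Supports explicit equations like 'ij,jk->ik' (no ellipsis)."""
    inputs, output = equation.split("->")
    dims = {}
    for spec, shape in zip(inputs.split(","), shapes):
        assert len(spec) == len(shape), "rank mismatch"
        for label, size in zip(spec, shape):
            # each index label must map to one consistent size
            assert dims.setdefault(label, size) == size, "inconsistent sizes"
    return tuple(dims[label] for label in output)

assert einsum_output_shape("ij,jk->ik", (2, 3), (3, 4)) == (2, 4)
assert einsum_output_shape("bij,bjk->bik", (8, 2, 3), (8, 3, 4)) == (8, 2, 4)
```

Skipping the numeric contraction during tracing is what removes the instantiation-time cost.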
Added
- `irrep_normalization` and `path_normalization` for `TensorProduct`
- `compile_right` flag to `TensorProduct`
- New global flag `jit_script_fx` to optionally turn off `torch.jit.script` of fx code
[0.4.1] - 2021-10-29
Added
- Add `to_cartesian()` to `CartesianTensor`
Fixed
- Make it work with PyTorch 1.10.0
[0.4.0] - 2021-10-05
Changed
- Breaking change: normalization constants for `TensorProduct` and `Linear`. Now `Linear(2x0e + 7x0e, 0e)` is equivalent to `Linear(9x0e, 0e)`. Models with inhomogeneous multiplicities will be affected by this change!
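The breaking change above can be pictured as the normalization constant depending only on the total fan-in, so splitting `9x0e` into `2x0e + 7x0e` no longer changes it. A minimal sketch of that idea (the `linear_norm_constant` helper is hypothetical, not e3nn's actual code):

```python
def linear_norm_constant(muls):
    """Per-path normalization 1/sqrt(total fan-in), summed over all
    input irreps of a Linear-style layer (scalar irreps assumed)."""
    fan_in = sum(muls)
    return fan_in ** -0.5

# `Linear(2x0e + 7x0e, 0e)` and `Linear(9x0e, 0e)` now share one constant,
# because 2 + 7 and 9 give the same total fan-in.
assert linear_norm_constant([2, 7]) == linear_norm_constant([9])
```

Before the change, models whose multiplicities were split unevenly across irreps effectively received different per-path scaling, which is why inhomogeneous-multiplicity models are affected.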
Fixed
- Remove `profiler.record_function` calls that caused trouble with TorchScript.
- The homemade implementation of `radius_graph` was ignoring the argument `r_max`.
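The intended behaviour of a `radius_graph` is to return directed edges only between distinct points closer than `r_max`. A plain-Python sketch of that behaviour (not e3nn's fallback implementation):

```python
def radius_graph(pos, r_max):
    """Directed edges (i, j), i != j, between points within r_max.
    O(n^2) illustrative sketch; the bug fixed here was r_max being ignored."""
    edges = []
    for i, p in enumerate(pos):
        for j, q in enumerate(pos):
            if i != j:
                d = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                if d < r_max:
                    edges.append((i, j))
    return edges

pos = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
# only the first two points are within 1.5 of each other
assert radius_graph(pos, r_max=1.5) == [(0, 1), (1, 0)]
```

Ignoring `r_max` would instead connect every pair of points, silently making neighborhoods global.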
[0.3.5] - 2021-08-27
Fixed
- `Extract` uses `CodeGenMixin` to avoid strange recursion errors during training.
- Add missing call to `normalize` in `axis_angle_to_quaternion`.
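Why the missing `normalize` mattered: the axis-angle to quaternion formula `q = (cos(a/2), sin(a/2) * axis)` only yields a unit quaternion if the axis is a unit vector. A minimal sketch of the formula in plain Python (illustrative, not e3nn's implementation):

```python
import math

def axis_angle_to_quaternion(axis, angle):
    """q = (cos(a/2), sin(a/2) * axis / |axis|).
    The fix was the normalization of `axis` shown here."""
    n = math.sqrt(sum(a * a for a in axis))
    axis = [a / n for a in axis]  # without this, |q| != 1 for non-unit axes
    h = angle / 2
    return [math.cos(h)] + [math.sin(h) * a for a in axis]

# A non-unit axis still yields a unit quaternion.
q = axis_angle_to_quaternion([0.0, 0.0, 2.0], math.pi / 2)
assert abs(sum(x * x for x in q) - 1.0) < 1e-12
```

Without the normalization, a non-unit axis scales the vector part of the quaternion, producing a rotation-plus-scaling rather than a pure rotation.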
[0.3.4] - 2021-08-20
Fixed
- `ReducedTensorProducts`: `normalization` and `filter_ir_mid` were not properly propagated through the recursive calls; this bug has no effect if the default values were used.
- Use `torch.linalg.eigh` instead of the deprecated `torch.symeig`.
Added
- (dev only) Pre-commit hooks that run pylint and flake8. These catch some common mistakes/style issues.
- Classes to do `SO(3)` grid transform (not fast) and an activation function using it.
- Add `f_in` and `f_out` to `o3.Linear`.
- `PBC` guide in the docs.
[0.3.3] - 2021-06-21
Changed
- `FullyConnectedNet` is now a `torch.nn.Sequential`
Fixed
- `BatchNorm` was not equivariant for pseudo-scalars
Added
- `biases` argument to `o3.Linear`
- `nn.models.v2106`: `MessagePassing` takes a sequence of irreps
- `nn.models.v2106`: `Convolution` inspired from "Batch Normalization Biases Residual Blocks Towards the Identity Function in Deep Networks"