AmpForm 0.12.0
Release 0.12.0
See all documentation for this version here.
Major interface changes: kinematic variables are no longer computed with NumPy. Instead, AmpForm provides symbolic expressions for these conversions, so that they can be evaluated with different computational back-ends. See #177 for more info.
💡 New features
Kinematic variables are now expressed symbolically (#177)
Closes #174
This PR implements TR-011. It merges the `ampform.data` module into `ampform.kinematics`. Most notably, the recursive helicity angles are now expressed as SymPy expressions, so that they can be computed with different computational back-ends.
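As a sketch of the idea (the symbol names below are illustrative, not AmpForm's actual API): an angle can be written as a SymPy expression over momentum components and then lambdified for a numerical back-end such as NumPy:

```python
import numpy as np
import sympy as sp

# Hypothetical sketch: a polar angle as a SymPy expression over
# three-momentum components (symbol names are illustrative)
px, py, pz = sp.symbols("p_x p_y p_z")
theta = sp.acos(pz / sp.sqrt(px**2 + py**2 + pz**2))

# The same symbolic expression can be converted to a function for any
# computational back-end; here NumPy is chosen
theta_numpy = sp.lambdify([px, py, pz], theta, "numpy")
momenta = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
theta_values = theta_numpy(momenta[:, 0], momenta[:, 1], momenta[:, 2])
```

Swapping `"numpy"` for another lambdify module gives the same computation on a different back-end.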
AmpForm supports Python 3.10 (#172)
⚠️ Interface
implement_doit_method decorator does not take arguments anymore (#178)
The `@implement_doit_method` decorator was using one inline function layer too many. So now you have to write

```python
@implement_doit_method
class MyExpr(UnevaluatedExpression):
    ...
```

instead of

```python
@implement_doit_method()
class MyExpr(UnevaluatedExpression):
    ...
```
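For illustration, here is a minimal sketch of what such a class-level decorator does. This is not AmpForm's actual implementation, and the expression class below is a hypothetical stand-in for an `UnevaluatedExpression` subclass:

```python
import sympy as sp

def implement_doit_method(cls):
    # Minimal sketch: wire SymPy's doit() to the class's evaluate() method,
    # taking the class directly (no extra inline function layer)
    def doit(self, deep=True, **hints):
        expression = self.evaluate()
        return expression.doit(deep=deep, **hints) if deep else expression
    cls.doit = doit
    return cls

@implement_doit_method  # applied without parentheses
class BreakupMomentumSquared(sp.Expr):
    """Hypothetical stand-in for an UnevaluatedExpression subclass."""
    def evaluate(self):
        s, m1, m2 = self.args
        return (s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2) / (4 * s)

s, m1, m2 = sp.symbols("s m1 m2")
unfolded = BreakupMomentumSquared(s, m1, m2).doit()
```

Calling `doit()` unfolds the class into the expression returned by `evaluate()`.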
Renamed CoupledWidth to EnergyDependentWidth (#150)
The `CoupledWidth` class has been renamed to `EnergyDependentWidth`. The name `CoupledWidth` was a bit confusing; more common terms are mass/energy-dependent width or running width.
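For reference, the textbook S-wave energy-dependent (running) width can be sketched symbolically as follows. This is the standard formula, not necessarily AmpForm's exact implementation, and all names are illustrative:

```python
import sympy as sp

# Sketch of the textbook S-wave energy-dependent ("running") width:
#   Gamma(s) = Gamma0 * (q(s) / q(m0**2)) * m0 / sqrt(s)
# Symbol names are illustrative, not AmpForm's API.
s, m0, Gamma0, ma, mb = sp.symbols("s m0 Gamma0 m_a m_b", positive=True)

def breakup_momentum(mandelstam_s):
    # two-body break-up momentum of decay products with masses ma, mb
    return sp.sqrt(
        (mandelstam_s - (ma + mb) ** 2)
        * (mandelstam_s - (ma - mb) ** 2)
        / (4 * mandelstam_s)
    )

width = Gamma0 * breakup_momentum(s) / breakup_momentum(m0**2) * m0 / sp.sqrt(s)
```

At the pole position `s = m0**2`, the expression reduces to `Gamma0`, as expected of an energy-dependent width.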
Added HelicityModel.kinematics, removed HelicityModel.adapter (#177)
Several changes to `HelicityModel` due to #177. Most notably, its `adapter` attribute has been removed in favour of `kinematics`, which is a `dict` of helicity angle `Symbol`s to `Expr`s in terms of four-momentum symbols.
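As an illustration of the idea behind such a mapping (the symbol names below are made up, not AmpForm's actual output), a `dict` of angle `Symbol`s to `Expr`s can be substituted into an intensity expression with plain SymPy:

```python
import sympy as sp

# Hypothetical sketch of a kinematics mapping: a helicity-angle Symbol
# mapped to an Expr in terms of four-momentum components
p1x, p1y = sp.symbols("p1_x p1_y")
phi = sp.Symbol("phi_1")
kinematics = {phi: sp.atan2(p1y, p1x)}  # azimuthal angle of momentum 1

# An intensity expression formulated in terms of the angle symbol...
intensity = sp.cos(phi) ** 2

# ...becomes an expression in four-momentum components by substitution
full_expression = intensity.xreplace(kinematics)
```

The fully substituted expression can then be lambdified as a whole for a chosen numerical back-end.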
🐛 Bug fixes
Minimum SymPy version set to v1.8 (#185)
Google Colab comes with SymPy v1.7, which doesn't have the module `sympy.printing.numpy`, so at least SymPy v1.8 is required.
🔨 Internal maintenance
Importing ampform is about twice as fast now (#189)
With ComPWA/qrules#130 and f08f1f0, `import ampform` is about 2x as fast.
_numpycode() printer methods now use SymPy's module_imports (#187)
Classes like `ArrayAxisSum` were specifically using statements like `printer.module_imports["numpy"].add("sum")`. This is problematic in TensorWaves, which would like to see `"jnp"` and `"tnp"` there (the NumPy interfaces of JAX and TensorFlow). This PR makes this possible.
Another major improvement: `einsum` in `ArrayMultiplication` is used in such a way that `transpose` is not necessary anymore. In addition, the output indices of the `einsum` expression are now specified explicitly (`"...ij,...j->...i"` instead of `"...ij,...j"`), which `tf.einsum` requires.
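The effect of the explicit output specification can be sketched with NumPy (the shapes and values are illustrative, not AmpForm's actual code): `"...ij,...j->...i"` performs a batched matrix-vector product directly, with no separate transpose:

```python
import numpy as np

# Batched matrix-vector product: for each event, multiply a 4x4 matrix
# (e.g. a rotation or boost) with a four-momentum. The explicit output
# spec "...ij,...j->...i" is required by tf.einsum and avoids a
# transpose in NumPy as well.
matrices = np.stack([np.eye(4), 2 * np.eye(4)])        # shape (2, 4, 4)
vectors = np.array([[1.0, 0, 0, 1], [1.0, 0, 0, -1]])  # shape (2, 4)
result = np.einsum("...ij,...j->...i", matrices, vectors)
```

Each row of `result` is the corresponding matrix applied to the corresponding vector.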
📝 Documentation
Links to Binder and Google Colab are now pinned for each version (#179)
The branch name in `conf.py` is now extracted from Read the Docs if possible. This way, all Binder/Colab links lead to the corresponding branch or tag of the repository.