As seen in the example for `pyhf.set_backend`, the default precision for the `numpy` backend is `64b`, while the default precision for the TensorFlow and PyTorch backends is `32b`.
Given that there are known numerical differences between `64b` and `32b` results, it is probably worth reconsidering/discussing whether we should have a default precision of `64b` for all backends. My guess is that it is non-intuitive that switching from NumPy to Torch changes the precision.
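To make the `64b` vs. `32b` difference concrete, here is a small NumPy-only sketch (independent of pyhf) of a quantity that float64 resolves but float32 silently rounds away:

```python
import numpy as np

# 1e-8 is below float32's machine epsilon (~1.19e-7) but well above
# float64's (~2.2e-16), so the same arithmetic gives different answers.
x64 = np.float64(1.0) + np.float64(1e-8)
x32 = np.float32(1.0) + np.float32(1e-8)

print(x64 > 1.0)              # True: float64 keeps the 1e-8 increment
print(x32 > np.float32(1.0))  # False: the sum rounds back to exactly 1.0
```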
This is somewhat related to Issue #981 and was put back into my mind by @dguest and @nhartman94 (cf. Discussion #1397; comments welcome from them both).
I agree. I think we had `32b` at first since that was nominally the primary supported precision when we first started with these backends. I see no reason why we shouldn't harmonize the defaults.
Default precision definitions (at commit 98bb222):

- pyhf/src/pyhf/tensor/numpy_backend.py, line 44
- pyhf/src/pyhf/tensor/jax_backend.py, line 58
- pyhf/src/pyhf/tensor/tensorflow_backend.py, line 16
- pyhf/src/pyhf/tensor/pytorch_backend.py, line 18