Uniform default precision for all backends? #1398

Closed · matthewfeickert opened this issue on Apr 5, 2021 · 4 comments · Fixed by #1400
Assignees: matthewfeickert
Labels: perf (A code change that improves performance), question (Further information is requested)

Comments

matthewfeickert (Member) commented on Apr 5, 2021

Question

As seen in the example for pyhf.set_backend, the default precision for the NumPy backend is 64b, while the default precision for the TensorFlow and PyTorch backends is 32b.

```python
# NumPy backend (and, presumably, the JAX backend): 64b default
self.precision = kwargs.get('precision', '64b')
self.precision = kwargs.get('precision', '64b')

# TensorFlow and PyTorch backends: 32b default
self.precision = kwargs.get('precision', '32b')
self.precision = kwargs.get('precision', '32b')
```
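For reference, a minimal sketch of how this shows up through pyhf.set_backend, assuming the pre-change defaults quoted above (pyhf.tensorlib.precision reports the active backend's precision, per the constructor lines shown):

```python
import pyhf

# At the time of this issue, the NumPy backend defaulted to 64b ...
pyhf.set_backend("numpy")
print(pyhf.tensorlib.precision)  # '64b'

# ... while the PyTorch backend defaulted to 32b
pyhf.set_backend("pytorch")
print(pyhf.tensorlib.precision)  # '32b'

# The precision can always be requested explicitly
pyhf.set_backend("pytorch", precision="64b")
print(pyhf.tensorlib.precision)  # '64b'
```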

Given that there are known differences between 64b and 32b results, it is probably worth reconsidering / discussing whether we should have a default precision of 64b for all backends. My guess is that it is non-intuitive that switching from NumPy to PyTorch changes the precision.
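As a concrete illustration of the gap, the machine epsilon at the two precisions differs by roughly nine orders of magnitude, which can matter for things like minimization tolerances and uncertainty estimates:

```python
import numpy as np

# Machine epsilon: the smallest relative spacing between representable floats
print(np.finfo(np.float32).eps)  # ~1.19e-07 (32b)
print(np.finfo(np.float64).eps)  # ~2.22e-16 (64b)
```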

This is somewhat related to Issue #981 and was brought back to mind by @dguest and @nhartman94 (cf. Discussion #1397) (comments welcome from them both).

matthewfeickert added the question (Further information is requested) and perf (A code change that improves performance) labels on Apr 5, 2021
matthewfeickert changed the title from "Uniform precision for all backends?" to "Uniform default precision for all backends?" on Apr 5, 2021
matthewfeickert self-assigned this on Apr 5, 2021
matthewfeickert (Member Author) commented:

If @lukasheinrich and @kratsg agree here, I have a branch with the changes ready for a PR (after PR #1399 goes in).

lukasheinrich (Contributor) commented:

yeah I agree!

kratsg (Contributor) commented on Apr 6, 2021

I agree. I think we had 32b at first since that was nominally the primary support when we first started with these backends. I see no reason why we shouldn't harmonize the defaults.

matthewfeickert (Member Author) commented:

> I agree. I think we had 32b at first since that was nominally the primary support when we first started with these backends.

Yeah, this was my memory too.
