What constants to have and which to calculate #3843
Comments
Not a bad idea to store the constants in a text file, which is easier to read (and maintain) than having to dig into the source code. The overhead of initially loading it on import is negligible. As for calculated vs. hardcoded: if a constant is calculated only once at the beginning of the session, that overhead is negligible too. But the calculated result might not exactly match the hardcoded value in the last few decimal places, and that difference would propagate through the rest of the calculations. Then again, the calculated value should be more accurate, right?
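To see how small the calculated-vs-tabulated discrepancy actually is, here is a sketch that computes the Thomson cross-section from other CODATA 2018 values and compares it to the tabulated number (the constant values are real CODATA 2018 entries; the variable names are just for this illustration):

```python
import math

# CODATA 2018 input values (e and c are exact by definition)
c = 299_792_458.0            # speed of light, m/s
e = 1.602_176_634e-19        # elementary charge, C
m_e = 9.109_383_7015e-31     # electron mass, kg
eps0 = 8.854_187_8128e-12    # vacuum permittivity, F/m

# classical electron radius, then Thomson cross-section
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
sigma_T = 8 * math.pi / 3 * r_e**2

# tabulated CODATA 2018 value, m^2
SIGMA_T_CODATA = 6.652_458_7321e-29

# tiny but generally nonzero: the computed value agrees with the
# tabulated one only to within the rounding of the inputs
rel_diff = abs(sigma_T - SIGMA_T_CODATA) / SIGMA_T_CODATA
```

The relative difference is at the level of the rounding of the inputs, far below the quoted experimental uncertainty, which supports the point that recomputing derived constants is safe.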
What I wonder is how many of these are simply calculated, versus how many actually come from different measurement results? That I don't know. Another wrinkle in just using this list directly is that we would still have to provide hand-picked variable names for each of them (or at least the ones we want to include in Astropy).
@embray - thinking a bit more about this, my question about calculability is perhaps a red herring: we can simply provide an interface to CODATA (with some additional constants). But good point about the names. NIST must have some list of symbols for these constants, since they do display them on the web interface, so maybe we can ask for a table that includes them? Fortunately, without the many forms in different units, the list is actually not that long...
Hmm, I couldn't find anything obvious listing the (presumably LaTeX) symbols used for each constant. Maybe I'll look into proposing that to them, since it would be nice. I don't know to what extent these are "standardized", though most of the ones I looked at are fairly standard.
I hope you don't mind me coming into this as an outsider, but isn't a Python file already a "text file"? Couldn't most of the benefits of the CODATA list be gained by programmatically converting it to Python, with Constants and such, and perhaps slightly changing its formatting, e.g.:
…
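A minimal sketch of what such a conversion could look like. The 50/25/25 column widths and the sample rows are assumptions for this illustration; the real `allascii.txt` layout would need to be checked against a given CODATA release:

```python
# Build a small CODATA-style fixed-width table (illustrative layout)
ROWS = [
    ("speed of light in vacuum", "299 792 458", "(exact)", "m s^-1"),
    ("Newtonian constant of gravitation",
     "6.674 30 e-11", "0.000 15 e-11", "m^3 kg^-1 s^-2"),
]
SAMPLE = "\n".join(f"{n:<50}{v:<25}{u:<25}{t}" for n, v, u, t in ROWS)

def parse_codata(text):
    """Parse a fixed-width CODATA-style table into a dict of
    name -> (value, uncertainty, unit)."""
    constants = {}
    for line in text.splitlines():
        name = line[:50].rstrip()
        # CODATA groups digits with spaces ("299 792 458"); drop them
        value = float(line[50:75].replace(" ", ""))
        unc = line[75:100].strip()
        uncertainty = 0.0 if unc == "(exact)" else float(unc.replace(" ", ""))
        unit = line[100:].strip()
        constants[name] = (value, uncertainty, unit)
    return constants

table = parse_codata(SAMPLE)
```

Whether this runs at build time (generating a `.py` file) or at import time is exactly the trade-off discussed below.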
@sYnfo - the proposal is to do the conversion on-the-fly, so that the source remains as close as possible to the original and substituting a newer CODATA file is trivial. The main question really is how to associate symbols with the descriptive names used, at least in the ASCII version of the CODATA table.
@mhvk Right, I'm just not sure it's worth the additional complexity. If new versions were released fairly frequently, then having it in version control and doing on-the-fly conversion would be doubly nice, but from what I can see there was a 5-year gap between the last two versions. As for associating symbols with descriptive names, my guess would be there's no way around having it stored in Astropy, perhaps in the form of …
Unless NIST publishes a file with the relations, but I couldn't find anything immediately.
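One form such a hand-maintained association could take is a simple dict from CODATA descriptive names to short Python identifiers. The pairings below are the conventional symbols, but the dict itself (and the helper name) is just an illustration:

```python
# Hypothetical hand-picked mapping: CODATA descriptive name -> symbol
SYMBOLS = {
    "speed of light in vacuum": "c",
    "Newtonian constant of gravitation": "G",
    "Planck constant": "h",
    "electron mass": "m_e",
    "elementary charge": "e",
}

def attribute_name(codata_name):
    """Return the short symbol for a CODATA descriptive name,
    or None if we haven't picked one."""
    return SYMBOLS.get(codata_name)
```

Constants without an entry could still be exposed through a dict-style lookup by their full descriptive name.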
I suppose the way it is done in SciPy could be an inspiration [0]. Looking at this, though, I must wonder whether there's a need for a separate CODATA constants package that one could depend on, instead of handling this issue in (at least) two different places. [0] https://github.com/scipy/scipy/blob/master/scipy/constants/codata.py
@sYnfo - thanks for pointing that out!
I didn't even realize SciPy had a constants package. This takes me back to the argument I've made a few times lately that we should just bite the bullet and make SciPy a hard dependency of Astropy. The previous installation concerns are obviated by the existence of scientific Python distributions, which any non-expert user should just be using anyway (not that expert users shouldn't use them too, I just mean especially).
(I'm not totally crazy about how the SciPy module keeps the entire text in the Python module, meaning it hangs around in memory forever, unnecessarily. Yes, I know memory is cheap now and we're only talking a couple of kB at most, but seeing things like that still makes me uncomfortable when the data could just as well live in a text file that's read once and forgotten.)
Agreed that the approach is not ideal, but especially if SciPy does become a dependency, it would be worth our while to just send a PR their way. I'd also like to move …
@embray I guess someone could make the counterargument that reading it from the file will slow down startup ever so slightly. You could just …
Well, yes, I was just picking nits.
@embray wrote (https://github.com/astropy/astropy/pull/4229/files#r41881127) in #4229:
…
I think for certain known constants we should definitely be doing that, especially since we're not doing any sort of uncertainty propagation when we convert our constants from one unit to another. |
I do like very much how it would let us parse the whole CODATA list without ending up with a very large set of unhandily named constants. (Note that CODATA includes covariances, so in principle we could even include those -- especially if we get to use …
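Those covariances would feed directly into standard first-order uncertainty propagation for derived constants. A minimal sketch, with made-up numbers (not real CODATA values) for illustration:

```python
def propagated_variance(grad, cov):
    """First-order variance of f(x): grad(f)^T . cov . grad(f)."""
    n = len(grad)
    return sum(grad[i] * cov[i][j] * grad[j]
               for i in range(n) for j in range(n))

# Example: f = x * y with x = 2 +/- 0.1, y = 3 +/- 0.2,
# and cov(x, y) = 0.01 (illustrative values only)
grad = [3.0, 2.0]            # (df/dx, df/dy) = (y, x)
cov = [[0.01, 0.01],
       [0.01, 0.04]]
var_f = propagated_variance(grad, cov)
```

Ignoring the off-diagonal terms here would give a noticeably different variance, which is exactly why having the covariances would matter for correlated constants.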
@mhvk - Would the new approach of versioning constants from several sources be a good enough solution for this issue?
I think the link to …
Is this still an issue 3 years after the last comment? |
I guess there is no pressing need, though I think the issue still stands... Ideally, one would calculate some constants from others. |
In #3839, we added the Thomson cross-section to constants and two questions were raised which merit some pondering:
@embray wrote:
One could perhaps think of using the CODATA list: http://physics.nist.gov/cuu/Constants/Table/allascii.txt
And I wondered, given that this cross-section is calculable from other constants: