What constants to have and which to calculate #3843

Open
mhvk opened this issue Jun 11, 2015 · 21 comments

@mhvk
Contributor

mhvk commented Jun 11, 2015

In #3839, we added the Thomson cross-section to constants and two questions were raised which merit some pondering:

@embray wrote:

Certainly no problem with adding this. Though it makes me think there are countless other constants we could be adding easily. I'm almost tempted to make the constants module into just a big text file, from which constants can be lazy-loaded on-demand (so that when you just need one of them it's not necessary to create Constant objects for all of them).

One could perhaps think of using the CODATA list: http://physics.nist.gov/cuu/Constants/Table/allascii.txt

And I wondered, given that this cross-section is calculable from other constants:

It does beg the question, though, of what to calculate and what to store (e.g., we don't have the radiation constant a but do have the Stefan-Boltzmann constant).
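
A concrete example of the trade-off: the radiation constant can be derived from constants we already ship. A minimal sketch using names from astropy.constants (the printed value is approximate):

# Deriving the radiation constant a = 4 * sigma_sb / c from constants
# already in astropy.constants, instead of storing it separately.
from astropy import constants as const

a_rad = 4 * const.sigma_sb / const.c   # radiation (density) constant
print(a_rad.si)                        # ~7.5657e-16 in SI units (i.e. J m^-3 K^-4)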

@pllim
Member

pllim commented Jun 11, 2015

Not a bad idea to store the constants in a text file, which is easier to read (and maintain) than having to dig into the source code. Overhead of initially loading it on import is negligible.

As for calculated vs. hardcoded: if a constant is calculated only once at the start of the session, the overhead is negligible too. But the calculated result might not match the hardcoded value exactly in the last decimal places, and that difference will propagate through the rest of the calculations. Then again, the calculated value should be more accurate, right?
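
For instance, a sketch of the kind of comparison meant here, using the Thomson cross-section from #3839 (names from astropy.constants; the conversion to m**2 is only to force the unit for display):

# Comparing the stored Thomson cross-section with one calculated from
# other constants: sigma_T = (8*pi/3) * (e^2 / (4*pi*eps0*m_e*c^2))^2
import numpy as np
import astropy.units as u
from astropy import constants as const

r_e = const.e.si**2 / (4 * np.pi * const.eps0 * const.m_e * const.c**2)
sigma_calc = (8 * np.pi / 3) * r_e**2

print(const.sigma_T)            # hardcoded CODATA value
print(sigma_calc.to(u.m**2))    # derived value; may differ in the last digits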

@embray
Member

embray commented Jun 11, 2015

What I wonder is how many of these are simply calculated, versus how many actually come from separate measurement results? That I don't know.

Another wrinkle to just using this list directly is that we would still have to provide hand-picked variable names for each of them (or at least the ones we want to include in Astropy).

@mhvk
Contributor Author

mhvk commented Jun 11, 2015

@embray - thinking a bit more about this, my question about calculability is perhaps a red herring: we can simply provide an interface to CODATA (with some additional constants of our own). But good point about the names. NIST must have some list of symbols for these constants, since they do display them on the web interface, so maybe we can ask for a table that includes those? Fortunately, without the many forms in different units, the list is actually not that long...

@embray
Member

embray commented Jun 11, 2015

Hmm, I couldn't find anything obvious listing the (presumably LaTeX) symbols used for each constant. Maybe I'll look into proposing that to them, since it would be nice. I don't know to what extent these are "standardized", though most of the ones I looked at are fairly standard.

@sYnfo
Contributor

sYnfo commented Jun 28, 2015

I hope you don't mind me coming into this as an outsider, but isn't a Python file already a "text file"? Couldn't most of the benefits of the CODATA list be gained by programmatically converting it to Python, with Constants and such, perhaps slightly changing its formatting to, e.g.:

h = Constant(abbrev='h',
             name='Planck constant',
             value=6.62606957e-34,
             unit='J s',
             uncertainty=0.00000029e-34,
             reference='CODATA 2010',
             system='si')
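
A rough sketch of what such a conversion could look like (hypothetical helper: the fixed-width name column of the NIST allascii table, its column separation, and the hand-maintained name-to-symbol mapping are all assumptions):

# Hypothetical one-off converter: turn the NIST allascii table into Python
# source defining Constant objects as above.  The ~60-character name column
# and double-space separation of the remaining columns are assumptions
# about the file layout; adjust to the actual file.
import re

SYMBOLS = {'Planck constant': 'h', 'electron mass': 'm_e'}  # hand-maintained

def convert(allascii_path, out_path, reference='CODATA 2010'):
    with open(allascii_path) as src, open(out_path, 'w') as out:
        for line in src:
            name = line[:60].strip()
            if name not in SYMBOLS:
                continue
            value, uncertainty, unit = re.split(r'\s{2,}', line[60:].strip())
            value = value.replace(' ', '').replace('...', '')
            uncertainty = '0' if uncertainty == '(exact)' else uncertainty.replace(' ', '')
            out.write("{} = Constant(abbrev={!r}, name={!r}, value={}, unit={!r}, "
                      "uncertainty={}, reference={!r}, system='si')\n".format(
                          SYMBOLS[name], SYMBOLS[name], name, value, unit,
                          uncertainty, reference))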

@mhvk
Contributor Author

mhvk commented Jun 28, 2015

@sYnfo - the proposal is to do the conversion on the fly, so that our source stays as close as possible to the original and substituting a newer CODATA file is trivial. The main question really is how to get symbols associated with the descriptive names used in (at least) the ASCII version of CODATA.
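
One possible shape for the on-the-fly variant, sketched with a module-level __getattr__ (PEP 562, Python 3.7+); parse_allascii stands in for whatever CODATA parser we end up with:

# Lazy, on-demand loading: Constant objects are only built from the shipped
# CODATA text file the first time one of them is looked up.
_registry = None   # symbol -> Constant, filled on first access

def _load():
    global _registry
    if _registry is None:
        # parse_allascii is a placeholder for the actual CODATA parser
        _registry = {c.abbrev: c for c in parse_allascii('allascii.txt')}
    return _registry

def __getattr__(symbol):   # module-level __getattr__ (PEP 562)
    try:
        return _load()[symbol]
    except KeyError:
        raise AttributeError(symbol)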

@sYnfo
Contributor

sYnfo commented Jun 29, 2015

@mhvk Right, I'm just not sure it's worth the additional complexity. If new versions were released fairly frequently, then having it in version control and doing the conversion on the fly would be doubly nice, but from what I can see there was a five-year gap between the last two versions.

As for associating symbols with descriptive names, my guess would be that there's no way around storing the mapping in astropy, perhaps in the form of

CODATA_assoc = {"electron mass": "m_e",
                ...}

Unless NIST publishes a file with the relations, but I couldn't find anything immediately.

@sYnfo
Contributor

sYnfo commented Jun 29, 2015

I suppose the way it is done in SciPy could be an inspiration. [0] Looking at this, though, I have to wonder whether there is a need for a separate CODATA constants package that one could depend on, instead of handling this issue in (at least) two different places.

[0] https://github.com/scipy/scipy/blob/master/scipy/constants/codata.py
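
For reference, the interface SciPy exposes looks roughly like this (these are actual scipy.constants names):

# SciPy keys its CODATA table by the descriptive name and provides small
# accessor functions on top of the physical_constants dict.
from scipy.constants import physical_constants, value, unit, precision

val, unt, unc = physical_constants['electron mass']   # (value, unit, uncertainty)
print(value('electron mass'), unit('electron mass'), precision('electron mass'))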

@mhvk
Contributor Author

mhvk commented Jun 29, 2015

@sYnfo - thanks for pointing that out!

@embray
Member

embray commented Jun 29, 2015

I didn't even realize SciPy had a constants package. This takes me back to the argument I've made a few times lately that we should just bite the bullet and make SciPy a hard dependency of Astropy. The previous installation concerns are obviated by the existence of scientific Python distributions, which any non-expert user should be using anyway (not that expert users shouldn't use them too; I just mean non-experts especially).

@embray
Member

embray commented Jun 29, 2015

(I'm not totally crazy about how the SciPy module keeps the entire text in the Python module, meaning it hangs around in memory forever, unnecessarily. Yes, I know memory is cheap now and we're only talking a couple of kB at most, but seeing things like that still makes me uncomfortable when it could just as well live in a text file that's read once and forgotten.)

@mhvk
Contributor Author

mhvk commented Jun 29, 2015

Agreed that the approach is not ideal, but especially if SciPy does become a dependency, it would be worth our while to just send a PR their way. I'd also like to move units into SciPy eventually...

@sYnfo
Contributor

sYnfo commented Jun 29, 2015

@embray I guess someone could make the counterargument that reading it from a file will slow down startup ever so slightly. You could just del the reference to the string instead, but this all feels like premature optimization; I imagine there are better things to optimize in SciPy.
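
For what it's worth, the read-once-and-forget variant is not much code either; a sketch (the package and file names are made up, and parse_codata_text is the hypothetical parser over the raw table text):

# Read the CODATA table from a packaged data file once at import time and
# let the raw text go out of scope, instead of keeping it in the module.
import pkgutil

def _load_constants():
    text = pkgutil.get_data('astropy.constants', 'data/allascii.txt').decode()
    return {c.abbrev: c for c in parse_codata_text(text)}

CONSTANTS = _load_constants()   # the raw text is dropped once this returns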

@embray
Member

embray commented Jun 29, 2015

Well, yes, I was just picking nits.

@mhvk
Contributor Author

mhvk commented Oct 13, 2015

@embray wrote (https://github.com/astropy/astropy/pull/4229/files#r41881127) in #4229:

One other thing we might want to consider, not for this PR, but in general, is maybe allowing constants to be defined in different units--override the .to method to give hard-coded values for the constant in some given units rather than performing the (sometimes inaccurate) conversion.

For example the main reason for the discrepancies above is in our conversion of h (which we have defined in J s) to eV s, which ends up off a bit from the value given in CODATA2010 (albeit within the uncertainty, which wouldn't matter except when we then go and use it to derive other constants).
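
A sketch of what that override could look like (hypothetical subclass; constructing a second 'h' may trigger a harmless redefinition warning from astropy's constant registry):

# A Constant that carries hard-coded values in selected alternative units
# and only falls back to a numerical conversion for anything else.
import astropy.units as u
from astropy.constants import Constant

class MultiUnitConstant(Constant):
    hardcoded = {}   # Unit -> exact value, set per instance after construction

    def to(self, unit, equivalencies=[]):
        unit = u.Unit(unit)
        for alt_unit, alt_value in self.hardcoded.items():
            if unit == alt_unit:
                return u.Quantity(alt_value, unit)
        return super().to(unit, equivalencies=equivalencies)

h = MultiUnitConstant('h', 'Planck constant', 6.62606957e-34, 'J s',
                      0.00000029e-34, 'CODATA 2010', system='si')
h.hardcoded = {u.eV * u.s: 4.135667516e-15}   # CODATA 2010 value of h in eV s
print(h.to(u.eV * u.s))                       # exact, not converted from J s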

@embray
Member

embray commented Oct 13, 2015

I think for certain known constants we should definitely be doing that, especially since we're not doing any sort of uncertainty propagation when we convert our constants from one unit to another.

@mhvk
Contributor Author

mhvk commented Oct 13, 2015

I very much like how this would let us parse the whole CODATA list without ending up with a very large set of unhandily named constants. (Note that CODATA includes covariances, so in principle we could even include those -- especially if we get to use Variable instances -- #3715.)

@bsipocz
Member

bsipocz commented Jun 16, 2017

@mhvk - Would the new approach of versioning constants from several sources be a good enough solution for this issue?

@bsipocz bsipocz added the Close? label Jun 16, 2017
@mhvk
Contributor Author

mhvk commented Jun 16, 2017

I think the link to SciPy and the somewhat arbitrary nature of the constants we include are still relevant. It is now low priority, though, and not low effort, so I will relabel accordingly.

@pllim
Member

pllim commented May 11, 2020

Is this still an issue 3 years after the last comment?

@mhvk
Contributor Author

mhvk commented May 11, 2020

I guess there is no pressing need, though I think the issue still stands... Ideally, one would calculate some constants from others.
