Proof of concept: Initialise all models as ThreeFlavor #335
Conversation
This is a proof of concept with minimal changes. With this, `pytest -m 'not snowglobes'` should succeed.
Odrzywolek_2010 didn’t require any changes ☺️
Thank you for preparing this example.
It shows that transitioning all the model loaders to ThreeFlavor is indeed not that hard.
However, I wouldn't call this compact:
Already for CCSN models you have to make changes in:
- PinchedModel
- GarchingArchiveModel
- Kuroda_2020
- Fornax_2019 (several places)
- Fornax_2021
- Fornax_2022
Of course it's mostly repeated code with slight variations, and of course it's there because the Fornax_2019 code itself is quite large. But with these changes it becomes even larger.
These changes are harder to test:
Any code is error-prone and needs to be tested.
To validate these changes you need to test each of these models: that they still produce the same result as before when you convert the results back to TwoFlavor (and for this you'll have to write another conversion by hand, since you ditch the ThreeFlavor→TwoFlavor matrices).
With conversion matrices you can make proper unit testing of the matrices and their multiplication with objects - and be sure that all the conversions are correct.
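To make the point about testability concrete, here is a minimal sketch of what unit-testing such conversion matrices could look like. The matrix names, flavor ordering, and array layout below are illustrative assumptions for this comment, not SNEWPY's actual API:

```python
# Hypothetical sketch of TwoFlavor <-> ThreeFlavor conversion matrices.
# Names and flavor ordering are assumptions, not SNEWPY's real code.
import numpy as np

# Rows: (NU_E, NU_MU, NU_TAU, NU_E_BAR, NU_MU_BAR, NU_TAU_BAR)
# Cols: (NU_E, NU_X,  NU_E_BAR, NU_X_BAR)
M_2to3 = np.array([
    [1, 0, 0, 0],   # NU_E       <- NU_E
    [0, 1, 0, 0],   # NU_MU      <- NU_X
    [0, 1, 0, 0],   # NU_TAU     <- NU_X
    [0, 0, 1, 0],   # NU_E_BAR   <- NU_E_BAR
    [0, 0, 0, 1],   # NU_MU_BAR  <- NU_X_BAR
    [0, 0, 0, 1],   # NU_TAU_BAR <- NU_X_BAR
])

# Collapsing back: NU_X is the average of the mu/tau components.
M_3to2 = np.array([
    [1, 0,   0,   0, 0,   0  ],
    [0, 0.5, 0.5, 0, 0,   0  ],
    [0, 0,   0,   1, 0,   0  ],
    [0, 0,   0,   0, 0.5, 0.5],
])

def to_three_flavor(flux_2f):
    """Expand a TwoFlavor flux array of shape (4, n) to ThreeFlavor (6, n)."""
    return M_2to3 @ flux_2f

# Unit tests: the round trip must be the identity on any TwoFlavor flux.
assert np.allclose(M_3to2 @ M_2to3, np.eye(4))
flux = np.random.default_rng(0).random((4, 10))
assert np.allclose(M_3to2 @ to_three_flavor(flux), flux)
```

The advantage is that these two `assert`s test every model at once: once a loader's output is expressed through the matrices, correctness of the conversion is a property of the matrices alone.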
In summary
I agree that this is doable and not too hard.
But what is the benefit of this approach? What do we gain?
Does it make the code cleaner, easier to maintain or to test?
Does it make any difference to the final user, or implementing FlavorTransformations, or further development?
Jost, some models distinguish NU_X and NU_X_BAR.
@jpkneller: Good catch, thanks! Fixed now.
Yes, though that is a fundamental fact of SNEWPY’s current implementation: It is not possible to make such a change solely in the
The reason for the repetition in my Fornax_2019 changes is that the Fornax_2019 code itself was already repetitive there, and I wanted to keep the changes minimal and obvious for this proof of concept.

Regarding testing: In principle, I think this combination of tests would normally be sufficient; in practice, while the integration tests don’t run, some replacement would be highly desirable. I’ve therefore spent the evening extending the existing initialisation tests to
The result: In all cases the numerical values are identical. ✅ The test code is a bit hacky, unfortunately; but if you want, I can write it up and share it on Tuesday. (I’m on leave Fri/Mon.)
In my opinion, yes.
This looks good. We'll just have to remember to remove the xfails.
For demonstration purposes, following the discussion in #309.
The changes are fairly small and well contained.
For most models (presn and ccsn), we already had code to map the columns in the input files (usually NU_E, NU_E_BAR and NU_X, i.e. a “OnePointFiveFlavor” scheme) to the TwoFlavor scheme; I’ve simply updated these existing mappings. With this, all tests run by `pytest -m 'not snowglobes'` succeed.

Of course, the `FlavorTransformation`s are currently hardcoded to the TwoFlavor scheme, so any code that uses these transformations (mainly via `SupernovaModel.get_transformed_flux`) cannot currently switch to the ThreeFlavor scheme. This is already changing in #308; so for the purposes of this proof of concept, I’ve marked those as xfails.