Create a NeutrinoFlux class as a container for the neutrino signal produced by a supernova model #214

Closed · Sheshuk opened this issue Sep 29, 2022 · 8 comments · Fixed by #236
Labels: enhancement (New feature or request)

@Sheshuk (Contributor) commented Sep 29, 2022

This is a discussion of a feature to implement if we agree on #213.

What I want

A NeutrinoFlux class, returned by the SupernovaModel.get_initial_flux method.

An object which contains all the information about the neutrino signal from a supernova model.
This information should be sufficient to

  • Integrate the flux over time and/or energy - so it should carry the T and E points where it was calculated
  • Apply an arbitrary neutrino mixing - so it needs to contain the flux for all neutrino flavors

What I suggest

  1. I suggest using the astropy.Quantity classes to keep track of all the dimensions.
  2. I suggest keeping the flux for all flavors in a single array with dimensions (N_flavors, N_times, N_energies) - this makes it easy to apply any transformation matrix using plain matrix multiplication.

So, something like this (pseudocode):

class NeutrinoFlux:
    def __init__(self, data: Quantity[1/s/MeV], axes: Dict[str, Quantity]):
        """ axes could be a dict {
            'Flavors': list[Flavor],
            'E': Quantity[MeV],
            'T': Quantity[s] }
        """
        self.data = data
        self.axes = axes

    def apply(self, flavor_xform: FlavorTransformation) -> NeutrinoFlux:
        # apply the given mixing and return the transformed flux
        flux_transformed = flavor_xform @ self.data  # we can use matrix multiplication
        return NeutrinoFlux(flux_transformed, self.axes)
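
For concreteness, here is a minimal runnable sketch of that matrix multiplication acting on a (N_flavors, N_times, N_energies) array (all names and numbers below are illustrative, not a proposed API):

import numpy as np
import astropy.units as u

# Toy 3D flux: 4 flavors, 100 time points, 501 energy points.
data = np.random.rand(4, 100, 501) / (u.s * u.MeV)

# A flavor transformation acts as a (N_flavors, N_flavors) mixing matrix;
# this toy matrix mixes nu_e <-> nu_x with survival probability p.
p = 0.3
xform = np.array([[p,     0,     1 - p, 0    ],
                  [0,     p,     0,     1 - p],
                  [1 - p, 0,     p,     0    ],
                  [0,     1 - p, 0,     p    ]])

# Contract over the flavor axis only:
# transformed[f, t, e] = sum_g xform[f, g] * data[g, t, e]
transformed = np.einsum('fg,gte->fte', xform, data.value) * data.unit
assert transformed.shape == data.shape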

Benefits

  1. Mixing can be applied using only the FlavorTransformation and NeutrinoFlux classes (not coupled to SupernovaModel) - which is logical, because mixing acts on the neutrino signal rather than on the supernova itself.
  2. Pass the NeutrinoFlux to the SNOwGLoBES/SimpleRate calculation directly.

So instead of (or as alternative to) calling (pseudocode)

files = generate_time_series(model, transformation, distance, E, t, ...)
results = simulate(files, detector, ...)

one could do

flux = model.get_initial_flux(distance, E, t)
flux = flux.apply(transformation) 
results = rate_calculator.run(flux, detector, ...)

and have access to all the intermediate results in memory, without writing and reading additional files.

@JostMigenda (Member)

Initial thoughts:

I’m sceptical about the axes: Dict[str, Quantity] argument. If users try to employ any axes other than flavor, time and energy (in this exact order), matrix multiplication with a FlavorTransformation will return nonsense values. Would __init__(self, data: Quantity[1/s/MeV], times: Quantity[s], energies: Quantity[MeV]) work better? (The Flavor axis will always be the same 4/6 flavours, so we don’t need that as an explicit argument.)
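
In code, that alternative signature might look like this (a sketch, not a final API):

import astropy.units as u
from astropy.units import Quantity

class NeutrinoFlux:
    def __init__(self, data: Quantity, times: Quantity, energies: Quantity):
        # data has shape (N_flavors, N_times, N_energies), in 1/(s MeV);
        # the flavor axis is implied, in the fixed order of snewpy's Flavor enum.
        self.data = data.to(1 / (u.s * u.MeV))
        self.times = times.to(u.s)
        self.energies = energies.to(u.MeV)
        assert self.data.shape[1:] == (len(self.times), len(self.energies))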

Regarding Benefit 2: That sounds like you’re trying to build your own implementation of (parts of) snewpy.snowglobes?
I agree it’s desirable in some use cases to run everything in memory without writing to & reading from fluence files in between. After the GLoBES-ectomy in #193 we don’t need those files any more (though in some use cases it might still be useful to have them as a cache); so maybe it’s better to modify snewpy.snowglobes to keep results in memory by default, rather than modifying snewpy (with this change here, and the one suggested in #216) to make it easier for you to implement something like that manually?

@Sheshuk (Contributor, Author) commented Oct 3, 2022

Regarding Benefit 2: That sounds like you’re trying to build your own implementation of (parts of) snewpy.snowglobes?

Yes. IMO the current snewpy.snowglobes interface has many downsides:

  1. Misleading: why are generate_time_series/fluence part of the snewpy.snowglobes module? What if I want the model's fluence without feeding it to SNOwGLoBES later?
  2. Not expressive. Just try reading the code in the examples I provide in Benefit 2 with the eyes of a user not familiar with the implementation details. What does it do?
  3. The user has no control over the intermediate steps. What if I want to additionally transform the fluxes before feeding them to SNOwGLoBES? Say, double the flux. Should I read the files, multiply by two, and then write them again to a new set of files? Also, what if I want to plot the neutrino flux? Should I also read the files, which have a format defined inside generate_*?
  4. Not complete: what if I want to pass additional parameters to SimpleRate.run - like the detector material? If your detector name is not recognized in guess_material you just can't use snewpy.snowglobes.simulate.
  5. Not OOP. That's not a fault by itself, but the interface would be much clearer if we moved the methods to the corresponding objects instead of having a bunch of global functions.

But I'm not proposing to change this interface here. I propose to keep it for compatibility, move the existing functionality into objects, and make the inner implementation of the generate_* and simulate functions use those objects.

@Sheshuk (Contributor, Author) commented Oct 3, 2022

so maybe it’s better to modify snewpy.snowglobes to keep results in memory by default

Wouldn't that change the interface (currently generate_* returns a filename, and simulate receives it as an argument)?

to make it easier for you to implement something like that manually

By "implementing manually" you mean in my user script / Jupyter notebook, without touching the snewpy code?

@Sheshuk (Contributor, Author) commented Oct 7, 2022

@JostMigenda, it seems to me that you consider my use cases exotic, and my suggestions somehow disruptive to the "normal" usage of snewpy.
But let me illustrate my motivation with two use cases.

1. Using a new detector configuration

  • I want to calculate the event rates for the Baikal experiment.
  • It is not present in current SNOwGLoBES configuration, so I just add a line in $SNOWGLOBES/detector_configurations.dat:
baikal_1clu        600.     0.1111

(similar to icecube, but with a different mass).

  • Problem❗: I can't run snowglobes.simulate, because it doesn't have a material argument and my detector isn't recognized by the guess_material function.

Possible workarounds

  1. Name the detector wc_baikal_1clu so it is recognized by guess_material. But IMO we shouldn't impose our naming scheme on users - that's ugly and unnecessary.
  2. Add a material parameter to snowglobes.simulate - but that would change the public interface! Unacceptable.
  3. Do it manually: unpack the tarfile and run SimpleRate.run on each of the input files (see the sketch below). But in this case I would really prefer not to make the user work with the archive file just because someone might want to save the fluence as a cache. Caching should be an option, not the default.
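
A sketch of that manual path (assuming SimpleRate.run accepts explicit detector and material arguments, as mentioned in point 4 above; file names are hypothetical):

import tarfile
from pathlib import Path
from snewpy.snowglobes_interface import SimpleRate

# Unpack the fluence archive produced by generate_time_series...
with tarfile.open('fluence.tar.bz2') as tar:
    tar.extractall('fluence_files')

# ...and run the rate calculation on each flux file, passing the
# material explicitly instead of relying on guess_material.
rate = SimpleRate()
for flux_file in sorted(Path('fluence_files').glob('*.dat')):
    tables = rate.run(flux_file, detector='baikal_1clu', material='water')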

2. Using presupernova models

  • I want to calculate the response to a presupernova model in the KamLAND detector.
  • I pass the model to generate_* just to find out that it only searches for ccsn models (Problem❗) - but this will be solved by #209 (Improve generate_* functions in snewpy.snowglobes).
  • If this is fixed, I find another problem❗: the energy binning in the generate_* functions is hardcoded:
    energy = np.linspace(0, 100, 501) * MeV # 1MeV
    and this might be too coarse for a preSN model, whose average energy is ~2 MeV. This is even more significant because generate_* doesn't do an actual energy integration, but just samples the flux at the bin centers.

Possible workarounds

  1. Add an energy_bins argument to generate_* - but that changes the existing interface! Unacceptable.
  2. Do everything manually (as you suggest). If you think that's the proper way, it should be written on the first page of the SNEWPY manual.

What I suggest

is to make classes that allow easy tweaking of the usual processing chain, so that users can decide on the energy/time binning and work with those values the way they want - integrate, slice, scale the flux, and compute the event rates flexibly. Currently snewpy is not capable of this.
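
For example, a preSN analysis could then choose its own binning (hypothetical usage, reusing the names from the pseudocode in the opening post):

import numpy as np
from astropy import units as u

# Fine energy binning suited to a preSN model with average energy ~2 MeV.
energy = np.linspace(0, 20, 1001) * u.MeV
time = np.linspace(-10, 0, 100) * u.hour  # e.g. the last hours before collapse

flux = model.get_initial_flux(distance=10 * u.kpc, E=energy, t=time)
flux = flux.apply(transformation)  # arbitrary mixing
flux.data *= 2                     # e.g. scale the flux in place
results = rate_calculator.run(flux, detector='kamland')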

@JostMigenda (Member)

Thanks for describing these high-level use cases you’re interested in; this helps make the problem much clearer!

Regarding 1. (new detector configuration):

If the new detector is at an early stage, where it can be described by “similar to [existing configuration], but with different mass”, the workaround I would recommend is to use that existing configuration and just multiply the resulting event rates with a scale factor. (That’s e.g. how we got Hyper-K numbers for the SNEWS 2.0 white paper; or what I’m currently using to write a SN section for another detector white paper.)
If the new detector is at a later stage, where it makes sense to generate custom smearing/efficiency files, it would be best to add all that to SNOwGLoBES so we officially support that detector in the next snewpy release.

If those workarounds aren’t enough, we could try to extend guess_material to guess the material based on the weighting factor in $SNOWGLOBES/detector_configurations.dat—e.g. 0.1111 always corresponds to water, 0.025 always corresponds to argon, etc. (Arguably, it’s a bit awkward that SNOwGLoBES requires the material as an extra argument, rather than including it in that file; maybe we could also make a PR upstream to make that easier and solve this problem for us?)
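
Such a fallback could look like this (a sketch; the weight-to-material map contains only the two examples above, and the function name is hypothetical):

def guess_material_from_weight(weight, tol=1e-4):
    """Guess the detector material from the weighting factor found in
    $SNOWGLOBES/detector_configurations.dat."""
    known = {0.1111: 'water',
             0.025: 'argon'}  # extend with the other known factors
    for w, material in known.items():
        if abs(weight - w) < tol:
            return material
    raise ValueError(f'Cannot guess material for weighting factor {weight}')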


Regarding 2. (preSN models):

I think ideally, the difference between preSN and ccSN models should be almost transparent to users of snewpy. As far as reasonably possible, they should be able to use the same code to plot (or calculate event rates for or do any other analysis with) any model.

Right now, I agree that the existing snewpy.snowglobes implementation is based on some assumptions that make that impossible. We need to make some changes—and that will most likely have to include some backwards-incompatible changes that break existing code. I’m not saying we can’t do that; but (a) we shouldn’t do it lightly, (b) we should give users deprecation warnings ahead of time where possible and (c) we should save actual breaking changes for v2.0, following SemVer.


Under the hood, a NeutrinoFlux class might make a lot of sense. However, I am very hesitant to make this available to users and give them broad access to tweak internals, rather than require them to go through a high-level interface like snewpy.snowglobes. Because if we make something available to users as our public API, there is an implicit promise that it will remain stable; so if our public API is fairly large, we either hurt ourselves (because we limit our ability to change it in the future) or our users (if we make too many breaking changes). Now, this is not an exact science and we can argue about what frequency of breaking changes is acceptable, what the trade-off against new features is, etc. Maybe I am being too careful here.

I see a couple of possible options now:

  1. try to update snewpy.snowglobes while minimizing breaking changes (whether or not something like this NeutrinoFlux class is used under the hood)—this is the conservative option, but I'm not sure whether it fully works
  2. update snewpy.snowglobes; base it on this NeutrinoFlux class and expose (parts of) that to the user—this potentially has the disadvantages noted above, plus it is confusing to have two different ways for users to do the same thing
  3. deprecate snewpy.snowglobes completely and recommend users go to the new implementation—still has the disadvantages noted above, but is potentially clearer than 2.

None of these options feels right; I’m torn. Let me think about this some more …

@Sheshuk (Contributor, Author) commented Oct 9, 2022

Thank you for your answer.

Arguably, it’s a bit awkward that SNOwGLoBES requires the material as an extra argument, rather than including it in that file; maybe we could also make a PR upstream to make that easier and solve this problem for us

That looks like the most natural solution to me, but it might happen only in the distant future. The other options look quite bad for the user experience.


(a) we shouldn’t do it lightly, (b) we should give users deprecation warnings ahead of time where possible and (c) we should save actual breaking changes for v2.0, following SemVer.

While I totally agree with your points (a) and (b), the link in (c) says "increment the MAJOR version when you make incompatible API changes", not "make incompatible API changes when you increment the MAJOR version" 😄

I mean that v2.0 is not a New Year: when it comes, it should be defined only by the changes we put into snewpy - not vice versa. Please don't use this as an argument about which changes should or should not be made.


And finally, I want to stress again: I didn't propose any changes to the public interface at all.
I only explained why I can't work within this public interface (as you suggest I do).

Let's agree on this point: our public interface doesn't work for those use cases, and we are not ready to change it (yet?).

The solution I propose is:

  • Make a lower-level interface covering these use cases.
  • Make the public interface use these lower-level classes, without modifying the public signatures.
  • Document this lower-level interface as "internal" and point out that it might change, that we give no compatibility guarantee, etc. We can avoid any "implicit promises" to users by being explicit.
  • Use this lower-level interface for some time - to check and refine it, and to collect feedback from our users/colleagues.
  • Finally, usage of this lower-level interface will show us which changes are needed in the public interface. When (after thorough thinking and discussion) we're ready to change it, the cleaner object-oriented implementation will give us the needed flexibility.

@JostMigenda (Member)

Sorry for the late response!
So, yes, for usage “under the hood”, I have no issue with this NeutrinoFlux class; I can see how it will probably be useful for the planned FlavorTransformation updates.

I’ll just repeat one point regarding the implementation, so it doesn’t get lost:

I’m sceptical about the axes: Dict[str, Quantity] argument. If users try to employ any axes other than flavor, time and energy (in this exact order), matrix multiplication with a FlavorTransformation will return nonsense values. Would __init__(self, data: Quantity[1/s/MeV], times: Quantity[s], energies: Quantity[MeV]) work better? (The Flavor axis will always be the same 4/6 flavours, so we don’t need that as an explicit argument.)

@Sheshuk (Contributor, Author) commented May 11, 2023

@JostMigenda and I discussed the implementation and agreed on the following points:

1. Make a NeutrinoFlux class

Class members

  • data: 3D array of astropy.Quantity [1/MeV/s]
  • flavor: 1D array of Flavor
  • time: 1D array of astropy.Quantity [s]
  • energy: 1D array of astropy.Quantity [MeV]
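
A minimal sketch of that container (attribute names as agreed; the validation details are illustrative):

import numpy as np
import astropy.units as u

class NeutrinoFlux:
    def __init__(self, data, flavor, time, energy):
        # data: 3D Quantity of shape (len(flavor), len(time), len(energy)),
        # in 1/(MeV s) for a flux (subclasses may carry different units).
        self.data = data
        self.flavor = list(flavor)
        self.time = time.to(u.s)
        self.energy = energy.to(u.MeV)
        assert self.data.shape == (len(self.flavor),
                                   len(self.time), len(self.energy))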

2. NeutrinoFlux will be produced by the existing SupernovaModel.get_flux method.

3. NeutrinoFlux can be passed to the SimpleRate.run method to calculate the event rates.

4. NeutrinoFlux can be sliced

to get values for individual flavors, energy/time ranges etc.

When sliced, it produces another NeutrinoFlux with the given subset of the data. But data is always kept as a 3D array, even if some dimension (like flavor) contains a single value.
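
Slicing could be implemented along these lines (continuing the sketch above; integer indices are widened to length-1 slices so the result stays 3D):

    # Inside the NeutrinoFlux sketch above:
    def __getitem__(self, key):
        key = np.index_exp[key]                      # normalize to a tuple
        key = key + (slice(None),) * (3 - len(key))  # pad the missing axes
        # Widen integer indices to slices so no axis is dropped
        # (k + 1 or None handles k == -1).
        f, t, e = (slice(k, k + 1 or None) if isinstance(k, int) else k
                   for k in key)
        return NeutrinoFlux(self.data[f, t, e],
                            self.flavor[f], self.time[t], self.energy[e])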

5. NeutrinoFlux can be integrated

to produce instances of the Rate, Fluence, or Events classes:

flowchart TB
NeutrinoFlux -- integral_time --> NeutrinoFluence
NeutrinoFlux -- integral_energy --> NeutrinoRate
NeutrinoRate -- integral_time --> NeutrinoEvents
NeutrinoFluence -- integral_energy --> NeutrinoEvents

The implementation of the Rate, Fluence, and Events classes can be similar to Flux; initially they can just be subclasses of the latter.

The different class names express the different physics meanings of these objects, so that users can treat them accordingly.
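
One of the integrals, sketched with trapezoidal integration over the stored time grid (np.trapz became np.trapezoid in NumPy 2.0; the singleton time axis is kept to honor the 3D invariant):

    # Inside the NeutrinoFlux sketch: integrate out the time axis (axis=1),
    # turning 1/(MeV s) into a fluence in 1/MeV.
    def integral_time(self):
        fluence = np.trapz(self.data, x=self.time, axis=1)
        # Keep the data 3D: reinsert time as a length-1 axis.
        return NeutrinoFluence(fluence[:, np.newaxis, :],
                               self.flavor, self.time[[0]], self.energy)

class NeutrinoFluence(NeutrinoFlux):
    """Same container, different physics meaning (a sketch)."""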

6. NeutrinoFlux (and subclasses) should have methods to save and load from file.

This can be useful when one needs to store computed values, e.g. to cache calculations.
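
For instance, via numpy's .npz format (a sketch; the actual file format is an implementation choice, and the flavor round-trip assumes snewpy.neutrino.Flavor):

from snewpy.neutrino import Flavor  # needed for the round-trip below

    # Inside the NeutrinoFlux sketch: a simple .npz round-trip (flux units assumed).
    def save(self, fname):
        np.savez_compressed(fname,
                            data=self.data.to_value(1 / (u.MeV * u.s)),
                            flavor=[f.name for f in self.flavor],
                            time=self.time.to_value(u.s),
                            energy=self.energy.to_value(u.MeV))

    @classmethod
    def load(cls, fname):
        f = np.load(fname)
        return cls(f['data'] / (u.MeV * u.s),
                   [Flavor[name] for name in f['flavor']],
                   f['time'] * u.s,
                   f['energy'] * u.MeV)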

7. These classes might become a part of the public interface in the future

as an alternative to the global functions generate_*, simulate, collate. This approach can be flexible and cover some use cases which the current interface can't.

Of course, only after we test and refine it.
(Jost, please correct me if I missed or misunderstood anything)

@Sheshuk self-assigned this on May 17, 2023
@Sheshuk added the enhancement (New feature or request) label and removed the suggestion (An idea that needs to be discussed/approved before starting implementation) label on May 20, 2023