
Instrumental resolution broadening #245

Merged
merged 57 commits into master from instrument-broadening on Aug 25, 2023

Conversation

ajjackson
Collaborator

This implements energy-dependent resolution broadening with 1-D polynomial functions.

The idea is that while Euphonic does not contain detailed instrument models, users can obtain a polynomial fit to energy-dependent broadening (derived empirically or calculated using something like PyChop or McStas) and provide the polynomial coefficients instead of a single broadening "width". We could provide information in tutorials to help with this without overcomplicating the Euphonic library.
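Where such a fit is not already to hand, one way to obtain the coefficients would be to fit tabulated resolution values with NumPy's polynomial tools. A rough sketch (the energies and widths below are made-up values, not real instrument data):

import numpy as np
from numpy.polynomial import Polynomial

# Hypothetical tabulated resolution (energy transfer vs FWHM), e.g. from
# PyChop, McStas or an empirical fit to measurements
energy = np.array([10., 50., 100., 200., 400.])  # meV
fwhm = np.array([1.2, 1.5, 2.1, 3.8, 8.5])       # meV

# Fit a low-order polynomial; its coefficients (ascending powers of E)
# are what would be passed instead of a single broadening width
resolution_poly = Polynomial.fit(energy, fwhm, deg=2)
print(resolution_poly.convert().coef)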

It re-uses the interpolated broadening machinery used for adaptive DOS broadening. As this resolution broadening is energy-dependent, it would normally be performed after other calculation and binning steps.

There are some obvious places for further improvement and consideration:

  • Access from CLI (as above, I like the idea that we simply use the existing width parameter and allow it to be passed multiple values)
  • Optimisation for use with multiple data series: currently the approximate broadening only works on 1D data. We can make a simple loop to fill out Spectrum1DCollection or Spectrum2D (see the sketch after this list), but this would involve a bit of redundant recalculation of kernels. There is also some redundant histogram-binning overhead because the inner function doesn't know that the data is already binned.
  • Lorentzian functions?
  • Exact broadening as a reference for interpolated method?
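As a sketch of the simple loop mentioned above (broaden_1d is a hypothetical stand-in for the 1D energy-dependent broadening routine, not an existing Euphonic function):

from euphonic import Spectrum1DCollection

def broaden_collection(collection: Spectrum1DCollection,
                       broaden_1d) -> Spectrum1DCollection:
    # Broaden each 1D spectrum independently and rebuild the collection;
    # the kernels are recomputed for every spectrum, which is the
    # redundant work mentioned above
    return Spectrum1DCollection.from_spectra(
        [broaden_1d(spectrum) for spectrum in collection])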

But for now we should discuss the API. How much of this stuff should live in euphonic.broadening on Spectrum classes or elsewhere?

I also note a convention difference between euphonic.broadening which works with "width" in sigma (standard deviation) and the outer functions which treat width as FWHM. It's quite possible users will have a mixture of both and the conversion is simple, but the internals could be more clear and consistent about this.
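For reference, the conversion between the two conventions for a Gaussian is simple (a minimal sketch):

import numpy as np

# For a Gaussian kernel, FWHM = 2 * sqrt(2 * ln 2) * sigma ≈ 2.3548 * sigma
SIGMA_TO_FWHM = 2 * np.sqrt(2 * np.log(2))

def fwhm_to_sigma(fwhm):
    return fwhm / SIGMA_TO_FWHM

def sigma_to_fwhm(sigma):
    return sigma * SIGMA_TO_FWHM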

@github-actions
Contributor

github-actions bot commented Oct 18, 2022

Unit Test Results

16 files ±0    16 suites ±0    33m 2s ⏱️ +10m 8s
986 tests +7    981 ✔️ +7    5 💤 ±0    0 ±0
6 860 runs +49    6 822 ✔️ +49    38 💤 ±0    0 ±0

Results for commit 568856d. ± Comparison against base commit fde1945.

♻️ This comment has been updated with latest results.

@rebeccafair
Member

  • Access from CLI (as above, I like the idea that we simply use the existing width parameter and allow it to be passed multiple values)

I also like this idea. It might be nice if we could reuse Spectrum1D.broaden and allow width to be either a Quantity or a List[Quantity]. But we have to be careful this doesn't become an "everything" argument: are there other types of broadening we might want to add in the future?

Currently the functions take the widths and their units separately, which is nicer than having to input redundant [1*ureg('meV'), 2*ureg('meV')] etc., but it does clash with convention in the rest of Euphonic. I guess we could accept both a plain float (in which case assume units) and Quantity, or maybe just require Quantities for now and worry about this later.

  • Optimisation for use with multiple data series: currently the approximate broadening only works on 1D data. We can make a simple loop to fill out Spectrum1DCollection or Spectrum2D, but this would involve a bit of redundant recalculation of kernels. There is also some redundant histogram-binning overhead because the inner function doesn't know that the data is already binned.
  • Lorentzian functions?

I guess initially we could just implement the Spectrum1DCollection case as a simple loop and do some quick profiling to see if it is an issue?

With Spectrum2D we might need some more alterations to be correct, because really we should have a 2D kernel with appropriate widths in each dimension and convolve with that, rather than doing a 1D convolution along each dimension (I think?). The Lorentzian broadening in Euphonic does 2 x 1D convolutions instead of a 2D convolution, so I think it's actually not completely correct.

Ignoring the 2D issue for now, we could probably add 1D Lorentzian kernels in _width_interpolated_broadening without too much trouble? But for now we could just raise an error if a user tries to use polynomial widths with a Lorentzian shape.

But for now we should discuss the API. How much of this stuff should live in euphonic.broadening on Spectrum classes or elsewhere?

I think euphonic.broadening seems like the right place for broaden_with_polynomial. We could call that functionality from Spectrum1D.broaden if we accept a list in addition to a scalar width, provided that we don't think this argument is going to blow up and try to do too many things in the future.

I also note a convention difference between euphonic.broadening which works with "width" in sigma (standard deviation) and the outer functions which treat width as FWHM. It's quite possible users will have a mixture of both and the conversion is simple, but the internals could be more clear and consistent about this.

Good point! I like the use of width_convention, if it's something that's too much for this PR we could add it to the broaden functions later?

@mducle
Member

mducle commented Oct 19, 2022

  • Access from CLI (as above, I like the idea that we simply use the existing width parameter and allow it to be passed multiple values)

I also like this idea. It might be nice if we could reuse Spectrum1D.broaden and allow width to be either a Quantity or a List[Quantity]. But we have to be careful this doesn't become an "everything" argument: are there other types of broadening we might want to add in the future?

Since in principle for neutron spectrometer resolutions the broadening width is a function of Q and ω, the most general width argument we can specify would be a function or lambda which takes Qh, Qk, Ql, and E as input and returns the width at that point. Horace does this but I've never used this functionality, so I'm not sure if we should design for it - maybe the best thing is to restrict width here to be a scalar or list and to have another argument (e.g. width_function) for something more general if there is a demand for it.

One thing about the polynomial interpolation - should we consider how to specify the range/limits in which the interpolation is valid. The polynomial would only be an approximation to the resolution function within some range; but if you have e.g. modes which are very very high in energy, it might be outside the range of validity of the polynomial - can we assume that users would check this themselves? Or should we have another parameter, e.g. width_poly_limits which would raise an error or warning if there are modes outside this...

@ajjackson
Collaborator Author

One thing about the polynomial interpolation - should we consider how to specify the range/limits in which the interpolation is valid.

Internally we could pass around that new Polynomial object instead of the coefficient list. This has optional domain and window attributes which could correspond to those input range limits.
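For illustration, a numpy Polynomial carries a domain attribute that could be checked against the requested energies (coefficients and limits here are made-up, and the warning logic is only a sketch):

import numpy as np
from numpy.polynomial import Polynomial

# With domain == window the mapping is the identity, so here the domain
# acts purely as metadata recording where the fit is trusted
poly = Polynomial([2.5, 5e-3, 1e-7], domain=[0., 4000.], window=[0., 4000.])

energies = np.array([100., 1000., 5000.])
lo, hi = poly.domain
if np.any((energies < lo) | (energies > hi)):
    print('Warning: some energies lie outside the fitted range of the polynomial')
print(poly(energies))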

Already I find it necessary to apply a floor to output values that get too small; otherwise we have sub-bin peak widths that create increasingly tall peaks without a perceptible difference in width.

Arguably it's on the user to choose an appropriate energy range, anyway...

@ajjackson
Collaborator Author

Since in principle for neutron spectrometer resolutions the broadening width is a function of Q and ω, the most general width argument we can specify would be a function or lambda which takes Qh, Qk, Ql, and E as input and returns the width at that point.

Even more general would be a 4x4 covariance matrix, but we aren't ready for that yet 😅 We should also consider whether that sort of thing would live in Euphonic or another code... But anyway here we are broadening a pre-calculated spectrum, and Euphonic spectra only go up to 2D. We don't have that 4-D input coordinate at the point this is applied; such a routine would need to be part of the StructureFactor methods to calculate 1d averages and S(|q|,ω) maps.

We could call that functionality from Spectrum1D.broaden if we accept a list in addition to a scalar width, provided that we don't think this argument is going to blow up and try to do too many things in the future

Right, I do worry that adding more and more options to broaden() will become unsustainable. Allowing the existing width argument to take a list, or even a function, is not too bad. Would we also need options to clarify which axes the function refers to? What about width_unit, width_lower_limit, width_convention, adaptive_error? I'm not fond of multi-mode functions with arguments that do nothing in most cases.

Re: width_unit, the reason it exists is that in a polynomial not all the coefficients have the same unit. E.g. the TOSCA resolution polynomial is σ / cm⁻¹ = 1×10⁻⁷ (E / cm⁻¹)² + 5×10⁻³ (E / cm⁻¹) + 2.5, so it can't be stored correctly as a single Quantity! But maybe internally something like Tuple(Polynomial, Quantity) would work, where the Quantity is a scale factor containing the unit: the input is divided by it and the output is multiplied by it.
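A minimal sketch of that divide-then-multiply idea (not Euphonic API; the coefficients are the TOSCA-like values above and everything is in cm⁻¹):

import numpy as np
from numpy.polynomial import Polynomial
from euphonic import ureg

width_poly = Polynomial([2.5, 5e-3, 1e-7])  # acts on dimensionless numbers
width_scale = 1 * ureg('1/cm')              # carries the unit

def evaluate_width(energies):
    # Divide the input by the scale factor, evaluate the dimensionless
    # polynomial, then multiply the result back by the scale factor
    x = (energies / width_scale).to('dimensionless').magnitude
    return width_poly(x) * width_scale

sigma = evaluate_width(np.array([100., 500., 1000.]) * ureg('1/cm'))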

@ajjackson
Collaborator Author

A neat benefit of working with Polynomial is that in principle one can also use Chebyshev. But in practice these resolution functions are so smooth that it will make no difference...

@rebeccafair
Member

Re: width_unit, the reason it exists is that in a polynomial not all the coefficients have the same unit. E.g. the TOSCA resolution polynomial is σ / cm⁻¹ = 1×10⁻⁷ (E / cm⁻¹)² + 5×10⁻³ (E / cm⁻¹) + 2.5, so it can't be stored correctly as a single Quantity! But maybe internally something like Tuple(Polynomial, Quantity) would work, where the Quantity is a scale factor containing the unit: the input is divided by it and the output is multiplied by it.

Ah of course, I hadn't thought of that. In that case perhaps it is better to have broaden_with_polynomial as a separate function. Should the default width_convention be FWHM like broaden?

@ajjackson
Collaborator Author

ajjackson commented Oct 21, 2022

Yeah, FWHM is more consistent with Euphonic in general. (And PyChop, actually...) It's just that the usual TOSCA function is in σ / cm-1 and that's what I've been testing with 😅

@codecov-commenter

codecov-commenter commented Oct 24, 2022

Codecov Report

Merging #245 (7ea6e4b) into master (0944a24) will decrease coverage by 0.76%.
The diff coverage is 81.86%.


@@            Coverage Diff             @@
##           master     #245      +/-   ##
==========================================
- Coverage   96.53%   95.78%   -0.76%     
==========================================
  Files          28       28              
  Lines        3807     3958     +151     
  Branches      752      798      +46     
==========================================
+ Hits         3675     3791     +116     
- Misses         77       98      +21     
- Partials       55       69      +14     
Files Changed Coverage Δ
euphonic/cli/intensity_map.py 96.61% <ø> (ø)
euphonic/qpoint_frequencies.py 99.13% <ø> (ø)
euphonic/qpoint_phonon_modes.py 98.81% <ø> (ø)
euphonic/cli/dos.py 89.04% <66.66%> (-10.96%) ⬇️
euphonic/broadening.py 82.29% <71.66%> (-17.71%) ⬇️
euphonic/spectra.py 95.12% <85.71%> (-1.49%) ⬇️
euphonic/cli/powder_map.py 91.94% <100.00%> (+1.10%) ⬆️
euphonic/cli/utils.py 91.09% <100.00%> (+0.27%) ⬆️


@ajjackson
Collaborator Author

ajjackson commented Oct 24, 2022

Re: width_unit, the reason it exists is that in a polynomial not all the coefficients have the same unit. E.g. the TOSCA resolution polynomial is σ / cm⁻¹ = 1×10⁻⁷ (E / cm⁻¹)² + 5×10⁻³ (E / cm⁻¹) + 2.5, so it can't be stored correctly as a single Quantity! But maybe internally something like Tuple(Polynomial, Quantity) would work, where the Quantity is a scale factor containing the unit: the input is divided by it and the output is multiplied by it.

Ah of course, I hadn't thought of that. In that case perhaps it is better to have broaden_with_polynomial as a separate function. Should the default width_convention be FWHM like broaden?

Well, the other benefit to having a Spectrum-specific broadening tool is that it can handle the bin-width issue: spectral data is scaled so the overall intensity is unaffected by bin width, whereas the lower-level broadening methods are designed to bin from raw data (which makes sense for an adaptive DOS). It's a bit of a headache keeping the units straight, so it's nice to abstract that away.
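To illustrate the kind of bin-width bookkeeping that wrapper would hide (names are illustrative, not Euphonic API; broaden_weights stands in for a lower-level routine that works on per-bin weights):

import numpy as np

def broaden_binned(bin_edges, intensities, broaden_weights):
    # Convert intensity-per-unit-x into an integrated weight per bin...
    widths = np.diff(bin_edges)
    weights = intensities * widths
    # ...apply the lower-level broadening, which returns weights per bin...
    broadened = broaden_weights(bin_edges, weights)
    # ...then scale back so the overall intensity is independent of bin width
    return broadened / widths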

It could be a pure function instead of a method. Maybe start with pure functions, then access from command-line tools, then after a while we can review and decide what would make most sense as a method?

@ajjackson
Collaborator Author

Another API aspect I'm not sure about is limiting the functional form to a Polynomial object. Although the type-hint says that, really the only features that matter are:

  • it's a callable that can take a dimensionless array x and return a corresponding array y
  • you can multiply the callable by a scale factor and this will impact the results

The former property could be satisfied by any function handle. The latter requires a suitable object and is e.g. also satisfied by the Chebyshev polynomial object. (For typical smooth resolution functions, the least-squares Polynomial and minimax Chebyshev approaches will give very similar results, but maybe in some cases the Chebyshev truncation workflow is preferred?)

The second property is also not really needed; it can be handled by scaling the output instead of the object. Multiplying the coefficients is more computationally efficient, but I don't suppose that avoiding one array multiplication is really a big deal.

It's easy to foresee the utility of allowing an external, more complex, resolution function to be passed as a handle. I suppose a balance between YAGNI and future-proofing would be to have width_polynomial as a named argument, which could later become optional by offering an alternative width_function.
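To make those two properties concrete (a quick illustration with made-up coefficients):

import numpy as np
from numpy.polynomial import Chebyshev, Polynomial

poly = Polynomial([2.5, 5e-3, 1e-7])
cheb = poly.convert(kind=Chebyshev)  # same function, different basis

x = np.linspace(0., 1000., 5)
print(poly(x))   # property 1: callable on a dimensionless array
print(cheb(x))   # gives the same values as poly(x)

scaled = 0.5 * poly  # property 2: scaling the object scales its coefficients
print(np.allclose(scaled(x), 0.5 * poly(x)))  # True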

ajjackson force-pushed the instrument-broadening branch 2 times, most recently from af4ca42 to 03fc2e6 on October 26, 2022 11:29
@ajjackson
Collaborator Author

ajjackson commented Oct 26, 2022

An example of a more generic interface, as a basis for discussion:

spectrum1d.broaden(x_width: Union[Quantity, Callable[[Quantity], Quantity]],
                   **kwargs) -> Spectrum1D

spectrum2d.broaden(x_width: Union[Quantity, Callable[[Quantity], Quantity]],
                   y_width: Union[Quantity, Callable[[Quantity], Quantity]],
                   **kwargs) -> Spectrum2D

which type-dispatches to fixed or variable-width implementations depending on inputs and also passes on the corresponding **kwargs.

Pros:

  • the Tuple is gone, things are well-defined w.r.t. Quantity again
  • concise syntax for interactive use, notebooks etc.
  • consistent, backward-compatible and covers 2D combinations of fixed/variable
  • although it doesn't support 2x2 covariance functions, it also doesn't imply that it would. Such a feature could be feasibly added to the same API as xy_width: Callable[[Quantity, Quantity], Quantity] where the callable output is a dimensioned 2x2 array and the inputs are (x, y) coordinates.
  • As the Callable doesn't have to be a Polynomial, one can easily e.g. wrap a call to PyChop:
    from euphonic import Quantity, ureg
    from pychop.Instruments import Instrument

    def my_resolution(energies: Quantity) -> Quantity:
        instrument = Instrument('MAPS', chopper='A', freq=400)
        return instrument.getResolution(energies.to('meV').magnitude,
                                        Ei_in=1000.) * ureg('meV')
    maps_spectrum = spectrum.broaden(y_width=my_resolution)
    • Along similar lines, it would be simple to whip up a Callable that interpolates tabulated data from experiment or McStas.

Cons:

  • The user or calling function gains responsibility to provide a Callable that deals with Quantities correctly. This isn't too bad in practice; the one we use for the command-line tools would be something like lambda x: polynomial(x.to(poly_unit).magnitude) * poly_unit.
  • We can no longer make a rather pretty and efficient transformation between FWHM and STD conventions by multiplying the Polynomial coefficients.
  • A bunch of approximate-broadening-related keyword arguments become available even if you are just doing fixed-width broadening.
  • No way to specify widths that vary as a function of the other axis
    • this could be achieved by also accepting Callable[[Quantity, Quantity], Quantity]. We would need a way to detect the difference between the two types of callable, but actually this is fairly easy: len(inspect.signature(callable).parameters) (see the sketch after this list).
  • What do we do about the existing method keyword? We're no closer to supporting irregular grids and it's not clear how this would interact with new modes.
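A sketch of that parameter-count dispatch (illustrative only):

import inspect
from typing import Callable

def width_callable_kind(func: Callable) -> str:
    # One parameter -> f(x); two parameters -> f(x, y)
    n_params = len(inspect.signature(func).parameters)
    if n_params == 1:
        return '1d'
    if n_params == 2:
        return '2d'
    raise ValueError('Width callables should take 1 (x) or 2 (x, y) arguments')

assert width_callable_kind(lambda x: x) == '1d'
assert width_callable_kind(lambda x, y: x + y) == '2d'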

@mducle
Member

mducle commented Oct 26, 2022

I think the proposed syntax looks good!

  • No way to specify widths that vary as a function the other axis

Or you can add more keyword arguments - e.g. x_width_[of|from]_xy and y_width_[of|from]_xy - but this might become a bit of a mouthful... (I'm not sure whether "of" or "from" makes more sense.)

Or you can force users to use the xy_width keyword argument - this is essentially a diagonal covariance matrix right?

@ajjackson
Collaborator Author

ajjackson commented Oct 26, 2022

Or you can force users to use the xy_width keyword argument - this is essentially a diagonal covariance matrix right?

Right, if that is available and people are trying to get that clever with their resolution functions then they might as well use this and load it with the desired off-diagonal terms.

I think I'd prefer that to implementing keywords for every permutation!

@rebeccafair
Member

I definitely prefer this new proposed interface; the more consistent use of Quantity makes me feel less uneasy, and the PyChop example is very nice.

Cons:

  • The user or calling function gains responsibility to provide a Callable that deals with Quantities correctly. This isn't too bad in practice; the one we use for the command-line tools would be something like lambda x: polynomial(x.to(poly_unit).magnitude) * poly_unit.

Yeah, and as long as we document how to use it clearly, it shouldn't be too much trouble for users!

  • A bunch of approximate-broadening-related keyword arguments become available even if you are just doing fixed-width broadening.

I do worry about this; it's OK for now, as we can just tell users to look at the docs for the polynomial_broadening function, but it could get messy if we add more methods in the future. But I still think this way of doing it is better.

  • No way to specify widths that vary as a function of the other axis

xy_width sounds good to me!

  • What do we do about the existing method keyword? We're no closer to supporting irregular grids and it's not clear how this would interact with new modes.

I think it's still relevant; the 'fast approximate' polynomial broadening still uses a convolution in the end, and we probably still want to emit the same warning in the case of irregular grids, right? We could in the future evaluate a function for each bin (like in calculate_dos), which would be slow but at least more correct, and I don't think adding the polynomial broadening to broaden precludes this.

@ajjackson
Collaborator Author

ajjackson commented Oct 27, 2022

Something I've noticed in euphonic-dos that needs fixing up:

there is a logical pathway such that

  • if adaptive broadening and broadening width are both specified, the widths are multiplied together and applied
  • if adaptive broadening and Lorentzian broadening are both specified, an error is raised because "Currently only Gaussian shape is supported with adaptive broadening"

There are two issues here:

  • this isn't dimensionally consistent with specifying further broadening as a width. The width of the Gaussian function resulting from the convolution of two Gaussians is $\sigma_{1*2} = \sqrt{\sigma_1^2 + \sigma_2^2}$ (see the sketch after this list). Statistically this can be understood as variances $\sigma^2$ being additive when normally-distributed errors are combined.
  • It would never make sense to use a Lorentzian for adaptive DOS broadening as it has the wrong statistics for that job. However it is quite reasonable to apply Lorentzian broadening to represent some optical or scattering phenomenon on a distribution obtained by adaptive broadening. So when shape='lorentz' we should assume that the user means to apply Lorentzian broadening to the result of the (Gaussian) adaptive broadening. We aren't set up to interpolate the 2-D space of those function combinations, so this should be done in a separate step.
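A tiny numerical sketch of the quadrature rule for combining Gaussian widths:

import numpy as np

def combine_gaussian_sigmas(sigma_1, sigma_2):
    # Variances add when normally-distributed errors are combined
    return np.sqrt(sigma_1**2 + sigma_2**2)

# e.g. a 1.0 meV adaptive width combined with a 1.5 meV instrumental width
# gives ~1.80 meV, not the 2.5 meV a naive sum would suggest
print(combine_gaussian_sigmas(1.0, 1.5))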

@ajjackson
Collaborator Author

ajjackson commented Oct 27, 2022

Ah, looking at the help text I see that the existing behaviour is actually documented.

                eb_help = (
                    'If using fixed width broadening, the FWHM of broadening '
                    'on energy axis in ENERGY_UNIT (no broadening if '
                    'unspecified). If using adaptive broadening, this is a '
                    'scale factor multiplying the mode widths (no scale '
                    'factor applied if unspecified).')

I'm not sure that scaling the adaptive broadening is more useful than adding a fixed width to it, but I suppose that both are legitimate choices. Hmm, how best to handle this... 🤔

I would like to allow instrumental resolution functions to be used with adaptive broadening, but it seems incompatible with this behaviour. And we just ticked a major version number, so we shouldn't be breaking backward compatibility. (Although I'm also keen to switch that default implementation from 'reference' to 'fast'...)

@rebeccafair
Member

Ah, looking at the help text I see that the existing behaviour is actually documented.

                eb_help = (
                    'If using fixed width broadening, the FWHM of broadening '
                    'on energy axis in ENERGY_UNIT (no broadening if '
                    'unspecified). If using adaptive broadening, this is a '
                    'scale factor multiplying the mode widths (no scale '
                    'factor applied if unspecified).')

I'm not sure that scaling the adaptive broadening is more useful than adding a fixed width to it, but I suppose that both are legitimate choices. Hmm, how best to handle this 🤔

I was just about to reply with this haha. The conversion from mode gradients to widths scales with the cell size and an estimate of the q-point spacing (which is based on the number of q-points due to symmetry weighting; I don't think this is completely correct, but I think it's what CASTEP implements, if I remember). I wasn't sure this was totally correct or would cover all cases, so I wanted to make it easy to change this scaling, which is why I put it in the --eb parameter. I didn't think of the case of someone doing adaptive broadening and then fixed-width broadening afterwards, and along the same lines I didn't think of someone doing adaptive broadening then fixed-width Lorentzian broadening.

@ajjackson
Collaborator Author

Yeah, when you think of the feature as "broadening" it seems weird to need two steps. But if you think of "improved DOS convergence" and "instrumental resolution" as different features it makes sense!

@rebeccafair
Member

I would like to allow instrumental resolution functions to be used with adaptive broadening, but it seems incompatible with this behaviour. And we just ticked a major version number, so we shouldn't be breaking backward compatibility. (Although I'm also keen to switch that default implementation from 'reference' to 'fast'...)

This might be a terrible idea, but we could make it so --eb can take multiple values, in which case we treat them as broadening coefficients. We keep --adaptive + --eb with one value as the scaling factor it is at the moment and add a warning that this will change in the future. If someone wants to use --adaptive + --eb as $\sigma_{1*2} = \sqrt{\sigma_1^2 + \sigma_2^2}$ as you mentioned, they can hack it by doing something like --eb 1.5 0.0, and in some future version we switch the behaviour for --eb with one value.

@ajjackson
Collaborator Author

ajjackson commented Oct 27, 2022

I suppose the natural endpoint then would be to have separate --adaptive-scale and --energy-broadening arguments. But because --energy-broadening already exists, the path to transition is a bit ugly:

Euphonic v1.x

--adaptive-scale: if using adaptive broadening, apply dimensionless scale factor
--instrument-broadening: fixed or energy-dependent broadening step, either integrated into adaptive Gaussian widths or as a separate step for Lorentzian.
--energy-broadening: Existing behaviour, dispatches to others depending on adaptive mode. Raises a warning recommending you be more specific and use one of the others. Could also have your suggested behaviour that any nargs>1 is interpreted as instrument broadening.

Euphonic v2.x

--adaptive-scale: as above
--energy-broadening: equivalent to v1.x --instrument-broadening
--instrument-broadening: aliased to --energy-broadening, raises DeprecationWarning

Euphonic v3.x

--adaptive-scale: as above
--energy-broadening: as v2.x

@rebeccafair
Member

This looks like a good path to me, or at least the best we're going to get!

@ajjackson
Collaborator Author

Ok, I'll crack on with that scheme. Is there somewhere we should be recording this kind of API/deprecation roadmap?

@rebeccafair
Member

Create some issues and slap a Euphonic 2.0/3.0 label on them? If we end up with lots we can think about organising it a bit more...

- range cutoff has been eliminated, changing results
This is not ideal; in the case of very large broadening it will tend
to overestimate peak intensities. However, evaluating the kernel with
"correct" scaling derived from bin widths can drastically overestimate
peak heights at narrow broadening width.

There are better ways of doing this; the width range can be
constrained, or histogram integrals can be worked out
properly. However, this is not generally done and can lead to results
that look strange compared to other implementations (including the
Scipy Gaussian broadening) that use simple normalisation. Hopefully we
can revisit this some time.
- Although we have reverted to normalised kernels, increasing the
  calculation range has slightly changed the normalisation magnitude
It's quite common for band structure calcs to end up with slightly
mismatched q-sampling segments, so we allow it until a better option
is made available.
The kernel range has slightly changed, which in turn affects
normalisation. When plotted, the change to the test example is indistinguishable.
Justification same as 1d. The differences are actually a little bit
larger in these cases, with a max of around 2%. Visually we can see
nothing too wacky is happening.
In this data we can clearly see how the extended tail replaces zeros
that previously lay outside the truncation region.
Accept multiple arguments and try to set up polynomials.
- Have checked that this doesn't break existing
- Initial manual checks also look ok
Test files a bit big but makes them easy to visually debug.
Error messages were very confusing and misleading, suggesting that
there were problems with defining and importing things. Actually it
was just normal problems...
For new broadening methods we default to the new Chebyshev-log(energy)
fits. But test data is for the old cubic fit... First we make sure
that the option to select method is still working and existing tests pass.
@ajjackson
Collaborator Author

I have updated the parametrisation of kernel width from nominal error: a wider-ranging, more robust version has been developed and is included in a manuscript to be submitted soon (:crossed_fingers:).

As the existing cubic fit was used for test data and for adaptive broadening, at first we will allow them to exist in parallel and have adaptive broadening default to the cubic fit.

We should drop the inferior fit in the next major version; will create a milestoned Issue for that.

This logic case only applies when there is no adaptive broadening, so
backward compatibility is not an issue.
Very few changes needed! Most tests are for self-consistency or
against exact broadening. Added one new reference dataset for
spectrum2d.
@ajjackson
Collaborator Author

Comparing the new 2D test data (Chebyshev polynomial log-energy fit) to the existing data (cubic fit), the difference is reassuringly small.

(figure: euphonic_cheby_sanity_check.png, generated by the script below)

from pathlib import Path

from euphonic import Spectrum2D
from euphonic.plot import plot_2d_to_axis
import matplotlib.pyplot as plt

fig, axes = plt.subplots(ncols=2, nrows=2)

data_dir = Path.home() / 'src/euphonic/tests_and_analysis/test/data/spectrum2d'

spectra = {'Unbroadened': Spectrum2D.from_json_file(data_dir / 'synthetic_x.json'),
           'cubic': Spectrum2D.from_json_file(data_dir / 'synthetic_x_poly_broadened.json'),
           'cheby-log': Spectrum2D.from_json_file(data_dir / 'synthetic_x_poly_broadened_cheby.json')
           }

diff = Spectrum2D(spectra['cheby-log'].x_data,
                  spectra['cheby-log'].y_data,
                  spectra['cheby-log'].z_data - spectra['cubic'].z_data)

spectra['diff'] = diff

fig.suptitle(
    f"Maximum value in broadened data: {spectra['cubic'].z_data.max():.4f} \n"
    f"Maximum difference in broadened data: {diff.z_data.max():.3e}")

for ax, (label, spectrum) in zip(axes.flatten(), spectra.items()):
    plot_2d_to_axis(spectrum, ax)
    ax.set_title(label)

fig.tight_layout()
fig.savefig('euphonic_cheby_sanity_check.png')

@ajjackson
Collaborator Author

Small correction to error/width parametrisation breaks the corresponding test. The difference is actually negligible, so I will just update the reference file:

(figure: poly_2d_comparison)

Slight correction to cheby-log parametrisation gives slight difference in corresponding test ref
The safe range for cheby-log fit is a bit smaller than the data range;
it goes a bit crazy at the lowest two points!
ajjackson merged commit d502f49 into master on Aug 25, 2023
8 checks passed