Instrumental resolution broadening #245
Commits on Jul 19, 2023
Instrument broadening: initial rough implementation
Some obvious places for further improvement:
- Access from CLI interface
- Optimise for use with multiple data series (i.e. for Spectrum1DCollection and Spectrum2D we don't need to keep recalculating the adaptive broadening parameters)
- Lorentzians?
- Exact broadening?
But this is enough to play with and open discussion about the API.
(commit f76a652)
Instrument broadening: test case to check polynomial vs fixed-sigma
This is expected to evolve as implementation/API are tweaked, but makes a good sanity check as we go.
(commit f34c2e5)
Instrument broadening from Spectrum1D: fix weighting
There seem to be two y-axis conventions at work; for "sparse" data that has not been binned, the weights indicate a symmetry weighting or possible neutron scattering intensity. For data in a spectrum, there is also a weight for the bin width to compensate for bin-size effects. The variable-width broadening machinery currently expects the former (which makes sense for adaptive DOS broadening) and converts to the latter (which is the expected form for spectrum objects). Broadening _from_ a spectrum object therefore needs to account for this.
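The two y-axis conventions described above can be sketched in a few lines of NumPy (the array names here are illustrative, not Euphonic's actual attributes):

```python
import numpy as np

bin_edges = np.array([0., 1., 3., 4.])   # irregular bin boundaries
weights = np.array([2., 4., 1.])         # "sparse" weights (e.g. mode intensities)

# Spectrum convention: divide by bin width so y is intensity per unit x,
# compensating for bin-size effects
bin_widths = np.diff(bin_edges)
y_spectrum = weights / bin_widths

# Converting back recovers the raw weights
recovered = y_spectrum * bin_widths
```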
(commit 3b17b54)
Instrument broadening test: fix trouble with Numpy versions
Figure out which uses of the random generators both a) work and b) don't raise deprecation warnings across our range of supported Numpy versions. What's especially annoying is that RandomState isn't even the currently recommended way of doing this; according to the Numpy docs, best practice is now `rng = np.random.default_rng(seed)`, with the resulting Generator providing random() and integers() methods. But that doesn't exist in older versions, so we'll probably want to migrate this in future.
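For reference, the two interfaces side by side (a minimal sketch of the migration mentioned above):

```python
import numpy as np

# Legacy interface: available on all Numpy versions, no longer recommended
legacy = np.random.RandomState(42)
a = legacy.rand(4)              # floats in [0, 1)
b = legacy.randint(0, 10, 4)    # ints in [0, 10)

# Modern interface (Numpy >= 1.17): the documented best practice
rng = np.random.default_rng(42)
c = rng.random(4)               # replaces rand()
d = rng.integers(0, 10, 4)      # replaces randint()
```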
(commit 3192fc3)
Instrument broadening: use numpy Polynomial and Quantities
Requiring a special tuple is a _tiny_ bit awkward, but it's easier than defining a class for the same information! The unit handling is extra-clunky in order to support the minimum dependencies; newer Numpy/Pint combinations allow many more Numpy functions to be applied directly to Quantity objects. For now we have a new method on Spectrum1D; this might get refactored.
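A minimal sketch of evaluating a width polynomial with numpy's Polynomial class; the coefficients and unit names here are hypothetical, not the actual Euphonic parametrisation:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Hypothetical width polynomial, coefficients in ascending order:
# width(E) = 1.0 + 0.1*E + 0.01*E**2  (e.g. widths in meV for E in meV)
width_poly = Polynomial([1.0, 0.1, 0.01])

energies = np.linspace(0., 100., 5)
widths = width_poly(energies)

# With Pint in the mix, older Numpy versions need units stripped before
# the call, along the lines of:
#   width_poly(energies.to('meV').magnitude) * ureg('meV')
```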
(commit e02e7cc)
Pint workarounds for Numpy 1.16 and earlier
A lot of Numpy functions strip units from Quantities or otherwise misbehave up to version 1.16, whereas with 1.17 and later these function calls can be much cleaner. Unintuitively, this is linked more to the Numpy version than to the Pint version.
(commit 0384b51)
Spectrum1d poly broadening to euphonic.broadening; test not moved
While getting things working, refactor to pure functions.
(commit 8354898)
(commit 15b9a5b)
(commit 24fc747)
(commit d5e8eac)
Naive implementation of 2D spectral broadening by loop over slices
Results look good and don't seem horribly slow
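The loop-over-slices approach can be sketched like this (a naive stand-in, not Euphonic's implementation: each row of a 2-D map is convolved with its own Gaussian kernel):

```python
import numpy as np

def gaussian_kernel(sigma, dx, n_points=25):
    """Normalised Gaussian kernel sampled on the bin grid."""
    x = np.arange(-(n_points // 2), n_points // 2 + 1) * dx
    kernel = np.exp(-x**2 / (2 * sigma**2))
    return kernel / kernel.sum()

def broaden_rows(z, sigmas, dx):
    """Broaden each row of a 2-D intensity map with its own width."""
    out = np.empty_like(z)
    for i, row in enumerate(z):
        out[i] = np.convolve(row, gaussian_kernel(sigmas[i], dx), mode='same')
    return out

z = np.zeros((3, 101))
z[:, 50] = 1.    # a delta function in every row
broadened = broaden_rows(z, sigmas=[0.2, 0.5, 1.0], dx=0.1)
```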
(commit add4d4d)
Rework euphonic-dos energy broadening arguments
- Polynomial broadening not included yet; just use the first width value
- The current API is too ambiguous; sometimes we want to layer adaptive and instrumental broadening. Solve this by:
  - introducing explicit --adaptive-scale and --instrument-broadening, with the existing --energy-broadening dispatching as appropriate
  - over subsequent major versions, --energy-broadening can become a simple alias to --instrument-broadening and --instrument-broadening can be dropped.
- Where Gaussian broadening is applied on top of adaptive broadening, do this efficiently by calculating appropriate widths and applying once.
(commit 2231612)
Hack to stabilise 2-D scripts; broaden by first element
Changing energy-broadening to nargs='?' resulted in a list, which broke energy broadening in all CLI programs. These will soon be updated to use all the values; for now, just take the first as the width.
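The list-vs-scalar problem is easy to reproduce with argparse; this sketch uses nargs='+' (the argparse form that yields a list for one-or-more values) purely for illustration:

```python
import argparse

parser = argparse.ArgumentParser()
# Accepting multiple values makes the parsed attribute a list,
# which breaks downstream code expecting a single float
parser.add_argument('--energy-broadening', type=float, nargs='+')

args = parser.parse_args(['--energy-broadening', '1.5'])
# Even a single value arrives wrapped in a list
width = args.energy_broadening[0]   # stopgap: take the first value as the width
```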
(commit 510cde1)
euphonic-dos: accept polynomial energy resolution
An interesting problem: if we try to use the width-dependent broadening function for fixed widths, the results don't quite agree. It's not a big problem to avoid doing this, and avoiding it is probably more efficient. But it's weird: shouldn't it just create two kernels, align one of them to the exact width, and then mix the results at 100% "lower"? Is the actual convolution implementation giving different results (e.g. due to re-binning)?
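The kernel-mixing idea referred to above can be sketched as follows (a toy version of interpolative broadening, not Euphonic's actual scheme): a kernel of intermediate width is approximated by linearly mixing kernels at bracketing widths, and aligning the lower kernel to the target width reduces the mix to exact fixed-width broadening.

```python
import numpy as np

def gauss_kernel(x, sigma):
    kernel = np.exp(-x**2 / (2 * sigma**2))
    return kernel / kernel.sum()

x = np.linspace(-10., 10., 401)

# Approximate a kernel of width sigma by mixing kernels at sigma_lo, sigma_hi
sigma_lo, sigma_hi, sigma = 1.0, 2.0, 1.5
frac = (sigma - sigma_lo) / (sigma_hi - sigma_lo)
mixed = (1 - frac) * gauss_kernel(x, sigma_lo) + frac * gauss_kernel(x, sigma_hi)
exact = gauss_kernel(x, sigma)

# With the lower kernel aligned to the target width, frac = 0 and the
# mix is 100% "lower", i.e. exact fixed-width broadening
aligned = gauss_kernel(x, sigma)
```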
(commit 301e91b)
Refactor polynomial_broadening to use arbitrary function
Part of migration to more open-ended API
(commit a01e5d4)
(commit 217eea4)
Refactor remaining polynomial broadening functions
The (tested) polynomial width functions are now thin wrappers around more general width-function based broadeners
(commit 0702afb)
Check broadened 2D properly, implement slight tolerance
Differences are of order 1e-12 so not deeply concerning. It would be good to know exactly where they come from, though...
(commit d502845)
Spectrum1D broadening: refactor to method
- Remove the resulting circular import from euphonic.broadening. It's a little messy for now but will be ok when the Spectrum broadening functions are removed.
- Refactor bin width checks to a Spectrum method. We gain some lines of code but should keep the logic a bit clearer.
  - It feels natural to make this an assertion, but currently we just raise a warning, so this needs wrapping.
- cli.dos is now a bit cleaner; it just calls the broaden() method either way.
- 1D sanity check moved from broadening tests to spectrum tests.
(commit 3d0fd1d)
Cleanup: remove functional interface to 1D spectrum variable-width
The low-level variable broadening function still lives there, but Spectrum1D(Collection)-specific details can exist as methods only.
(commit 19e5ea9)
Spectrum2D: refactor variable-width broadening into class
- This is a touch messier in order to handle the various permutations of x and y broadening; bring in a whole private staticmethod that operates on Spectrum2D.
- Further streamlining is certainly possible, but maybe it can wait until we implement 2-D covariance input (xy_width).
- Another general streamlining approach would be to somehow ditch the fixed-width broadening implementation and make the variable-width one work efficiently and accurately with fixed-width inputs.
(commit 530235c)
(commit d4d9e9d)
(commit 44d50c3)
(commit 137acbd)
(commit 85a0351)
Variable-width Lorentzian broadening in euphonic.broadening
- New parametrisation for the Lorentzian width/error relationship
- We may want to look at the Gaussian one too; it seems to overestimate acceptable spacing for the lowest error range.
- Lorentzian gamma _is_ the FWHM, and the standard deviation is undefined for this function!
(commit f4a29c2)
Pass variable Lorentzian from euphonic-dos (looks wrong atm!)
Currently width "1 0 0" gives a wider broadening than "1" so something is being scaled inappropriately somewhere...
(commit a51ee2c)
Correct variable-width Lorentzian gamma
- A missing shape argument was causing a factor of ~2 (the Gaussian FWHM/sigma ratio).
(commit b7e20ca)
(commit 7bc4972)
Remove test for error on euphonic-dos with --adaptive and Lorentz
This combination of arguments is now allowed! It's slightly concerning that the test hangs on my machine rather than failing cleanly, but it seems ok on CI, and I suppose a crash is better than a false Pass...
(commit 5568885)
(commit 7d98fa3)
Fixed-width Lorentzian broadening: make more exact
The existing scheme truncates the Lorentzian function at 3 sigma. However, truncating Lorentzians is a poor way of making them more efficient, as the tails die away very slowly. Other approaches are available, but for now it is safest if this exact implementation always evaluates the kernel over twice the bin range, which guarantees no truncation. For similar reasons, the numerical normalisation might not be the best idea: it may be more correct for a long tail to carry intensity out of the plot range and reduce the overall visible magnitude. It doesn't really cost more to calculate the theoretical normalisation constant, so let's use that instead.
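A minimal sketch of the analytic-normalisation idea (illustrative numbers, not the Euphonic implementation): scale the discrete kernel using the Lorentzian's closed-form unit area rather than dividing by the kernel sum. Even over a wide range the slow tails carry a little intensity out of range, which is exactly the effect discussed above.

```python
import numpy as np

def lorentzian(x, gamma):
    """Lorentzian (Cauchy) profile with FWHM gamma and unit analytic area."""
    return (gamma / (2. * np.pi)) / (x**2 + (gamma / 2.)**2)

# Evaluate the kernel over the whole (wide) range rather than truncating
dx = 0.01
x = np.arange(-20., 20. + dx / 2, dx)
kernel = lorentzian(x, gamma=0.5) * dx   # analytic scaling, not kernel/kernel.sum()

# The slow tails carry some intensity beyond +/-20, so the discrete sum
# falls slightly short of 1
tail_loss = 1.0 - kernel.sum()
```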
(commit d13ecda)
Update Lorentzian-broadened unit test data
Lorentzian kernels are now calculated over the full bin range, so every y value has changed in the affected data. A plot posted to the Github PR shows it is a small change. Inspecting the 1D data, we can see that blocks of zeros (outside the old truncation range) now contain data.
(commit 4004925)
Update DOS script test for Lorentzian broadening
- range cutoff has been eliminated, changing results
(commit 2d268ff)
Lorentzian broadening: restore naive normalisation
This is not ideal; in the case of very large broadening it will tend to overestimate peak intensities. However, evaluating the kernel with "correct" scaling derived from bin widths can drastically overestimate peak heights at narrow broadening width. There are better ways of doing this; the width range can be constrained, or histogram integrals can be worked out properly. However, this is not generally done and can lead to results that look strange compared to other implementations (including the Scipy Gaussian broadening) that use simple normalisation. Hopefully we can revisit this some time.
(commit f896bf4)
Update Lorentzian broadening test data
- Although we have reverted to normalised kernels, increasing the calculation range has slightly changed the normalisation magnitude
(commit 2ca5e50)
Fix error->warning capture for broadening irregular bins
It's quite common for band structure calcs to end up with slightly mismatched q-sampling segments, so we allow it until a better option is made available.
(commit 7ec3163)
Spectrum1d Lorentzian broadening unit test: data update
The kernel range has slightly changed, which in turn affects normalisation. When plotted, the change to the test example is indistinguishable.
(commit 39a4004)
Update spectrum2d unit test data for Lorentzian broadening
Justification same as 1d. The differences are actually a little bit larger in these cases, with a max of around 2%. Visually we can see nothing too wacky is happening.
(commit 04b6028)
Update powder-map script test for Lorentzian broadening
In this data we can clearly see how the extended tail replaces zeros that previously lay outside the truncation region.
(commit e9c8e56)
Powder-map polynomial broadening
Accept multiple arguments and try to set up polynomials.
- Checked that this doesn't break existing behaviour
- Initial manual checks also look ok
(commit 370a17b)
2D variable-width broadening: unittest
The test files are a bit big, but this makes them easy to debug visually.
(commit 4b6fb49)
(commit 04147bf)
Fix TOSCA-like broadening doctest
Error messages were very confusing and misleading, suggesting that there were problems with defining and importing things. Actually it was just normal problems...
(commit 7a155c5)
(commit 368b2c9)
Commits on Jul 21, 2023
(commit afafe9d)
Broadening test updates: use old cubic parametrisation
For new broadening methods we default to the new Chebyshev-log(energy) fits. But test data is for the old cubic fit... First we make sure that the option to select method is still working and existing tests pass.
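A hypothetical stand-in for the Chebyshev-in-log(energy) idea (the real coefficients, units, and safe range live in the Euphonic implementation): fit a Chebyshev polynomial to widths as a function of log(energy) and evaluate it on the same axis.

```python
import numpy as np
from numpy.polynomial import Chebyshev

energies = np.geomspace(1., 100., 50)      # e.g. energies in meV (illustrative)
widths = 0.1 + 0.02 * np.log(energies)     # a smooth made-up width trend to fit

# Fit in log(energy) space; degree chosen arbitrarily for the sketch
fit = Chebyshev.fit(np.log(energies), widths, deg=4)
predicted = fit(np.log(energies))
```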
(commit 26f5dca)
Commits on Jul 24, 2023
(commit f3eab2e)
Commits on Jul 25, 2023
Use cheby-log fit for CLI-DOS instrumental broadening if no adaptive
This logic path only applies when there is no adaptive broadening, so backward compatibility is not an issue.
(commit 8a14a7e)
(commit 6e844a4)
Update unit tests for cheby-log broadening parametrisation
Very few changes needed! Most tests are for self-consistency or against exact broadening. Added one new reference dataset for spectrum2d.
(commit ee10daf)
Commits on Aug 24, 2023
(commit 46daa85)
Slight correction to cheby-log parametrisation gives a small difference in the corresponding test reference
(commit 7a38183)
Improve safety checks on parametrisation, update tests
The safe range for cheby-log fit is a bit smaller than the data range; it goes a bit crazy at the lowest two points!
(commit 3f208d0)
Commits on Aug 25, 2023
(commit 0ac4699)
(commit d5e4d91)
(commit 7ea6e4b)