For XSAdditive models, we could cache the model evaluated with norm = 1 and then just multiply the cached values by norm. This would in principle be faster, since there are cases where we re-evaluate the model with everything unchanged except the normalization, but we don't have good tests to show whether it would actually help.
Technically this holds for any linear model with a norm/ampl-style parameter, but we have no way of identifying such models automatically, whereas we can for XSPEC additive models.
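The idea can be sketched as a small wrapper (a toy illustration, not Sherpa's actual cache code; the names `NormCachingModel` and `calc_unit` are hypothetical): cache the result of evaluating the model with norm fixed at 1, keyed on the remaining parameters and the grid, and scale the cached array by the current norm.

```python
import numpy as np

class NormCachingModel:
    """Cache the norm=1 evaluation and scale the cached output by norm.

    A toy sketch: `calc_unit(shape_pars, x)` evaluates the model with the
    normalization fixed at 1; the norm is assumed to be the last parameter.
    """

    def __init__(self, calc_unit):
        self._calc_unit = calc_unit
        self._cache_key = None
        self._cache_val = None

    def __call__(self, pars, x):
        # Split off the norm; only the shape parameters and grid form the key.
        *shape_pars, norm = pars
        key = (tuple(shape_pars), tuple(x))
        if key != self._cache_key:
            self._cache_key = key
            self._cache_val = np.asarray(self._calc_unit(shape_pars, x))
        # A norm-only change re-uses the cached array and just rescales it.
        return norm * self._cache_val


# Usage: a toy power-law-like model where only the norm changes between calls.
model = NormCachingModel(lambda p, x: np.asarray(x, dtype=float) ** -p[0])
x = [1.0, 2.0, 4.0]
y1 = model([1.0, 2.0], x)   # evaluates calc_unit and caches the norm=1 result
y2 = model([1.0, 3.0], x)   # cache hit: only the scaling by norm differs
```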
- We should remove the norm parameter from the C++ interface, so we don't apply it and then have to remove it before caching. This complicates the Python call routine (it needs to know to apply the norm) but would simplify the C++ code, since we currently use a template to decide whether or not to apply the norm.
- The cache is applied to the Python model instance rather than to the function from sherpa.astro.xspec._xspec, which limits how much it can be shared.
- Tracking down whether this is actually beneficial is harder than I'd like, for various reasons tied up with the cache code.