
Is it worth optimizing the cache for XSAdditiveModels #1782

Open
DougBurke opened this issue May 24, 2023 · 1 comment

Comments

@DougBurke (Contributor)

For XSAdditive models, we can cache the data with norm = 1 and then just multiply the cached value by norm. This would technically be faster, since there are times when we end up evaluating the model with everything the same except the normalization; however, we don't have good tests to see whether it would really help.

Technically this is true for any linear model with a norm/ampl-style parameter, but we have no way of identifying such models automatically, whereas we do for XSPEC additive models.
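A minimal sketch of the idea, assuming nothing about the actual Sherpa cache code (the class and `eval_fn` here are hypothetical, not Sherpa API): evaluate once with the norm fixed at 1, cache that, and scale the cached array by the requested norm on every subsequent call.

```python
import numpy as np

class NormCachedModel:
    """Hypothetical wrapper: cache evaluations with norm=1 and
    multiply the cached result by the actual norm value."""

    def __init__(self, eval_fn):
        self.eval_fn = eval_fn   # evaluates the model for (pars, egrid)
        self._cache = {}

    def __call__(self, pars, egrid):
        # Assume the norm is the last parameter, as for XSPEC
        # additive models.
        *shape_pars, norm = pars
        key = (tuple(shape_pars), egrid.tobytes())
        if key not in self._cache:
            # Evaluate once with unit normalization.
            self._cache[key] = self.eval_fn(shape_pars + [1.0], egrid)
        return norm * self._cache[key]
```

With this, a fit that only varies the norm never re-evaluates the (potentially expensive) underlying model.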

@DougBurke (Contributor, Author)

So, several issues to think about, in no particular order:

  • we should remove the norm parameter from the C++ interface (so we don't apply it and then have to divide it back out before caching); this complicates the Python call routine (it needs to know to apply the norm) but would simplify the C++ code, since we currently have a template to decide whether or not to apply the norm
  • the cache is attached to the Python model instance rather than to the function from sherpa.astro.xspec._xspec, which limits sharing between instances
  • tracking down whether this is actually beneficial is harder than I'd like, for various reasons to do with the cache code
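The second bullet can be illustrated with a toy sketch (the names here are assumptions for illustration, not Sherpa code): if the cache is keyed on the underlying function plus its arguments rather than stored on each model instance, two instances wrapping the same compiled routine can share results.

```python
_shared_cache = {}   # keyed on (function, parameters, grid), not per instance

def cached_call(fn, pars, egrid):
    """Cache at the function level so all model instances share hits."""
    key = (fn.__name__, tuple(pars), tuple(egrid))
    if key not in _shared_cache:
        _shared_cache[key] = fn(pars, egrid)
    return _shared_cache[key]

class SharedCacheModel:
    """Hypothetical model wrapper whose evaluations go through the
    shared, function-level cache."""

    def __init__(self, fn):
        self.fn = fn   # stand-in for a sherpa.astro.xspec._xspec routine

    def __call__(self, pars, egrid):
        return cached_call(self.fn, pars, egrid)
```

A per-instance cache, by contrast, would evaluate the underlying function once per instance even for identical parameters and grid.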
