
JOSS review: Documentation #25

Open
grburgess opened this issue Mar 14, 2021 · 2 comments

@grburgess

In the documentation, several important features are listed without explanation. It would be helpful if these were explained quantitatively, with examples and comparisons:

when systematically analysing a large data-set

How does this differ from what is available in XSPEC/Sherpa?

when comparing multiple models

This is explained well for those familiar with Bayes-factor model comparison, but a link to that part of the docs would be useful to make the point.

when analysing low counts data-set

This claim is made here and here, beyond the normal use of proper Poisson likelihoods. What specific features make BXA shine in comparison? Can links be provided, along with comparative examples?

when you don’t want to babysit your fits

What is meant by this statement? As stated above, can examples be linked or shown that validate this?
I'm assuming it is because there is a stopping criterion in nested sampling, but is this meant to inform the user that they do not need to assess the correctness of their posterior?

when you don’t want to test MCMC chains for their convergence

I suppose this could be answered with the previous question.

In general, it could be important to fully separate what is a feature or strength of BXA itself from what is inherited from the packages it relies on / complements.

@grburgess
Author

linking openjournals/joss-reviews#3045

@JohannesBuchner
Owner

Thank you for catching this. The paper and docs are now updated to state:

BXA shines especially

when systematically analysing a large data-set, or

when comparing multiple models

when analysing low counts data-set with realistic models

because its robust and unsupervised fitting algorithm explores even complicated parameter spaces in an automated fashion. Unlike existing approaches, the user does not need to apply problem-specific algorithm parameter tuning, initialise to good starting points, or check for convergence, which allows building automated analysis pipelines.

The main point is that automatic analyses can be performed, which is difficult with the available MCMC implementations (because they need to be combined with custom initialisation and termination criteria).

While a numerical study comparing the performance of MCMC flavours (with their initialisation and convergence checks) against BXA on an example problem is beyond the scope of this paper/documentation, the paper text was updated to go into more detail on the differences from existing MCMC implementations. I focus on highlighting the difference in the nature of the approaches (a complete solution with initialisation and termination; no algorithm parameters need to be tuned to the problem). It is certainly possible to do low-count analyses with the existing MCMC capabilities as well, at least for mono-modal, Gaussian-like posteriors.
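The automatic termination discussed above can be illustrated with a toy sketch (not BXA's actual implementation, which delegates to UltraNest; the likelihood, tolerance, and helper names below are invented for illustration). The point is that the nested-sampling loop stops on its own once the remaining prior volume can no longer contribute appreciably to the evidence, with no starting point, burn-in length, or convergence diagnostic supplied by the user:

```python
import math
import random


def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))


def loglike(x):
    """Toy Gaussian log-likelihood on the unit interval (stand-in for a spectral fit)."""
    return -0.5 * ((x - 0.5) / 0.1) ** 2


def nested_sampling(nlive=100, tol=1e-3, seed=1):
    """Minimal 1D nested sampler with an automatic stopping criterion.

    Stops when even the best live point, weighted by the remaining prior
    volume, could raise the evidence estimate by less than a fraction `tol`.
    """
    rng = random.Random(seed)
    live = [rng.random() for _ in range(nlive)]     # uniform prior draws
    logl = [loglike(x) for x in live]
    logz = -1e300                                   # accumulated log-evidence
    logx_prev = 0.0                                 # log remaining prior volume
    it = 0
    while True:
        it += 1
        i = min(range(nlive), key=logl.__getitem__)  # worst live point
        logx = -it / nlive                           # shrunken prior volume
        logw = logx_prev + math.log1p(-math.exp(logx - logx_prev))
        logz = logaddexp(logz, logl[i] + logw)       # add shell contribution
        logx_prev = logx
        # automatic termination: remaining contribution negligible
        if max(logl) + logx < logz + math.log(tol):
            break
        # replace worst point by a prior draw above the likelihood threshold
        # (rejection sampling; fine for this toy problem only)
        while True:
            x = rng.random()
            if loglike(x) > logl[i]:
                live[i], logl[i] = x, loglike(x)
                break
    # add the final live points' contribution
    for l in logl:
        logz = logaddexp(logz, l + logx_prev - math.log(nlive))
    return logz, it
```

For this toy problem the true log-evidence is ln(0.1 * sqrt(2*pi)) ≈ -1.38, and the sampler lands near it without any tuning; contrast this with an MCMC run, where the user must pick a starting point and decide when the chain has converged.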
