Extremely slow to converge #32

Open
ferrigno opened this issue Jan 5, 2022 · 1 comment

Comments

ferrigno commented Jan 5, 2022

Hi Johannes,

I am trying to use BXA to compare two models on an XMM-Newton data set, and I am having convergence issues, probably due to some parameter degeneracy, even though the models are not very complex:

constant * TBabs * pow (This one converged in an hour or so)
constant * TBabs * pcfabs * pow

The latter has been running for more than a day and does not appear to be converging.
The warning is:
/pyenv/versions/3.8.2/lib/python3.8/site-packages/ultranest/integrator.py:1632: UserWarning: Sampling from region seems inefficient (0/40 accepted in iteration 2500). To improve efficiency, modify the transformation so that the current live points (stored for you in BXA-pcfabs/extra/sampling-stuck-it%d.csv) are ellipsoidal, or use a stepsampler, or set frac_remain to a lower number (e.g., 0.5) to terminate earlier.
warnings.warn(warning_message)

I have already set frac_remain=0.5 in my parameters, but I do not fully understand this point about ellipsoids...
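
For concreteness, here is a minimal sketch of where frac_remain enters at the UltraNest level; the parameter names, likelihood, and prior transform below are toy placeholders for what BXA builds from the XSPEC model, and the exact way a given BXA version forwards run keywords may differ:

```python
import numpy as np
import ultranest

# toy stand-ins for the XSPEC-based likelihood and prior transform that BXA sets up
param_names = ['a', 'b']

def prior_transform(cube):
    return 10 * cube - 5  # uniform priors on (-5, 5) for both toy parameters

def loglike(params):
    return -0.5 * np.sum(params**2)

sampler = ultranest.ReactiveNestedSampler(param_names, loglike, prior_transform)

# frac_remain sets the termination criterion: stop once the estimated remaining
# fraction of the evidence integral drops below this value, so lower values
# (e.g. 0.5) terminate earlier
results = sampler.run(frac_remain=0.5)
```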

Do you have any suggestions for increasing the efficiency?

Attached are some files from the run:

debug.log
sampling-stuck-it%d.csv

The WIP notebook is at
https://gitlab.astro.unige.ch/xmm-newton-workflows/0862410301
and it uses the call to BXA in
https://gitlab.astro.unige.ch/ferrigno/pysas/-/blob/master/pyxmmsas/__init__.py#L420

JohannesBuchner (Owner) commented:

Making a corner plot of the CSV file gives me this:

[corner plot of the live points in sampling-stuck-it%d.csv]

Most of the panels look ellipsoidal. However, in the first panel of the second row, there seems to be an inverted L-shape.
This can be inefficient to sample with the MLFriends algorithm. Your choices are to:

  • try to reparameterize the model so that the posterior is easier to sample;
  • use a different algorithm, for example a step sampler, to continue (see the sketch after this list);
  • update the prior, if you know that some regions of the parameter space are unreasonable.
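
A minimal sketch of the step-sampler option, using the UltraNest API directly; the nsteps value and the toy likelihood/prior are illustrative, and how this is wired through a given BXA version is an assumption to check against the BXA documentation:

```python
import numpy as np
import ultranest
import ultranest.stepsampler

# toy stand-ins for the XSPEC-based likelihood and prior transform that BXA builds
param_names = ['a', 'b']

def prior_transform(cube):
    return 10 * cube - 5

def loglike(params):
    return -0.5 * np.sum(params**2)

sampler = ultranest.ReactiveNestedSampler(param_names, loglike, prior_transform)

# replace the default MLFriends region sampling with a slice step sampler,
# which copes better with strongly non-ellipsoidal (e.g. L-shaped) live-point sets
sampler.stepsampler = ultranest.stepsampler.SliceSampler(
    nsteps=40,  # illustrative; increase until the results no longer change
    generate_direction=ultranest.stepsampler.generate_mixture_random_direction,
)

results = sampler.run(frac_remain=0.5)
```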

Looking at the two arms of this L: the left arm tightly constrains the first parameter to a narrow range, while the right arm allows it to vary much more widely.

I don't know what your parameters mean, but I want to give you an example of a beneficial reparametrization.
Let's say the first and third parameters are each the normalisation of a component, each with a log-uniform prior. To first order, the data constrain the sum of these components. Therefore, an L shape appears, where at least one of the two component normalisations has to exceed some threshold. To remove the L shape, one can instead use a total normalisation parameter and a relative normalisation parameter (e.g., the ratio between the two components, or similar).
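
A minimal sketch of what such a reparametrization could look like in the prior transform; the parameter indices and prior ranges below are illustrative, not taken from your model, and note that this also implies a different joint prior than two independent log-uniform normalisations:

```python
import numpy as np

def prior_transform(cube):
    # Sample a total normalisation and the fraction contributed by the first
    # component, instead of two independent log-uniform normalisations.
    params = np.empty_like(cube)
    # total normalisation: log-uniform between 1e-6 and 1e-2 (illustrative range)
    total = 10 ** (-6 + 4 * cube[0])
    # fraction of the total carried by component 1: uniform in (0, 1)
    frac = cube[1]
    params[0] = total * frac        # normalisation of component 1
    params[1] = total * (1 - frac)  # normalisation of component 2
    # ... transform any remaining parameters here as before ...
    return params
```

The sampler then explores two roughly independent directions, the overall level and how it is split between the components, instead of the degenerate pair of normalisations.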
