
What happens in case of a filtering failure? #36

Closed
christelpei opened this issue May 8, 2017 · 3 comments

christelpei commented May 8, 2017

I am doing inference with pmcmc and see repeated filtering failures. The manual says this corresponds to the situation in which the log likelihood is below the parameter tol for all particles. (Is it actually the likelihood rather than the log likelihood, since the default value of tol is 1e-17, not -17?)
Unfortunately, I have not found what happens at that point: does the simulation simply continue until the particles (hopefully) increase in likelihood, are the particles reset, or does something else happen? I ask particularly because the time scale of changes in the traces of my simulations seems to grow after an initial phase of faster (though not consistent) change. It would be helpful to learn more so I can improve the convergence of pmcmc (similar questions seem to have been touched on in #13).


kingaa commented May 9, 2017

@christelpei: thanks for the good question.

As you say, a filtering failure occurs when the conditional likelihood (not the log likelihood) of every particle falls below the tolerance tol, which has the default value 1e-17.

When a failure occurs, there is no resampling, and the particle filter simply presses on. In effect, the datum at the failing time point is taken to contain no information. The likelihood assigned is not zero, but rather is equal to the tolerance.
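That rule can be sketched in a few lines of R. This is a hedged illustration only, not pomp's internal code: pfilter_step, particles, and weights are hypothetical names, with weights holding each particle's conditional likelihood at the current observation.

```r
## Illustrative sketch (not pomp internals): one step of a particle
## filter, showing how a filtering failure is handled.
pfilter_step <- function(particles, weights, tol = 1e-17) {
  if (all(weights < tol)) {
    ## Filtering failure: every particle falls below the tolerance.
    ## No resampling occurs; the datum is treated as uninformative,
    ## and the conditional likelihood is set to tol rather than zero.
    list(particles = particles, cond.lik = tol, failed = TRUE)
  } else {
    ## Normal step: resample particles in proportion to their weights.
    idx <- sample(seq_along(weights), size = length(weights),
                  replace = TRUE, prob = weights)
    list(particles = particles[idx], cond.lik = mean(weights),
         failed = FALSE)
  }
}
```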

As a general rule, filtering failures are a sign of incompatibility between the model (at least at the parameter point in question) and the data. One wants to see all the failures go away before making any inference. Of course, the problem need not lie with the model: persistent, isolated filtering failures may also indicate errors in the data.

As for improving the convergence of pmcmc, I would recommend first using mif2 to rapidly locate the high regions of the likelihood surface, so that you can initialize the relatively costly pmcmc algorithm at good locations. Second, use an adaptive proposal distribution such as that provided by mvn.rw.adaptive.
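A sketch of that two-step workflow follows, assuming po is an existing pomp object; the parameter names (beta, gamma) and the Nmif, Nmcmc, Np, and rw.sd values are placeholders to be adapted to your model, and this is not a recipe from the package documentation.

```r
library(pomp)

## 1. Use mif2 to climb toward a high-likelihood region
##    (all settings below are placeholders, not recommendations).
m1 <- mif2(po, Nmif = 50, Np = 1000,
           cooling.type = "geometric", cooling.fraction.50 = 0.5,
           rw.sd = rw.sd(beta = 0.02, gamma = 0.02))

## 2. Initialize pmcmc at the mif2 estimate, with an adaptive proposal.
p1 <- pmcmc(po, Nmcmc = 2000, Np = 1000,
            start = coef(m1),
            proposal = mvn.rw.adaptive(rw.sd = c(beta = 0.02,
                                                 gamma = 0.02)))
```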

christelpei commented

Thanks for the feedback. I have played around with mvn.rw.adaptive, but will probably have to work on the model to get better convergence and fewer filtering failures. It would, however, be helpful for me to better understand the warnings I get:

warnings()
Warning messages:
1: in 'pfilter': 8 filtering failures occurred.
2: in 'pfilter': 6 filtering failures occurred.
3: in 'pfilter': 14 filtering failures occurred.
...

In which context are these warnings given, and how do I interpret the number of filtering failures? That is, the likelihood of every particle was below tol 8 times, but 8 times out of how many time steps?


kingaa commented May 15, 2017

Yes, each individual filtering failure is the failure of every particle to be compatible with the data at a single time point, where compatibility is defined in terms of a likelihood threshold.

One thing you can do is determine where the filtering failures occur and whether they cluster at particular places in the time series. From a single pfilter run, you can extract the effective sample size (ESS; see eff.sample.size). Filtering failures correspond to points where the ESS is zero. The purpose of such an exercise is to find out whether specific data points are especially difficult for the model to explain.
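One way to carry out that exercise, assuming po is your pomp object (a hedged sketch; Np is a placeholder, and tol is shown at its default):

```r
library(pomp)

## Run the particle filter once and extract the effective sample size.
pf <- pfilter(po, Np = 1000, tol = 1e-17)
ess <- eff.sample.size(pf)

## Time points at which a filtering failure occurred (ESS of zero).
which(ess == 0)

## Plot ESS against observation time to spot troublesome data points.
plot(time(pf), ess, type = "l",
     xlab = "time", ylab = "effective sample size")
```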

In general, when the model parameters are highly suboptimal, many data-points will be incompatible with the model. One expects failures in such a case. When pmcmc has converged, and is doing a good job sampling from the posterior, one expects to see such failures go away, provided the model and data are not incompatible.

@kingaa kingaa closed this as completed Jun 5, 2017
@kingaa kingaa self-assigned this Jul 27, 2020