What happens in case of a filtering failure? #36
@christelpei: thanks for the good question. As you say, a filtering failure occurs when the conditional likelihood of every particle is below the tolerance. When a failure occurs, there is no resampling, and the particle filter simply presses on. In effect, the datum at the failing time point is taken to contain no information: the likelihood assigned is not zero, but equal to the tolerance.

As a general rule, filtering failures are a sign of incompatibility between the model (at least at the parameter point in question) and the data. One wants to see all the failures go away before drawing any inferences. Of course, the problem does not necessarily lie with the model: persistent, isolated filtering failures may also indicate errors in the data.

As for improving the convergence of pmcmc, I would first recommend using mif2 to rapidly locate the heights of the likelihood surface, so that you can initialize the relatively costly pmcmc algorithm at good starting points. Second, use an adaptive proposal distribution such as that provided by `mvn.rw.adaptive`.
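To make the behavior concrete, here is a minimal sketch (in Python pseudocode style, not pomp's actual implementation) of the resampling decision described above: when every particle's conditional likelihood falls below the tolerance, no resampling takes place and the conditional likelihood contributed by that datum is set to `tol` rather than zero. The function name and structure are illustrative assumptions.

```python
import numpy as np

def pfilter_step(weights, tol=1e-17):
    """One resampling decision of a bootstrap particle filter.

    `weights` holds the conditional likelihood of each particle for the
    current observation.  Returns (cond_lik, ancestors): the conditional
    likelihood contributed by this datum, and the ancestor indices to use
    when propagating particles to the next time point.
    """
    n = len(weights)
    if np.max(weights) < tol:
        # Filtering failure: every particle is below the tolerance.
        # No resampling (every particle survives unchanged) and the
        # datum is treated as uninformative: the conditional likelihood
        # is set to tol, not to zero.
        return tol, np.arange(n)
    # Normal case: the conditional likelihood is the mean particle
    # weight, and particles are resampled in proportion to their weights.
    cond_lik = np.mean(weights)
    ancestors = np.random.choice(n, size=n, p=weights / np.sum(weights))
    return cond_lik, ancestors
```

Because the failing datum contributes `log(tol)` instead of `-Inf`, the overall log likelihood stays finite and the filter can continue past the failure.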
Thanks for the feedback. I have played around with mvn.rw.adaptive, but will probably have to work on the model to get better convergence and fewer filtering failures. It would, however, be helpful for me to better understand the warnings I get:
In which context are the warnings given, and how do I interpret the number of filtering failures? That is, when the likelihood of every particle was below tol 8 times, out of which, and how many, situations (time steps?) is that?
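For readers unfamiliar with adaptive proposals: the idea behind schemes like `mvn.rw.adaptive` is to tune the random-walk proposal covariance from the chain's own history. The sketch below is a generic adaptive-Metropolis-style rule (Haario-type), offered only to illustrate the principle; it is not pomp's implementation, and the function name and parameters are assumptions.

```python
import numpy as np

def adaptive_rw_cov(chain, base_cov, t0=100, scale=None):
    """Proposal covariance for the next random-walk Metropolis step,
    adapted from the chain history.

    chain    : (n_iter, n_par) array of past parameter samples
    base_cov : covariance used before adaptation kicks in
    t0       : number of iterations before adaptation begins
    """
    chain = np.asarray(chain)
    n, d = chain.shape
    if scale is None:
        scale = 2.38 ** 2 / d            # classic optimal-scaling factor
    if n <= t0:
        return base_cov                  # too little history: fixed proposal
    emp = np.cov(chain, rowvar=False)    # empirical covariance of history
    return scale * emp + 1e-10 * np.eye(d)  # small jitter for stability
```

Shaping the proposal to the posterior's local geometry in this way typically raises acceptance rates and shortens the burn-in of pmcmc.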
Yes, each individual filtering failure is the failure of every one of the particles to be compatible with the data at one time point, where compatibility is defined in terms of a likelihood threshold. One thing you can do is try to determine where the filtering failures are occurring, and whether they occur at particular places in the time series. In general, when the model parameters are highly suboptimal, many data points will be incompatible with the model, and one expects failures in such a case.
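The diagnostic suggested above can be sketched as follows: given the per-particle conditional likelihoods at each time point, flag the time points where every particle fell below the tolerance. This is a generic illustration, not pomp's code; the function name and the matrix layout are assumptions.

```python
import numpy as np

def filtering_failures(cond_lik, tol=1e-17):
    """Indices of time points at which every particle's conditional
    likelihood fell below `tol` (one 'filtering failure' per such point).

    cond_lik : (n_times, n_particles) array of conditional likelihoods
    """
    cond_lik = np.asarray(cond_lik)
    failed = np.max(cond_lik, axis=1) < tol   # all particles below tol?
    return np.flatnonzero(failed)
```

With this, a warning like "8 filtering failures" means the returned index vector has length 8, out of `n_times` observation times; clustering of the indices points to the stretch of data the model cannot explain.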
I am doing inference with pmcmc and see repeated filtering failures. The manual says this corresponds to the situation in which the log likelihood is below the parameter tol for all particles (is it actually the likelihood rather than the log likelihood, since the default value of tol is 1e-17, not -17?).
Unfortunately, I have not found what happens at this point: does the simulation just continue until particles (hopefully) increase in likelihood, are particles reset, or does something else happen? I wonder particularly because the time scale of changes in the traces of my simulations seems to grow after an initial phase of faster (though not consistent) changes. It would be helpful to learn more in order to improve the convergence of pmcmc (similar questions seem to have been touched on in #13).