Setting high precision for a nonlinear latent variable: a comment #236
Replies: 1 comment 2 replies
-
Hi Guido,
Joint modelling with non-linear predictors (through the product of the "bias parameter" and the random field) can be sensitive to the specifics of the problem, e.g. if one data source dominates the other, or if the model is mis-specified. For some problems, the joint latent mode ends up in a "bad" place, pushing the bias parameter away from zero, which is perhaps what happened in your case. When this happens, the combined effect may still be estimated appropriately; the problem can be a partial non-identifiability of the individual components. We are investigating this further, and I would be interested in having a look at your specific case, to see if it is something that can be addressed and/or properly explained. I notice you set the…
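In the meantime, one quick check for this kind of partial non-identifiability is to compare the posterior of the combined effect with the individual components. A rough sketch only (it assumes your fitted object is called `fit`, the components are named `alpha` and `field`, and `pred_locs` holds prediction locations; adjust to your actual code):

```r
library(inlabru)

# Sketch; 'fit', 'alpha', 'field' and 'pred_locs' are placeholders for your
# fitted bru object, component names and prediction locations.
pred <- predict(
  fit, newdata = pred_locs,
  formula = ~ list(
    field_only   = field,          # latent field on its own
    scaled_field = alpha * field   # combined effect seen by the biased source
  )
)

# If 'scaled_field' has narrow credible intervals while 'alpha' and 'field'
# individually are wide or unstable, that points to the partial
# non-identifiability of the individual components mentioned above.
```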
-
Dear Finn,
I have another question about model convergence.
I am testing a data fusion model similar to the one described in the paper by Villejo et al. (2024), "A Data Fusion Model for Meteorological Data using the INLA-SPDE method", available through arxiv.org.
The objective is to estimate a common latent field from ground (sparse) observations and model-based continuous gridded surfaces in Europe. As in Villejo's paper, a multiplicative bias parameter alpha is introduced to account for the bias between the two data sources. I understand that inlabru is able to handle this non-linearity through a Taylor expansion.
For this multiplicative parameter, I set, in the component definition: alpha(1, mean.linear = 1, prec.linear = 950).
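In simplified form, my setup looks roughly like this (placeholder names for the mesh, SPDE prior and data objects, not my exact code):

```r
library(INLA)
library(inlabru)

# Placeholders: 'mesh' is the SPDE mesh over Europe, 'stations' holds the
# sparse ground observations and 'grid' the model-based gridded surface.
spde <- inla.spde2.pcmatern(mesh,
                            prior.range = c(100, 0.5),
                            prior.sigma = c(1, 0.5))

cmp <- ~ Intercept(1) +
  field(geometry, model = spde) +
  alpha(1, model = "linear", mean.linear = 1, prec.linear = 950)

# Ground observations see the latent field directly
lik_ground <- like(obs ~ Intercept + field,
                   family = "gaussian", data = stations)

# The gridded surface sees the field scaled by the multiplicative bias alpha
lik_grid <- like(value ~ Intercept + alpha * field,
                 family = "gaussian", data = grid)

fit <- bru(cmp, lik_ground, lik_grid)
```

The non-linear part is the alpha * field product in the second likelihood, which inlabru linearises iteratively.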
My model converges only if the precision of the alpha parameter (prec.linear) is set to a high value (above 900). The denser the mesh, the higher this precision must be for the model to converge. (If I read the prior correctly, prec.linear = 950 corresponds to a Gaussian prior with standard deviation 1/sqrt(950) ≈ 0.03 around the prior mean of 1, i.e. a very informative prior.)
Despite this high precision, the posterior marginal distribution of alpha has a reasonable mean value of around 1.4. So the high precision on alpha is, on the one hand, necessary for model convergence and, on the other hand, does not prevent the posterior distribution of alpha from being estimated.
I have examined the bru_convergence_plot and the posterior distributions of the fixed and random effects. All the diagnostics look good: the convergence criteria are met, and the alpha parameter has a reasonable posterior distribution.
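Concretely, I am looking at things like the following (with `fit` as the fitted object; the exact element names may differ in my real code):

```r
# Convergence diagnostics of inlabru's iterative linearisation
bru_convergence_plot(fit)

# Posterior summary and marginal of the bias parameter
# (for a model = "linear" component, alpha shows up among the fixed effects)
fit$summary.fixed["alpha", ]
plot(fit$marginals.fixed[["alpha"]], type = "l",
     xlab = "alpha", ylab = "posterior density")
```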
Do you have any comments on why such a high precision on alpha is needed for model convergence?
Best,
Guido