Lower-bound N for MPT model selection using FIA (based on multiTree output)
MPT Model Selection Using Minimum Description Length (MDL / FIA)

Multinomial processing tree (MPT) models are often used in psychology to disentangle the latent cognitive processes underlying categorical responses. Selecting among competing MPT models is a common way to test psychological theories.

The MPT software multiTree (Moshagen, 2010) can compute the Fisher information approximation (FIA), a model selection criterion based on the minimum description length principle and comparable in purpose to AIC or BIC. Essentially, FIA trades off the fit and the complexity of the models under consideration. Its advantage is that it takes the functional complexity of the models into account (e.g., how the parameters in the MPT model are connected and whether order constraints such as Do>Dn are included).
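In the standard MDL formulation, FIA consists of a misfit term plus a parametric and a functional complexity term. A minimal sketch in Python (the function name and arguments are illustrative, not multiTree's API; the log-complexity constant is the value multiTree reports for the model):

```python
import math

def fia(neg_log_lik, k, n, log_complexity):
    """Fisher information approximation (FIA) of a model.

    neg_log_lik    -- -ln L at the maximum-likelihood estimate (misfit)
    k              -- number of free parameters
    n              -- total number of observations
    log_complexity -- log of the integral over sqrt(det I(theta)),
                      the functional-complexity term (order constraints
                      such as Do > Dn shrink this integral)
    """
    return neg_log_lik + (k / 2) * math.log(n / (2 * math.pi)) + log_complexity
```

Among competing models, the one with the smallest FIA value is preferred.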

However, FIA is only an approximation and can fail if the number of observations is too small, leading to severely biased model selection. As a remedy, it should only be used if the total number of observations (usually the number of responses times the number of participants) exceeds a lower bound (Heck, Moshagen, & Erdfelder, 2014).

The Excel/LibreOffice sheets available in FIAminimumN compute this lower-bound sample size for FIA.
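The type of bound involved can be sketched as follows: with two nested models where model 1 is simpler (k1 < k2 parameters, log-complexity constants c1 and c2), the FIA penalty of the more complex model exceeds that of the simpler one only once N is large enough. This is a reconstruction under the penalty form (k/2)·ln(N/2π) + C; variable names are mine, and the sheets may implement further refinements:

```python
import math

def fia_lower_bound_n(k1, c1, k2, c2):
    """Smallest N at which the more complex model (k2 parameters)
    also receives the larger FIA penalty.

    Solves (k2/2)*ln(N/2pi) + c2 > (k1/2)*ln(N/2pi) + c1 for N.
    Below this N, FIA can order the penalties the 'wrong' way,
    biasing model selection toward the more complex model.
    """
    if k1 >= k2:
        raise ValueError("expected k1 < k2 (model 1 is the simpler one)")
    return 2 * math.pi * math.exp(2 * (c1 - c2) / (k2 - k1))
```

Note that when c1 <= c2 the bound falls below 2π, i.e., it is satisfied by any realistic sample; the bound only bites when the simpler model has the larger functional complexity. N here is the total number of observations pooled over participants.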

The remaining files (.mdt) are the corresponding multiTree files containing the competing MPT models used as an example.
