# HMeta-d tutorial

## Welcome to the HMeta-d wiki!

Fitting group-level data in the HMeta-d toolbox requires the same data preparation as obtaining single-subject fits via MLE or SSE using Maniscalco & Lau’s MATLAB code (http://www.columbia.edu/~bsm2105/type2sdt/). This page therefore starts with a short tutorial on preparing data for estimating single-subject meta-d’, before explaining how to input data from a group of subjects into the hierarchical model.

## Preparing confidence rating data

Data from each subject need to be coerced into two vectors, nR_S1 and nR_S2, which contain confidence-rating counts for when the stimulus was S1 and S2, respectively. Each vector has length k*2, where k is the number of ratings available. Confidence counts are entered such that the first entry refers to counts of maximum confidence in an S1 response, and the last entry to maximum confidence in an S2 response. For example, if three levels of confidence rating were available and nR_S1 = [100 50 20 10 5 1], this corresponds to the following rating counts following S1 presentation:

- responded S1, rating=3 : 100 times
- responded S1, rating=2 : 50 times
- responded S1, rating=1 : 20 times
- responded S2, rating=1 : 10 times
- responded S2, rating=2 : 5 times
- responded S2, rating=3 : 1 time

This pattern of responses corresponds to responding “high confidence, S1” most often following S1 presentations, and least often with “high confidence, S2”. A mirror image of this vector would be expected for nR_S2. For example, nR_S2 = [3 7 8 12 27 89] corresponds to the following rating counts following S2 presentation:

- responded S1, rating=3 : 3 times
- responded S1, rating=2 : 7 times
- responded S1, rating=1 : 8 times
- responded S2, rating=1 : 12 times
- responded S2, rating=2 : 27 times
- responded S2, rating=3 : 89 times

Together these vectors specify the stimulus × response confidence-count matrix that is the basis of the meta-d’ fit, and can be passed directly into Maniscalco & Lau’s fit_meta_d_MLE function to estimate meta-d’ on a subject-by-subject basis.
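To make the mapping concrete, the counts above can be tallied directly from trial-level data. The following is a minimal sketch, assuming hypothetical trial-level vectors stimID (1 or 2), response (1 or 2), and rating (1 to nRatings, with 1 = lowest confidence); variable names are for illustration only:

```
% Tally trial-level data into nR_S1 and nR_S2 (a sketch; the vectors
% stimID, response and rating are assumed, not part of the toolbox)
nRatings = 3;
nR_S1 = zeros(1, nRatings*2);
nR_S2 = zeros(1, nRatings*2);
for r = nRatings:-1:1
    % "S1" responses fill the first half, ordered high-to-low confidence...
    nR_S1(nRatings-r+1) = sum(stimID == 1 & response == 1 & rating == r);
    nR_S2(nRatings-r+1) = sum(stimID == 2 & response == 1 & rating == r);
    % ...and "S2" responses fill the second half, low-to-high confidence
    nR_S1(nRatings+r) = sum(stimID == 1 & response == 2 & rating == r);
    nR_S2(nRatings+r) = sum(stimID == 2 & response == 2 & rating == r);
end
```

With nRatings = 3, the first entry of each vector holds maximum-confidence S1 responses and the last entry maximum-confidence S2 responses, matching the layout described above.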

## Fitting a hierarchical model

Estimating a group-level model using HMeta-d requires very little extra work. In HMeta-d, the nR_S1 and nR_S2 variables are cell arrays of vectors, with each entry in the cell containing confidence counts for a single subject. For example, to specify the confidence counts following S1 presentation listed above for subject 1, one would enter in MATLAB:

```
nR_S1{1} = [100 50 20 10 5 1]
```

and so on for each subject in the dataset. These cell arrays then contain confidence counts for all subjects, and are passed in one step to the main HMeta-d function:

```
fit = fit_meta_d_mcmc_group(nR_S1, nR_S2)
```

An optional third argument to this function is mcmc_params, a structure containing flags for choosing different model variants and for specifying the details of the MCMC routine. If omitted, reasonable default settings are chosen.
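As a sketch, overriding a few of these settings might look as follows. The field names here are assumptions based on the toolbox defaults and should be verified against the exampleFit_ scripts for your version of HMeta-d:

```
% Customise the MCMC routine (field names assumed; check exampleFit_
% scripts for the defaults shipped with your version of the toolbox)
mcmc_params.nchains  = 3;      % number of parallel chains
mcmc_params.nsamples = 10000;  % samples per chain after burn-in
mcmc_params.nburnin  = 1000;   % burn-in samples discarded per chain
fit = fit_meta_d_mcmc_group(nR_S1, nR_S2, mcmc_params);
```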

The call to fit_meta_d_mcmc_group returns a “fit” structure with several subfields. The key parameter of interest is fit.mu_logMratio, which is the mean of the posterior distribution of the group-level log(meta-d’/d’). fit.mcmc contains the samples of each parameter, which can be plotted with the helper function plotSamples. For instance, to plot the MCMC samples of the group-level meta-d’/d’, one would enter:

```
plotSamples(exp(fit.mcmc.samples.mu_logMratio))
```

Note the “exp”, which converts the samples so that meta-d’/d’ rather than log(meta-d’/d’) is plotted. The exampleFit_ scripts in the toolbox provide further examples, such as how to set up response-conditional models and how to visualise subject-level fits.

An important step in model fitting is checking that the MCMC chains have converged. While there is no way to guarantee convergence for a given number of MCMC samples, some heuristics can help identify problems. By using plotSamples, we can visualise the traces to check that there are no drifts or jumps and that each chain occupies a similar position in parameter space. Another useful statistic is Gelman & Rubin’s potential scale-reduction statistic R̂, which is stored in the field fit.mcmc.Rhat for each parameter (Gelman & Rubin, 1992). This provides a formal test of convergence that compares the within-chain and between-chain variance of different runs of the same model, and will be close to 1 if the samples of the different chains are similar. Large values of R̂ indicate convergence problems, whereas values < 1.1 suggest convergence.
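A simple screening of the group-level parameter might look as follows; this sketch assumes fit.mcmc.Rhat stores one R̂ value per monitored parameter, as described above:

```
% Flag possible convergence problems for the group-level parameter
% (assumes fit.mcmc.Rhat has per-parameter fields; check your fit output)
rhat = fit.mcmc.Rhat.mu_logMratio;
if rhat < 1.1
    fprintf('mu_logMratio: no evidence of non-convergence (Rhat = %.3f)\n', rhat);
else
    warning('mu_logMratio may not have converged (Rhat = %.3f)', rhat);
end
```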

As well as obtaining an estimate for group-level meta-d’/d’, we are often interested in quantifying the uncertainty in this estimate. This can be done by computing the 95% high-density interval (HDI) from the posterior samples (Kruschke, 2014). The helper function calc_HDI takes as input a vector of samples and returns the 95% HDI:

```
calc_HDI(exp(fit.mcmc.samples.mu_logMratio(:)))
```

The colon in the brackets selects all samples in the array regardless of their chain of origin. Because HMeta-d uses Bayesian estimation, it is straightforward to use the group-level posterior density for hypothesis testing. For instance, if the question is whether one group of subjects has greater metacognitive efficiency than a second group, we can ask whether the HDI of the difference between the group-level posteriors overlaps zero. Note, however, that it is incorrect to enter the subject-level parameters estimated by the hierarchical model into a frequentist test (e.g. a t-test), as this violates the independence assumption.
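Such a between-groups comparison can be sketched as follows, assuming fit1 and fit2 are independent group fits obtained from separate calls to fit_meta_d_mcmc_group (the variable names are illustrative):

```
% Between-groups comparison on log(meta-d'/d') (a sketch; fit1 and fit2
% are assumed to be two independent group-level fits)
sampleDiff = fit1.mcmc.samples.mu_logMratio(:) - fit2.mcmc.samples.mu_logMratio(:);
hdi = calc_HDI(sampleDiff);
if hdi(1) > 0 || hdi(2) < 0
    disp('95% HDI of the group difference excludes zero');
else
    disp('95% HDI of the group difference overlaps zero');
end
```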

In addition to enabling inference on individual parameter distributions, there may be circumstances in which we wish to compare models of different complexity. To enable this, JAGS returns the deviance information criterion for each model (DIC; lower is better; Spiegelhalter, Best, Carlin, & Van Der Linde, 2002). While the DIC is known to be somewhat biased towards models of greater complexity, it is a common method for assessing model fit in hierarchical models. In HMeta-d, the DIC for each model is stored in fit.mcmc.dic.
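A model comparison then reduces to comparing these values; a minimal sketch, assuming fit1 and fit2 are fits of the same data under two different model variants:

```
% Compare two model variants by DIC (lower is better); fit1 and fit2
% are assumed fits of the same data under different model variants
if fit1.mcmc.dic < fit2.mcmc.dic
    disp('Model 1 is preferred by DIC');
else
    disp('Model 2 is preferred by DIC');
end
```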
