---
title: 'Estimation: MLR'
site: workflowr::wflow_site
output:
workflowr::wflow_html:
toc: true
editor_options:
chunk_output_type: console
---
Maximum likelihood with robust standard errors (MLR) is a commonly used estimation method for structural equation models when observed data are continuous.
MLR is an estimation method under normal theory maximum likelihood where the observed data are assumed to follow a multivariate normal distribution.
The *robust* part of MLR is a correction that yields standard errors and test statistics that are more accurate when the normality assumption is violated.
In this study, the standard errors are not directly of interest, so we will focus on the estimation of the fit function under MLR from which the resulting $F_{ML}$ is derived.
A discussion of the standard errors is left to a forthcoming paper on parameter estimation and recovery in ML-CFA.
For ML-CFA under MLR estimation, the general idea is to find parameters ($\theta$) that maximize the likelihood function of the observed data given a distributional assumption (typically normal theory in the social sciences).
In ML-CFA, the model is composed of two major pieces: 1) a model for the pooled within-group covariance matrix ($\Sigma_W$), and 2) a model for the between-group covariance matrix ($\Sigma_B$).
As shown in (GIVE REFERENCE TO PAGE THAT DESCRIBES THE ML-CFA MODEL), the sample estimators for the two population covariance matrices for each group ($j$) are
\[S_{W_j} = {(n_j -1)}^{-1} \sum_i^{n_j} (\mathbf{y}_{ij} - \bar{\mathbf{y}}_{j}){(\mathbf{y}_{ij} - \bar{\mathbf{y}}_{j})}^{\prime}\]
\[S_{gj} = n_j (\bar{\mathbf{y}}_{j} - \bar{\mathbf{y}}){(\bar{\mathbf{y}}_{j} - \bar{\mathbf{y}})}^{\prime}\]
where,
* $n_j$ is the sample size of group $j$ ($N = \sum_{\forall j} n_j$);
* $\mathbf{y}_{ij}$ is the observed vector of responses for individual $i$ in group $j$;
* $\bar{\mathbf{y}}_{j}$ is the vector of average observed responses in group $j$;
* $\bar{\mathbf{y}}$ is the vector of average responses across all groups;
* $S_{W_j}$ is the within-group covariance matrix of group $j$; and
* $S_{gj}$ is the between-group covariance matrix for group $j$.
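As a concrete illustration of these estimators, here is a minimal sketch in Python/NumPy using simulated data (the number of groups, group sizes, means, and variable count are arbitrary choices for illustration, not values from the study; the grand mean $\bar{\mathbf{y}}$ is taken as the pooled mean over all observations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated example: J = 3 groups, p = 2 observed variables,
# with group means shifted to create between-group variation.
groups = [rng.normal(loc=m, size=(n, 2)) for m, n in [(0.0, 30), (0.5, 40), (1.0, 50)]]

N = sum(len(y) for y in groups)             # N = sum of n_j
ybar = np.vstack(groups).mean(axis=0)       # grand mean vector (pooled over observations)

S_W = []  # within-group covariance matrices S_{W_j}
S_g = []  # between-group cross-product matrices S_{gj}
for y in groups:
    n_j = len(y)
    ybar_j = y.mean(axis=0)                 # group mean vector
    dev = y - ybar_j
    # S_{W_j} = (n_j - 1)^{-1} sum_i (y_ij - ybar_j)(y_ij - ybar_j)'
    S_W.append(dev.T @ dev / (n_j - 1))
    # S_{gj} = n_j (ybar_j - ybar)(ybar_j - ybar)'
    d = (ybar_j - ybar).reshape(-1, 1)
    S_g.append(n_j * (d @ d.T))
```

Note that each $S_{W_j}$ here is just the usual sample covariance matrix within group $j$, while each $S_{gj}$ is a rank-one matrix capturing how far group $j$'s mean vector sits from the grand mean, weighted by group size.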
The model-implied covariance matrices $\Sigma_W(\theta), \Sigma_B(\theta),$ and $\Sigma_{gj}(\theta)$ are needed for the MLR fit function.
We need the $\Sigma_{gj}(\theta)$ model-implied covariance matrix to help identify the average difference between the observed and model-implied covariances, which is defined as
\[\Sigma_{gj}(\theta) = \Sigma_B(\theta) + n_j^{-1}\Sigma_W(\theta)\]
which can be interpreted as the group-size-weighted deviation of group $j$ from the average group covariance matrix.
The maximum likelihood fit function is therefore:
\[F_{ML}=\sum_{j=1}^{J}(n_j-1) \left\lbrace \log\mid \Sigma_{W}(\theta) \mid + \mathrm{tr}\left(\Sigma_W^{-1}(\theta)S_{W_j}\right)\right\rbrace + \sum_{j=1}^{J} \left\lbrace \log\mid \Sigma_{gj}(\theta) \mid + \mathrm{tr}\left(\Sigma_{gj}^{-1}(\theta)S_{gj}\right)\right\rbrace\]
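To make the fit function concrete, the sum above can be sketched in Python/NumPy. The function name `f_ml` and its interface are our own illustration, not an existing API; in practice $\Sigma_W(\theta)$ and $\Sigma_B(\theta)$ would be produced by the factor model and minimized over $\theta$, whereas here they are simply passed in as fixed matrices:

```python
import numpy as np

def f_ml(Sigma_W, Sigma_B, S_W_list, S_g_list, n_list):
    """Evaluate the multilevel ML fit function F_ML (up to additive constants).

    Sigma_W, Sigma_B : model-implied within- and between-group covariance matrices
    S_W_list, S_g_list : per-group sample matrices S_{W_j} and S_{gj}
    n_list : group sizes n_j
    """
    Sigma_W_inv = np.linalg.inv(Sigma_W)
    _, logdet_W = np.linalg.slogdet(Sigma_W)
    f = 0.0
    for S_Wj, S_gj, n_j in zip(S_W_list, S_g_list, n_list):
        # within part: (n_j - 1){log|Sigma_W| + tr(Sigma_W^{-1} S_{W_j})}
        f += (n_j - 1) * (logdet_W + np.trace(Sigma_W_inv @ S_Wj))
        # between part uses Sigma_gj = Sigma_B + n_j^{-1} Sigma_W
        Sigma_gj = Sigma_B + Sigma_W / n_j
        _, logdet_g = np.linalg.slogdet(Sigma_gj)
        f += logdet_g + np.trace(np.linalg.inv(Sigma_gj) @ S_gj)
    return f
```

An estimation routine would pass `f_ml` (as a function of $\theta$ through the model-implied matrices) to a numerical optimizer; `slogdet` is used rather than `log(det(...))` for numerical stability with near-singular matrices.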
## Other Notes
MLR with continuous data closely matches what we have described above.
However, when the observed data are categorical, estimating the covariance matrices becomes much more computationally difficult.
The additional difficulty arises from the need to use numerical integration to compute the polychoric correlations among the observed categorical variables.
Each correlation requires two dimensions of integration, so as the number of variables increases these computations can take a very long time.
The observed polychoric correlation matrix then has to be decomposed into within- and between-group components, which is even more computationally burdensome.
Given these constraints on simply estimating the polychoric correlations, let alone the computational constraints of the model itself, with categorical data MLR requires the use of numerical integration for each latent variable in the model.
Treating ordered categorical data with at least five response options as approximately continuous can be used to drastically speed up convergence.