---
layout: page
title: Factor Analysis
---
```{r options, echo=FALSE}
library(knitr)
opts_chunk$set(fig.path=paste0("figure/", sub("(.*).Rmd","\\1",basename(knitr:::knit_concord$get('infile'))), "-"))
```
## Factor Analysis
Before we introduce the next type of statistical method for batch effect correction, we introduce the statistical idea that motivates it: Factor Analysis. Factor Analysis was first developed over a century ago. Karl Pearson noted that grades in different subjects were correlated when the correlation was computed across students. To explain this, he posed a model with one factor, common across subjects for each student, that explained this correlation:
$$
Y_{ij} = \alpha_i W_1 + \varepsilon_{ij}
$$
with $Y_{ij}$ the grade for individual $i$ on subject $j$ and $\alpha_i$ representing the ability of student $i$ to obtain good grades.
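Since every student shares the same $W_1$, we can make the idea concrete with a quick simulation. This is only an illustrative sketch: the number of students, the number of subjects, and the noise level are all made-up values.
```{r}
## Illustrative sketch of the one-factor model (all numbers are made up):
## a single shared factor W1 plus per-student ability alpha induces
## correlation across every pair of subjects.
set.seed(1)
N <- 100                 # number of students (assumed for illustration)
alpha <- rnorm(N)        # ability of each student
W1 <- 1                  # the common factor is a constant
grades <- alpha * W1 + matrix(rnorm(N * 4, 0, 0.5), N, 4)
round(cor(grades), 2)    # grades are correlated across all four subjects
```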
In this example, $W_1$ is a constant. Here we will motivate factor analysis with a slightly more complicated situation that resembles the presence of batch effects. We generate a random $N \times 6$ matrix $\mathbf{Y}$ representing grades in six different subjects for $N$ different children. We generate the data in a way that the subjects are correlated, with some pairs more correlated than others:
```{r,echo=FALSE}
library(MASS)
library(rafalib)
n <- 250
p <- 6
set.seed(1)
## each student has two correlated latent traits: STEM and humanities ability
g <- mvrnorm(n,c(0,0),matrix(c(1,0.5,0.5,1),2,2))
## grades are the relevant latent trait plus independent noise for each subject
Ystem <- g[,1] + matrix(rnorm(n*p/2,0,0.65),n,p/2)
Yhum <- g[,2] + matrix(rnorm(n*p/2,0,0.65),n,p/2)
Y <- cbind(Ystem,Yhum)
colnames(Y) <- c("Math","Science","CS","Eng","Hist","Classics")
```
#### Sample correlations
Note that we observe high correlation across the six subjects:
```{r}
round(cor(Y),2)
```
A graphical look at the correlation matrix suggests a grouping of the subjects into STEM and the humanities.
In the figure below, high correlations are red, no correlation is white, and negative correlations are blue (code not shown).
```{r correlation_images,fig.cap="Images of correlation between columns. High correlation is red, no correlation is white, and negative correlation is blue.",echo=FALSE,fig.width=10.5,fig.height=5.25}
library(RColorBrewer)
mypar(1,2)
cols=colorRampPalette(rev(brewer.pal(11,"RdBu")))(100)
eps = matrix(rnorm(n*p),n,p)
par(mar = c(8.1, 8.1, 3.5, 2.1))
image(1:ncol(Y),1:ncol(Y),cor(Y)[,6:1],xaxt="n",yaxt="n",col=cols,xlab="",ylab="",zlim=c(-1,1),main="Actual Data")
axis(1,1:ncol(Y),colnames(Y),las=2)
axis(2,1:ncol(Y),rev(colnames(Y)),las=2)
image(1:ncol(Y),1:ncol(Y),cor(eps)[,6:1],xaxt="n",yaxt="n",col=cols,xlab="",ylab="",zlim=c(-1,1),main="Independent Data")
axis(1,1:ncol(Y),colnames(Y),las=2)
axis(2,1:ncol(Y),rev(colnames(Y)),las=2)
```
The figure shows the following: there is correlation across all subjects, indicating that students have an underlying hidden factor (academic ability, for example) that results in the subjects being correlated, since students who test high in one subject tend to test high in the others. We also see that this correlation is higher within the STEM subjects and within the humanities subjects. This implies that there is probably another hidden factor that determines whether students are better at STEM or at the humanities. We now show how these concepts can be explained with a statistical model.
#### Factor model
Based on the plot above, we hypothesize that there are two hidden factors $\mathbf{W}_1$ and $\mathbf{W}_2$ and, to account for the observed correlation structure, we model the data in the following way:
$$
Y_{ij} = \alpha_{i,1} W_{1,j} + \alpha_{i,2} W_{2,j} + \varepsilon_{ij}
$$
The interpretation of these parameters is as follows: $\alpha_{i,1}$ is the overall academic ability of student $i$ and $\alpha_{i,2}$ is the difference in ability between STEM and the humanities for student $i$. Now, can we estimate $W$ and $\alpha$?
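Before estimating them, it is worth checking that a two-factor model of this form can actually produce the block pattern we saw in the correlation image. The following is only a sketch: the loadings `W1` and `W2`, and the variances of the $\alpha$s and of the noise, are assumed values chosen for illustration.
```{r}
## Sketch: simulate the two-factor model with assumed loadings and check
## that it reproduces the observed block-correlation pattern.
W1 <- rep(1, 6)               # overall ability loads equally on all subjects
W2 <- c(1, 1, 1, -1, -1, -1)  # contrast between STEM and humanities
alpha1 <- rnorm(n)            # overall ability (n students, as simulated above)
alpha2 <- rnorm(n, 0, 0.5)    # STEM vs humanities difference
Ysim <- alpha1 %*% t(W1) + alpha2 %*% t(W2) + matrix(rnorm(n * 6, 0, 0.65), n, 6)
colnames(Ysim) <- colnames(Y)
round(cor(Ysim), 2)           # higher correlation within each block, as observed
```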
#### Factor analysis and PCA
It turns out that under certain assumptions, the first two principal components are optimal estimates for $W_1$ and $W_2$. So we can estimate them like this:
```{r}
s <- svd(Y)
What <- t(s$v[,1:2])
colnames(What)<-colnames(Y)
round(What,2)
```
As expected, the first factor is close to a constant and will help explain the observed correlation across all subjects, while the second factor differs between STEM and the humanities. We can now use these estimates in the model:
$$
Y_{ij} = \alpha_{i,1} \hat{W}_{1,j} + \alpha_{i,2} \hat{W}_{2,j} + \varepsilon_{ij}
$$
We can now fit the model and note that it explains a large percent of the variability:
```{r}
fit <- s$u[,1:2] %*% (s$d[1:2]*What)
var(as.vector(fit))/var(as.vector(Y))
```
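Another way to see this, using the SVD already computed above, is to look at the proportion of variability attributable to each principal component; the first two dominate:
```{r}
## Proportion of variability explained by each principal component:
## the first two components account for most of it.
round(s$d^2 / sum(s$d^2), 2)
```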
The important lesson here is that when we have correlated units, the standard linear models are not appropriate. We need to account for the observed structure somehow. Factor analysis is a powerful way of achieving this.
#### Factor analysis in general
In high-throughput data, it is quite common to see correlation structure. For example, notice the complex correlations we see across samples in the plot below. These are the correlations for a gene expression experiment with columns ordered by date:
```{r gene_expression_correlations, fig.cap="Image of correlations. Cell (i,j) represents correlation between samples i and j. Red is high, white is 0 and blue is negative.",message=FALSE}
library(Biobase)
library(GSE5859)
data(GSE5859)
n <- nrow(pData(e))
o <- order(pData(e)$date)
Y=exprs(e)[,o]
cors=cor(Y-rowMeans(Y))
cols=colorRampPalette(rev(brewer.pal(11,"RdBu")))(100)
mypar()
image(1:n,1:n,cors,col=cols,xlab="samples",ylab="samples",zlim=c(-1,1))
```
Two factors will not be enough to model the observed correlation structure. However, a more general factor model can be useful:
$$
Y_{ij} = \sum_{k=1}^K \alpha_{i,k} W_{j,k} + \varepsilon_{ij}
$$
We can use PCA to estimate $\mathbf{W}_1,\dots,\mathbf{W}_K$. However, choosing $K$ is a challenge and a topic of current research. In the next section we describe how exploratory data analysis might help.
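As a first peek at what such exploration can look like, here is a minimal sketch (not a formal rule for choosing $K$) that examines how much variability each principal component of the row-centered expression matrix explains; a sharp drop-off hints at how many factors might be worth considering.
```{r}
## Fraction of variability explained by each principal component of the
## row-centered expression data; a sharp drop-off suggests few factors dominate.
s <- svd(Y - rowMeans(Y))
plot(s$d^2 / sum(s$d^2), xlab = "Principal component",
     ylab = "Fraction of variance explained")
```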