---
title: "Home"
output:
html_document:
toc: false
---
`truncash` (Truncated ASH) is an exploratory project with Matthew, built on [`ashr`].
* [Matthew's initial observation on null, correlated data](voom_null.html)
Matthew did a quick investigation of the $p$-values and $z$-scores obtained for simulated null data (using just the voom transform, no correction) from real RNA-seq data from [GTEx](http://www.gtexportal.org/home/). Here is what he found.
"I found something that I hadn't realized, although it is obvious in hindsight: although you sometimes see inflation of $p$-values/$z$-scores under the null, the most extreme values are not inflated compared with expectations (and tend to be deflated). That is, the histograms of $p$-values that show inflation near $0$ (and deflation near $1$) actually hide something different going on at the very left-hand side near $0$. The qq-plots are clearer, showing that the most extreme values are deflated, or not inflated. This is expected under positive correlation, I think. For example, if all $z$-scores were the same (complete correlation), then the most extreme of $n$ would just be $N(0,1)$; but if independent, the most extreme of $n$ would have longer tails..."
Matthew's initial observation inspired this project. If, under positive correlation, the most extreme observations tend not to be inflated, maybe we can use them to control false discoveries. Meanwhile, if moderate observations are more prone to correlation-induced inflation, it may be better to make only partial use of their information.
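Matthew's point about complete correlation is easy to illustrate with a small simulation. Below is a sketch (in Python, although the analyses here are in R; the equicorrelated factor construction and the values of `n` and `rho` are arbitrary illustrations, not taken from the linked analysis): under strong positive correlation, the maximum of $n$ null $z$-scores is typically smaller than under independence.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, rho = 1000, 2000, 0.7

# Equicorrelated N(0,1) z-scores: z_i = sqrt(rho) * w + sqrt(1 - rho) * e_i,
# with a shared factor w; each z_i is still marginally N(0,1).
w = rng.standard_normal((reps, 1))
e = rng.standard_normal((reps, n))
z_corr = np.sqrt(rho) * w + np.sqrt(1 - rho) * e
z_indep = rng.standard_normal((reps, n))

# Typical size of the most extreme z-score in each replicate:
med_corr = np.median(z_corr.max(axis=1))
med_indep = np.median(z_indep.max(axis=1))
print(med_corr < med_indep)  # correlated maxima are typically "deflated"
```

At `rho = 0.7` the typical maximum drops well below the roughly $3.2$ expected for $1000$ independent standard normals, consistent with the quoted observation.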
* [Occurrence of extreme observations](ExtremeOccurrence.html)
As [Prof. Michael Stein](https://galton.uchicago.edu/~stein/) pointed out during a conversation with [Matthew](http://stephenslab.uchicago.edu/), if the marginal distribution is correct then the expected number of observations exceeding any threshold should be correct. So if the tail is "usually" deflated, it must be that with some small probability there are many large $z$-scores (even in the tail). Therefore, if "on average" we have the right number of large $z$-scores/small $p$-values, and "usually" we have too few, then "rarely" we should have too many. A simulation is run to check this intuition.
* [Step-down multiple comparison procedures on correlated null](StepDown.html)
If the most extreme $p$-values are never "too extreme" as Matthew observed, "step-down" procedures, starting with the most extreme $p$-values, should satisfactorily control FWER, even with generally inflated $z$-scores and hence skewed $p$-values.
* [`truncash` Model and first simulations](truncash.html)
[`ashr`]: https://github.com/stephens999/ashr
<!-- The goal of this new template is to simplify the setup and maintenance of a research website. -->
<!-- Specifically, -->
<!-- * Easier to build and extend the website using the new tools in the [rmarkdown][] package and [latest RStudio release][rstudio] -->
<!-- * Easier to deploy the website with Git and GitHub by only using one branch -->
<!-- [rmarkdown]: http://rmarkdown.rstudio.com/rmarkdown_websites.htm -->
<!-- [rstudio]: https://www.rstudio.com/products/rstudio/download/preview/ -->