---
title: "Introduction to inferr"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Introduction to inferr}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r, echo=FALSE, message=FALSE}
library(inferr)
```
Inferential statistics allows us to make generalizations about a population using data drawn from it.
We use it when it is impractical or impossible to collect data on the entire population under study; instead,
we work with a sample that represents the population and use inferential techniques to generalize from the sample to the population. **inferr** builds upon the solid set of statistical tests provided in the **stats** package by accepting additional data types as inputs and by expanding and restructuring the test results.
The **inferr** package:
- builds upon the statistical tests provided in **stats**
- provides additional and more flexible input options
- returns more detailed and structured test results
As of version 0.1, **inferr** includes a select set of parametric and non-parametric statistical tests which are listed below:
- One Sample t Test
- Paired Sample t Test
- Independent Sample t Test
- One Sample Proportion Test
- Two Sample Proportion Test
- One Sample Variance Test
- Two Sample Variance Test
- Binomial Test
- ANOVA
- Chi Square Goodness of Fit Test
- Chi Square Independence Test
- Levene's Test
- Cochran's Q Test
- McNemar Test
- Runs Test for Randomness
These tests are described in more detail in the following sections.
## One Sample t Test
A one sample t-test is used to determine whether a sample of observations comes from a population with a specific mean. The observations must be continuous, independent of each other, approximately normally distributed and should not contain any outliers.
### Example
Using the hsb data, test whether the average of write differs significantly from 50.
```{r ttest}
infer_os_t_test(hsb, write, mu = 50, alternative = 'all')
```
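For a quick cross-check, the same hypothesis can be tested with `t.test()` from base R (a rough comparison only; the output is formatted differently and covers one alternative at a time):
```{r ttest_base}
# two sided test of H0: mean(write) = 50 using base R
t.test(hsb$write, mu = 50)
```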
## Paired t test
A paired (samples) t-test is used when you want to compare the means between two related groups of observations on some continuous dependent variable. In a paired sample test, each subject or entity is measured twice. It can be used to evaluate the effectiveness of training programs or treatments. If the dependent variable is dichotomous, use the McNemar test.
### Examples
Using the hsb data, test whether the mean of read is equal to the mean of write.
```{r pair1}
# Lower Tail Test
infer_ts_paired_ttest(hsb, read, write, alternative = 'less')
# Test all alternatives
infer_ts_paired_ttest(hsb, read, write, alternative = 'all')
```
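For comparison, the lower tail version of the same test can be run with base R's `t.test()` and `paired = TRUE` (a rough cross-check; the output formatting differs):
```{r pair_base}
# lower tail paired test of H0: mean(read - write) = 0 using base R
t.test(hsb$read, hsb$write, paired = TRUE, alternative = 'less')
```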
## Two Independent Sample t Test
An independent samples t-test is used to compare the means of a normally distributed continuous dependent variable for two unrelated groups. The dependent variable must be approximately normally distributed, and the cases/subjects in the two groups must be
different, i.e. a subject in one group cannot also be a subject of the other group. It can be used to answer questions such as:
- Does the average number of products produced by two machines differ significantly?
- Do the average salaries of graduate students differ based on gender?
### Example
Using the hsb data, test whether the mean for write is the same for males and females.
```{r ind}
infer_ts_ind_ttest(hsb, female, write, alternative = 'all')
```
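For a rough cross-check with base R, `t.test()` accepts a formula interface (this assumes `female` takes exactly two values); note that it defaults to the Welch (unequal variance) version, so the numbers may not line up exactly with every part of the inferr output:
```{r ind_base}
# Welch two sample t test of write by gender using base R
t.test(write ~ female, data = hsb)
```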
## One Sample Test of Proportion
The one sample test of proportion compares the proportion in one group to a specified population proportion.
### Examples
Using hsb data, test whether the proportion of females is 50%.
```{r os_prop1}
# Using Variables
infer_os_prop_test(hsb, female, prob = 0.5)
```
#### Using Calculator
```{r os_prop2}
# Calculator
infer_os_prop_test(200, prob = 0.5, phat = 0.3)
```
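For the calculator example above, an observed proportion of 0.3 in 200 observations corresponds to 60 successes, so a rough base R cross-check is possible with `prop.test()` (which applies a continuity correction by default, so the results may differ slightly):
```{r os_prop_base}
# 0.3 * 200 = 60 successes out of 200 trials, tested against p = 0.5
prop.test(x = 60, n = 200, p = 0.5)
```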
## Two Sample Test of Proportion
Two sample test of proportion performs tests on the equality of proportions using large-sample statistics. It tests that a categorical variable has the same proportion within two groups or that two variables have the same proportion.
### Examples
#### Using Variables
Using the treatment data, test the equality of proportions of the two treatments.
```{r ts_prop1}
# Using Variables
infer_ts_prop_test(treatment, treatment1, treatment2, alternative = 'all')
```
#### Use Grouping Variable
Using the treatment2 data, test whether outcome has the same proportion for males and females.
```{r ts_prop2}
# Using Grouping Variable
infer_ts_prop_group(treatment2, outcome, female, alternative = 'all')
```
#### Using Calculator
Test whether the same proportion of people from two batches will pass a review
exam for a training program. In the first batch of 30 participants, 30%
passed the review, whereas in the second batch of 25 participants, 50% passed the
review.
```{r ts_prop3}
# Calculator
infer_ts_prop_calc(n1 = 30, n2 = 25, p1 = 0.3, p2 = 0.5, alternative = 'all')
```
## One Sample Variance Test
The one sample variance comparison test compares the standard deviation (variance) of a sample to a hypothesized value. It determines whether the standard deviation of a population is equal to a hypothesized value. It can be used to answer the following questions:
- Is the variance equal to some pre-determined threshold value?
- Is the variance greater than some pre-determined threshold value?
- Is the variance less than some pre-determined threshold value?
### Examples
Using the mtcars data, compare the standard deviation of mpg to a hypothesized value.
```{r os_var}
# Lower Tail Test
infer_os_var_test(mtcars, mpg, 0.3, alternative = 'less')
# Test all alternatives
infer_os_var_test(mtcars, mpg, 0.3, alternative = 'all')
```
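As a rough sketch of what the test computes, the chi-square statistic is (n - 1) * s^2 / sigma0^2 with n - 1 degrees of freedom, where sigma0 is the hypothesized standard deviation (0.3 above). Worked out by hand:
```{r os_var_sketch}
# hand computation of the chi-square statistic for the one sample
# variance test, with hypothesized sd sigma0 = 0.3
n      <- nrow(mtcars)
chi_sq <- (n - 1) * var(mtcars$mpg) / 0.3 ^ 2
c(statistic    = chi_sq,
  p_upper_tail = pchisq(chi_sq, df = n - 1, lower.tail = FALSE))
```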
## Two Sample Variance Test
The two sample variance comparison test checks the equality of standard deviations (variances). It tests whether the standard deviation of a continuous variable is the same within two groups or whether the standard deviations of two continuous variables are equal.
### Examples
#### Use Grouping Variable
Using the hsb data, compare the variability of reading scores for males and females.
```{r ts_var1}
# Using Grouping Variable
infer_ts_var_test(hsb, read, group_var = female, alternative = 'all')
```
#### Using Variables
Using the hsb data, compare the standard deviation of reading and writing scores.
```{r ts_var2}
# Using Variables
infer_ts_var_test(hsb, read, write, alternative = 'all')
```
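For the two-variable case, a rough base R cross-check is the F test in `var.test()` (it reports one alternative at a time):
```{r ts_var_base}
# F test comparing the variances of read and write using base R
var.test(hsb$read, hsb$write)
```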
## Binomial Probability Test
A one sample binomial test allows us to test whether the proportion of successes on a two-level categorical dependent variable significantly differs from a hypothesized value.
### Examples
Using the hsb data, test whether the proportions of females and males are equal.
```{r binom_calc}
# Using variables
infer_binom_test(hsb, female, prob = 0.5)
```
#### Using Calculator
```{r binom_calc2}
# calculator
infer_binom_calc(32, 16, prob = 0.5)
```
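Reading the calculator example above as 16 successes out of 32 trials, the analogous base R call is `binom.test()`:
```{r binom_base}
# exact binomial test of 16 successes in 32 trials against p = 0.5
binom.test(x = 16, n = 32, p = 0.5)
```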
## ANOVA
The one-way analysis of variance (ANOVA) is used to determine whether there are any statistically significant differences between the means of two or more independent (unrelated) groups. It tests the null hypothesis that the samples in the groups are drawn from populations with the same mean. It cannot tell you which specific groups differ from each other, only that at least two groups differ, and it requires a numerical dependent variable.
### Examples
Using the hsb data, test whether the mean of write differs between the three program types.
```{r anova}
infer_oneway_anova(hsb, write, prog)
```
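For a rough cross-check, the same one-way model can be fit with base R's `aov()`; `prog` is wrapped in `factor()` here in case it is stored as numeric codes:
```{r anova_base}
# one-way ANOVA of write by program type using base R
summary(aov(write ~ factor(prog), data = hsb))
```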
## Chi Square Goodness of Fit Test
A chi-square goodness of fit test allows us to compare the observed sample distribution with an expected probability distribution.
It tests whether the observed proportions for a categorical variable differ from hypothesized proportions. The proportion of cases expected in each group of the categorical variable may be equal or unequal. It can be applied to any univariate distribution for which you can calculate the cumulative distribution function. It is applied to binned data, and the value of the chi-square test statistic depends on how the data are binned. For the chi-square approximation to be valid, the sample size must be sufficiently large.
### Example
Using the hsb data, test whether the observed proportions for race differ significantly from the
hypothesized proportions.
```{r gof1}
# basic example
infer_chisq_gof_test(hsb, race, c(20, 20, 20, 140))
```
#### Continuity Correction
```{r gof2}
# using continuity correction
infer_chisq_gof_test(hsb, race, c(20, 20, 20, 140), correct = TRUE)
```
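Since hsb has 200 observations, the hypothesized counts above correspond to probabilities of 0.1, 0.1, 0.1 and 0.7, so a rough base R cross-check (assuming race has four categories) is:
```{r gof_base}
# goodness of fit test with hypothesized counts rescaled to probabilities
chisq.test(table(hsb$race), p = c(20, 20, 20, 140) / 200)
```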
## Chi Square Test of Independence
A chi-square test is used when you want to test if there is a significant relationship between two nominal (categorical) variables.
### Examples
Using the hsb data, test if there is a relationship between the type of school attended (schtyp) and students' gender (female).
```{r chi1}
infer_chisq_assoc_test(hsb, female, schtyp)
```
Using the hsb data, test if there is a relationship between the type of school attended (schtyp) and students'
socioeconomic status (ses).
```{r chi2}
infer_chisq_assoc_test(hsb, schtyp, ses)
```
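For comparison, the base R equivalent is `chisq.test()` on the cross-tabulation; note that for 2 x 2 tables it applies Yates' continuity correction unless `correct = FALSE`:
```{r chi_base}
# chi-square test of independence between gender and school type
chisq.test(table(hsb$female, hsb$schtyp))
```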
## Levene's Test
Levene's test is used to determine if k samples have equal variances. It is less sensitive to departures from normality and is an alternative to Bartlett's test. This test returns Levene's robust test statistic and the two statistics proposed by Brown and Forsythe that replace the mean in Levene's formula with alternative location estimators. The first alternative replaces the mean with the median and the second alternative replaces the mean with the 10% trimmed mean.
### Examples
#### Use Grouping Variable
Using the hsb data, test whether the variance in reading score is the same across race groups.
```{r lev1}
# Using Grouping Variable
infer_levene_test(hsb, read, group_var = race)
```
#### Using Variables
Using the hsb data, test whether the variances of reading, writing and
social studies scores are equal.
```{r lev2}
# Using Variables
infer_levene_test(hsb, read, write, socst)
```
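A comparable check is available from `leveneTest()` in the **car** package (an external dependency, shown here only for comparison and not evaluated); its default center is the median, i.e. the Brown-Forsythe variant mentioned above:
```{r lev_base, eval = FALSE}
# Brown-Forsythe (median-centered) Levene test of read across race,
# using the car package; race is wrapped in factor() in case it is numeric
car::leveneTest(read ~ factor(race), data = hsb)
```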
## Cochran's Q Test
Cochran's Q test is an extension to the McNemar test for related samples that provides a method for testing for differences between three or more matched sets of frequencies or proportions. It is a procedure for testing if the proportions of 3 or more dichotomous variables are equal in some population. These outcome variables have been measured on the same people or other statistical units.
### Example
The exam data set contains scores of 15 students for three exams (exam1, exam2, exam3). Test whether the three exams are equally difficult.
```{r cochran}
infer_cochran_qtest(exam, exam1, exam2, exam3)
```
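As a sketch of what the test computes, and assuming the exam columns are coded 0/1 (fail/pass), Cochran's Q can be worked out by hand from the row and column totals:
```{r cochran_sketch}
# hand computation of Cochran's Q, assuming exam1-exam3 are coded 0/1
x  <- as.matrix(exam[, c("exam1", "exam2", "exam3")])
k  <- ncol(x)            # number of matched treatments (exams)
Cj <- colSums(x)         # successes per exam
Ri <- rowSums(x)         # successes per student
N  <- sum(x)
Q  <- k * (k - 1) * sum((Cj - N / k)^2) / sum(Ri * (k - Ri))
c(Q = Q, p_value = pchisq(Q, df = k - 1, lower.tail = FALSE))
```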
## McNemar Test
The McNemar test is a non-parametric test created by Quinn McNemar and first published in **Psychometrika** in 1947.
It is similar to a paired t test but is applied to a dichotomous dependent variable. It is used to test whether a statistically
significant change in proportions has occurred on a dichotomous trait at two time points in the same population. It can
be used to answer questions such as:
- Are two products equally appealing?
- Does the proportion of success vs. failure change significantly after treatment?
- Does the proportion of voters change significantly prior to and following a major political development?
### Examples
Using the hsb data, test whether the proportion of students in the himath and hiread groups is equal.
```{r mc3}
hb <- hsb
hb$himath <- ifelse(hsb$math > 60, 1, 0)
hb$hiread <- ifelse(hsb$read > 60, 1, 0)
infer_mcnemar_test(hb, himath, hiread)
```
Perform the above test using a table as input.
```{r mc1}
himath <- ifelse(hsb$math > 60, 1, 0)
hiread <- ifelse(hsb$read > 60, 1, 0)
infer_mcnemar_test(table(himath, hiread))
```
Perform the above test using a matrix as input.
```{r mc2}
infer_mcnemar_test(matrix(c(135, 18, 21, 26), nrow = 2))
```
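For comparison, the same matrix can be passed to base R's `mcnemar.test()`, which applies a continuity correction unless `correct = FALSE`:
```{r mc_base}
# McNemar test on the 2 x 2 table using base R
mcnemar.test(matrix(c(135, 18, 21, 26), nrow = 2))
```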
## Runs Test for Randomness
The runs test can be used to decide whether a data set comes from a random process. It tests whether the observations of a sequence are serially independent, i.e. whether they occur in a random order, by counting how many runs there are above and below a threshold. A run is a series of consecutive values that fall on the same side of the threshold, and the number of such values is the length of the run. By default, the median is used as the threshold. A small number of runs indicates positive serial correlation; a large number indicates negative serial correlation.
### Examples
The runs test is often used to check regression residuals for serial correlation; here we illustrate the available options using the read variable from the hsb data.
```{r runs1}
# basic example
infer_runs_test(hsb, read)
# drop values equal to threshold
infer_runs_test(hsb, read, drop = TRUE)
# recode data in binary format
infer_runs_test(hsb, read, split = TRUE)
# use mean as threshold
infer_runs_test(hsb, read, mean = TRUE)
# threshold to be used for counting runs
infer_runs_test(hsb, read, threshold = 0)
```
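As a rough sketch of the quantity behind the test, the run count itself can be computed by hand: values equal to the threshold (here the median of read) are dropped, and consecutive values on the same side of the threshold form a run:
```{r runs_sketch}
# hand count of runs above/below the median of read (ties dropped)
x     <- hsb$read
keep  <- x != median(x)
above <- x[keep] > median(x)
1 + sum(head(above, -1) != tail(above, -1))
```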