Fix typos in 06-t-Tests #3

Open

wants to merge 1 commit into base: master
06-ttests.Rmd: 4 changes (2 additions, 2 deletions)
@@ -115,7 +115,7 @@ Let's start small and work through some examples. Imagine your sample mean is 5.

Let's say you take another sample. Do you think the mean will be 5 every time? Probably not. Let's say the mean is 6. So, what can $t$ be here? It will be a positive number, because 6-5 = +1. But, will $t$ be +1? That depends on the standard error of the sample. If the standard error of the sample is 1, then $t$ could be 1, because 1/1 = 1.

- If the sample standard error is smaller than 1, what happens to $t$? It get's bigger right? For example, 1 divided by 0.5 = 2. If the sample standard error was 0.5, $t$ would be 2. And, what could we do with this information? Well, it be like a measure of confidence. As $t$ get's bigger we could be more confident in the mean difference we are measuring.
+ If the sample standard error is smaller than 1, what happens to $t$? It gets bigger, right? For example, 1 divided by 0.5 = 2. If the sample standard error was 0.5, $t$ would be 2. And, what could we do with this information? Well, it would be like a measure of confidence. As $t$ gets bigger, we could be more confident in the mean difference we are measuring.

Can $t$ be smaller than 1? Sure, it can. If the sample standard error is big, say like 2, then $t$ will be smaller than one (in our case), e.g., 1/2 = .5. The direction of the difference between the sample mean and the population mean can also make $t$ negative. What if our sample mean was 4? Well, then $t$ will be negative, because the mean difference in the numerator will be negative, and the number in the bottom (denominator) will always be positive (remember why: it's the standard error, computed from the sample standard deviation, which is always positive because of the squaring that we did).
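
The arithmetic described in this hunk is easy to check directly. Below is a minimal R sketch (an illustration for this discussion, not code from the chapter) that plugs in the numbers used in the passage: a comparison mean of 5, sample means of 6 and 4, and standard errors of 1, 0.5, and 2.

```r
# t is the mean difference divided by the standard error of the mean
pop_mean     <- 5                # the comparison mean from the example
sample_means <- c(6, 6, 6, 4)    # sample means used in the passage
SEMs         <- c(1, 0.5, 2, 1)  # standard errors paired with each example

t_values <- (sample_means - pop_mean) / SEMs
data.frame(sample_mean = sample_means, SEM = SEMs, t = t_values)
#   sample_mean SEM    t
# 1           6 1.0  1.0
# 2           6 0.5  2.0
# 3           6 2.0  0.5
# 4           4 1.0 -1.0
```

A smaller standard error inflates $t$, a larger one shrinks it, and a sample mean below the comparison mean flips its sign, which is exactly the pattern the paragraph walks through.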

@@ -759,7 +759,7 @@ ggplot(t_df,aes(x=ts, group=dfs, color=dfs))+
```


- Notice that the red distribution for $df$ =4, is a little bit shorter, and a little bit wider than the bluey-green distribution for $df$ = 100. As degrees of freedom increase, the $t$-distribution gets taller (in the middle), and narrower in the range. It get's more peaky. Can you guess the reason for this? Remember, we are estimating a sample statistic, and degrees of freedom is really just a number that refers to the number of subjects (well minus one). And, we already know that as we increase $n$, our sample statistics become better estimates (less variance) of the distributional parameters they are estimating. So, $t$ becomes a better estimate of it's "true" value as sample size increase, resulting in a more narrow distribution of $t$s.
+ Notice that the red distribution for $df$ = 4 is a little bit shorter, and a little bit wider, than the bluey-green distribution for $df$ = 100. As degrees of freedom increase, the $t$-distribution gets taller (in the middle), and narrower in the range. It gets more peaky. Can you guess the reason for this? Remember, we are estimating a sample statistic, and degrees of freedom is really just a number that refers to the number of subjects (well, minus one). And, we already know that as we increase $n$, our sample statistics become better estimates (less variance) of the distributional parameters they are estimating. So, $t$ becomes a better estimate of its "true" value as sample size increases, resulting in a narrower distribution of $t$s.

There is a slightly different $t$ distribution for every value of degrees of freedom, and the critical regions associated with 5% of the extreme values are thus slightly different every time. This is why we report the degrees of freedom for each $t$-test: they define the distribution of $t$ values for the sample size in question. Why do we use $n-1$ and not $n$? Well, we calculate $t$ using the sample standard deviation to estimate the standard error of the mean; that estimate uses $n-1$ in the denominator, so our $t$ distribution is built assuming $n-1$ degrees of freedom. That's enough for degrees of freedom...
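
To make that concrete, here is a small R sketch (again, an illustration for this discussion, not code from the chapter) showing how the two-tailed 5% critical value of $t$ depends on the degrees of freedom, and how it shrinks toward the normal-distribution cutoff of about 1.96 as $df$ grows.

```r
# Two-tailed 5% critical values of t for several degrees of freedom.
# qt() gives quantiles of the t-distribution; 0.975 leaves 2.5% in each tail.
dfs  <- c(4, 10, 30, 100)
crit <- qt(0.975, df = dfs)   # upper critical value; the lower one is -crit

round(data.frame(df = dfs, critical_t = crit), 3)
#    df critical_t
# 1   4      2.776
# 2  10      2.228
# 3  30      2.042
# 4 100      1.984
```

So with only 4 degrees of freedom you need a $t$ beyond about ±2.78 to land in the extreme 5%, while with 100 degrees of freedom anything beyond about ±1.98 will do; this is why the degrees of freedom are reported alongside every $t$ value.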
