Talks at the Center for Health Care Research & Policy and elsewhere on "Rethinking Statistical Significance"


"Rethinking Statistical Significance"

Thomas E. Love, Ph.D., Email: Thomas dot Love at case dot edu

2022-11-16 Talk to Case Neuroscience Society

Slides from my talk are here.

2018-10-10 discussion with KL2 scholars

Most of the material from the 2018-03-16 version of the talk was incorporated into a discussion I led for KL2 scholars on 2018-10-10, titled "Statistical Significance and Clinical Research," which covered many related topics.

The Brian Wansink Story

In addition to the readings below, you might want to look at the following items related to the much-discussed Brian Wansink story.

2018-03-16 version of the Talk, at the Center for Health Care Research and Policy

The slides from the 2018-03-16 version are available in PDF and were built using R Markdown.

What I Used to Teach (and think I was doing well...)

  • Null hypothesis significance testing is here to stay.
    • Learn how to present your p value so it looks like what everyone else does
    • Think about "statistically detectable" rather than "statistically significant"
    • Don't accept a null hypothesis, just retain it.
  • Use point and interval estimates
    • Try to get your statements about confidence intervals right (right = just like I said it)
  • Use Bayesian approaches when they seem appropriate
    • But look elsewhere for people to teach/do that stuff
  • Use simulation to help you understand non-standard designs
    • But, again, look elsewhere for examples
  • Power is basically a hurdle to overcome in a grant application
    • Retrospective power calculations are a waste of time and effort

What I Think I Think Now

  • Null hypothesis significance testing is much harder than I thought.
    • The null hypothesis is almost never a real thing.
    • Rather than rejiggering the cutoff, I would mostly abandon the p value as a summary
    • Replication is far more useful than I thought it was.
  • Some hills aren't worth dying on.
    • Think about uncertainty intervals more than confidence or credible intervals
    • Retrospective calculations about Type S (sign) and Type M (magnitude) errors can help me illustrate ideas.
  • Which method to use is far less important than finding better data
    • The biggest mistake I make regularly is throwing away useful data
    • I'm not the only one with this problem.
  • The best thing I do most days is communicate more clearly.
    • When stuck in a design, I think about how to get better data.
    • When stuck in an analysis, I try to turn a table into a graph.
  • I have A LOT to learn.
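The point above about retrospective Type S (sign) and Type M (magnitude) error calculations can be illustrated with a short simulation, in the spirit of Gelman and Carlin's "retrodesign" idea. This is a hedged sketch (in Python rather than the R used for the slides), assuming a study whose estimate is approximately Normal with a known true effect and standard error; the function name and parameters are my own for illustration.

```python
import numpy as np

def retrodesign(true_effect, se, n_sims=100_000, seed=42):
    """Simulate replications of a study whose estimate is drawn from
    Normal(true_effect, se), then summarize what happens among the
    replications that reach two-sided "significance" at alpha = 0.05.

    Returns (power, type_s, exaggeration):
      power        - proportion of replications that are "significant"
      type_s       - among significant results, proportion with the WRONG sign
      exaggeration - among significant results, mean |estimate| / |true_effect|
                     (the Type M, or exaggeration, ratio)
    """
    rng = np.random.default_rng(seed)
    z_crit = 1.96  # two-sided alpha = 0.05
    estimates = rng.normal(true_effect, se, n_sims)
    significant = np.abs(estimates) > z_crit * se
    power = significant.mean()
    # Type S error: a significant estimate pointing the wrong way
    type_s = (np.sign(estimates[significant]) != np.sign(true_effect)).mean()
    # Type M error: how much significant estimates overstate the magnitude
    exaggeration = np.abs(estimates[significant]).mean() / abs(true_effect)
    return power, type_s, exaggeration

# Example: a badly underpowered study (true effect small relative to SE).
# Significant results are rare, often wrong in sign, and greatly exaggerated.
power, type_s, exaggeration = retrodesign(true_effect=0.1, se=1.0)
```

The takeaway matches the bullet above: in a low-power setting, filtering on significance yields estimates that may have the wrong sign a sizable fraction of the time and overstate the true magnitude many-fold.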

Sources of Material in the Talk (plus a few things that didn't make the final cut)

From FiveThirtyEight.com

Nature.com

NOVA and NPR

Andrew Gelman's blog "Statistical Modeling, Causal Inference, and Social Science"

From Jeff Leek, and Simply Statistics

On Reproducible Research

Cartoons

Tweets
