A Shiny web application for fitting, diagnosing, and interpreting count regression models. Upload your data, select predictors, and receive model output, assumption checks, and plain-language interpretations — all in one interface.
- Six count regression models: Poisson, Quasi-Poisson, Negative Binomial, ZIP, ZINB, and Tweedie
- Comprehensive diagnostics: RQR plots, influence diagnostics, VIF, and goodness-of-fit tests
- Interaction analysis: simple slopes, estimated marginal means, emtrends, contrasts, and Johnson-Neyman plots
- Automated plain-language interpretation of model results
- Downloadable PNG exports for all plots
Install the following R packages before running the app:

```r
install.packages(c(
  "shiny", "shinyWidgets", "tidyverse", "ggeffects", "MASS", "pscl",
  "statmod", "tweedie", "COMPoissonReg", "emmeans", "magrittr",
  "parameters", "broom", "car", "GGally", "patchwork", "DHARMa", "glue"
))
```
The app includes four built-in datasets from class that can be loaded directly from the sidebar without uploading a file:
| Dataset | Description |
|---|---|
| Armadillo (McMillan et al.) | Armadillo counts by hunter age and trek frequency (n = 38) |
| Monkey (McMillan et al.) | Monkey hunting counts from the same study |
| Greenberg 26 | Count data from Greenberg (n = 26) |
| CS Replication | Replication dataset used in class |
From the count-app/ directory:

```r
shiny::runApp(".")
```

Or from the terminal:

```sh
Rscript -e "shiny::runApp('.')"
```
Link: https://camilo-gc-q.github.io/Statistical-Project/vignette.html
A worked example using the app is available in vignette.qmd. It walks through a full count regression analysis of the McMillan Ache Armadillo dataset, including model selection, diagnostics, and interaction analysis using the Negative Binomial model.
To render it locally, open vignette.qmd in RStudio and click Render, or run:

```sh
quarto render vignette.qmd
```
Unit tests are written with shinytest2 and live in tests/testthat/. To run them, open R from the count-app/ directory and run:

```r
shinytest2::test_app(".")
```
The test suite covers: app load, five model types (Poisson, Quasi-Poisson, Negative Binomial, ZIP, ZINB), single predictor, interaction terms, and interaction without a main effect.
| Input | Description |
|---|---|
| Upload CSV File | Load your dataset in .csv format |
| Select Response Variable | Choose the count outcome variable |
| Select Predictor Variable(s) | Choose predictors; interaction terms (e.g. Age × Treks) can be selected directly |
| Select Offset Variable | Optional — for rate models (e.g. log population exposure) |
| Scale Variable(s) | Optional — z-score standardize continuous predictors before fitting |
| Select Model to Fit | Choose from six count regression models |
| Fit Model | Fit the selected model |
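The offset and scaling options correspond to standard R idioms. A minimal sketch of what these inputs map to under the hood (the column names `count`, `age`, and `exposure` are hypothetical, not taken from the app):

```r
# Hypothetical data: counts observed over varying exposure (e.g., trek hours)
d <- data.frame(
  count    = c(0, 2, 1, 4, 0, 3),
  age      = c(21, 35, 42, 29, 50, 38),
  exposure = c(1.0, 2.5, 1.5, 3.0, 0.5, 2.0)
)

# "Scale Variable(s)": z-score standardize a continuous predictor
d$age_z <- as.numeric(scale(d$age))

# "Select Offset Variable": log(exposure) enters the linear predictor
# with a fixed coefficient of 1, turning the model into a rate model
fit <- glm(count ~ age_z + offset(log(exposure)),
           family = poisson, data = d)
```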
- Displays the first rows of the uploaded dataset
- Summary statistics: mean, variance, min, max, proportion of zeros
- Count distribution histogram
- Pairwise scatterplot matrix with model-appropriate smooths
- Coefficient correlation matrix from the fitted model
- RQR Plot: Randomized quantile residual diagnostics with checks for normality, dispersion, excess zeros, and mean-variance relationship; includes model recommendation
- VIF Table: Generalized VIF for multicollinearity (supports interaction models)
- Assumption Checks: Eight-point Poisson assumption evaluation — overdispersion, linearity, zero inflation, events-per-predictor, goodness-of-fit, and more
- Influence Diagnostics: Leverage, Cook's distance, and DFFITS plots with reference thresholds
- Zero-Inflation Test: DHARMa-based simulation test for excess zeros (shown for non-zero-inflated models as a model selection check)
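The zero-inflation test above builds on DHARMa's simulation-based check. A sketch of the underlying calls, using a toy Poisson fit in place of whichever model the app has fitted:

```r
library(DHARMa)

# Toy Poisson fit; in the app, `fit` is the user's selected model
set.seed(1)
d <- data.frame(y = rpois(100, lambda = 1), x = rnorm(100))
fit <- glm(y ~ x, family = poisson, data = d)

# Simulate scaled residuals, then compare the observed number of zeros
# against the zero counts produced by the simulated datasets
simres <- simulateResiduals(fittedModel = fit)
testZeroInflation(simres)  # a ratio > 1 suggests excess zeros
```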
- Incidence Rate Ratio (IRR) table with exponentiated coefficients, 95% CIs, and p-values
- Plain-language interpretation of:
- Main effects
- Interaction terms
- Zero-inflation component (ZIP/ZINB only)
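The IRR table is built from exponentiated coefficients on the log link scale. A sketch of how such a table can be assembled in base R (toy data; the app's actual table formatting may differ):

```r
# Toy Poisson fit on simulated data
set.seed(1)
d <- data.frame(y = rpois(200, lambda = 2), x = rnorm(200))
fit <- glm(y ~ x, family = poisson, data = d)

# Exponentiating coefficients and profile-likelihood CIs gives IRRs:
# a multiplicative change in the expected count per unit of the predictor
irr <- data.frame(
  IRR     = exp(coef(fit)),
  exp(confint(fit)),   # 2.5% and 97.5% bounds
  p.value = summary(fit)$coefficients[, "Pr(>|z|)"]
)
irr
```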
Requires at least one interaction term in the model.
- Simple Slopes Plot: Predicted outcome across the focal predictor at low (−1 SD) and high (+1 SD) moderator levels
- Estimated Marginal Means: EMM at specific predictor/moderator combinations with interpretation
- Contrasts of Marginal Means: Pairwise comparisons between EMM cells
- Marginal Effects (emtrends): Slope of the focal predictor at each moderator level
- Contrasts of Marginal Effects: Tests of differences in slopes across moderator levels
- Johnson-Neyman Plot: Region of significance along the moderator's range
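These outputs follow the standard emmeans workflow. A hedged sketch with illustrative variable names (`Age` as focal predictor, `Treks` as moderator; the data are simulated, not the McMillan dataset):

```r
library(emmeans)
library(MASS)

# Toy data echoing the Armadillo example (names are illustrative only)
set.seed(2)
d <- data.frame(Age = rnorm(100, 40, 10), Treks = rnorm(100, 5, 2))
d$Kills <- rpois(100, exp(0.5 + 0.01 * d$Age * d$Treks / 50))

fit <- glm.nb(Kills ~ Age * Treks, data = d)

# Marginal effects (emtrends): slope of Age at low (-1 SD) and
# high (+1 SD) levels of the moderator
lo_hi  <- mean(d$Treks) + c(-1, 1) * sd(d$Treks)
trends <- emtrends(fit, ~ Treks, var = "Age", at = list(Treks = lo_hi))
trends

# Contrast of marginal effects: do the two slopes differ?
pairs(trends)
```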
| Model | Use Case |
|---|---|
| Poisson | Standard count data with equidispersion |
| Quasi-Poisson | Overdispersed counts (adjusts standard errors) |
| Negative Binomial | Overdispersed counts (estimates dispersion parameter) |
| Zero-Inflated Poisson (ZIP) | Excess zeros + Poisson-distributed counts |
| Zero-Inflated Negative Binomial (ZINB) | Excess zeros + overdispersed counts |
| Tweedie | Compound Poisson-Gamma; power parameter estimated via profile likelihood |
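The Tweedie power parameter mentioned above can be profiled with tweedie::tweedie.profile before fitting. A sketch on simulated data (grid and seed are arbitrary choices, not the app's defaults):

```r
library(tweedie)
library(statmod)

# Toy nonnegative outcome with exact zeros (Tweedie with 1 < p < 2 allows this)
set.seed(3)
d <- data.frame(x = rnorm(100))
d$y <- rtweedie(100, xi = 1.5, mu = exp(0.5 + 0.3 * d$x), phi = 1)

# Profile the power parameter p over a grid, then refit at the maximum
prof <- tweedie.profile(y ~ x, data = d,
                        p.vec = seq(1.2, 1.8, by = 0.1), do.plot = FALSE)
fit <- glm(y ~ x, data = d,
           family = statmod::tweedie(var.power = prof$p.max, link.power = 0))
```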
- `app.R` — Main Shiny application
- `R/`
  - `poisson.R` — Poisson model fitting and IRR table
  - `quasi_poisson2.R` — Quasi-Poisson model fitting
  - `nb.R` — Negative Binomial model fitting
  - `zip.R` — ZIP model fitting
  - `zinb.R` — ZINB model fitting
  - `tweedie.R` — Tweedie model fitting with power estimation
  - `compois.R` — COM-Poisson model fitting
  - `plotRQR.R` — Randomized quantile residual plots
  - `plotInfluence.R` — Influence diagnostic plots
  - `plotResiduals.R` — Residual diagnostic plots
  - `pairwise_plots.R` — Pairwise scatterplot matrix
  - `assumptions.R` — Poisson assumption checks
  - `emmeans.R` — EMM, emtrends, and contrast tables
  - `interpretation.R` — Plain-language model interpretation
  - `johnson_neyman.R` — Johnson-Neyman floodlight analysis
  - `corr_matrix.R` — Coefficient correlation matrix
  - `summary.R` — Count response summary statistics
- Prompt: We used Claude to help with the structure and syntax of Shiny UI components, since we were unfamiliar with the shiny and shinyWidgets packages.
Outcome: AI provided the structure (sidebar, tabsetPanel, renderUI patterns). We designed the content of each tab and what inputs/outputs were needed. AI helped us express that in Shiny syntax we hadn't used before.
- Prompt: We used Claude to help understand how to source and integrate separate R files (e.g., poisson.R, emmeans.R, johnson_neyman.R, etc.) into the main app.R.
Outcome: AI explained how `source()` works within a Shiny app and how to pass reactive data into external functions. We wrote the logic and statistical content of each module ourselves; AI helped with the connection between them.
- Prompt: We used Claude throughout development to debug error messages in R and Shiny, particularly around reactive expressions, NULL outputs, and model fitting edge cases.
Outcome: AI helped identify the source of specific errors. In each case we reviewed the fix and made sure we understood why it worked before applying it. Some suggestions were rejected or modified when they didn't fit the context of the app.
- Prompt: We asked Claude to go through all R files for edge cases where the code might crash or behave unexpectedly.
Outcome: Several issues were flagged (duplicate output definition, a non-existent input reference, interaction term handling in VIF). We reviewed each one and applied the fixes we agreed were valid. We rejected suggestions that added unnecessary complexity to the functionality.
- Prompt: We used Claude to help set up shinytest2 for our Shiny app and debug why tests kept failing.
Outcome: The root issue (Shiny suspending outputs in hidden tabs) was identified through back-and-forth debugging. We implemented the `outputOptions(..., suspendWhenHidden = FALSE)` fix after understanding why it was necessary. We wrote the structure and logic of all 9 tests ourselves.