Context: binomial regression on a constant, or combining independent samples where each sample is a 2x2 table.
Use case for meta-analysis, PR #6632.
In the case of rare events, it is likely that zeros are observed, i.e. no events in a sample.
The first paper looks like a good summary of several methods, with formulas, and an example with data that can be used to unit test several methods.
I'm not planning to do much for now, but we need a review of options that we can use for zeros in these cases. Current handling of zeros is a bit arbitrary or inconsistent in existing models/functions.
(E.g. contingency tables have an option to add 0.5 to each cell, some functions use 0.5 as a continuity correction, PASS recommends adding a "small" number (<0.01) to zeros for power (?) calculations, GLM binomial might work if it "clips" enough. Firth method?)
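As a sketch of the 0.5 continuity correction mentioned above (the function name and the rule of correcting only tables that contain a zero cell are my assumptions; conventions differ on whether to correct all tables or only those with zeros):

```python
import math

def log_odds_ratio_cc(a, b, c, d, cc=0.5):
    """Log odds ratio for a 2x2 table [[a, b], [c, d]], adding a
    continuity correction to every cell when any cell is zero.
    The 0.5 default mirrors the common Haldane-Anscombe correction;
    other choices (e.g. a "small" number < 0.01) appear above.
    Returns the log odds ratio and its Woolf variance.
    """
    if min(a, b, c, d) == 0:
        a, b, c, d = (x + cc for x in (a, b, c, d))
    log_or = math.log(a * d / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d  # Woolf variance on the (corrected) counts
    return log_or, var
```

Tables without zeros are returned unchanged, so the correction only perturbs the sparse tables.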
Piaget‐Rossel, Romain, and Patrick Taffé. 2019. “Meta-Analysis of Rare Events under the Assumption of a Homogeneous Treatment Effect.” Biometrical Journal 61 (6): 1557–74. https://doi.org/10.1002/bimj.201800381.
Two more references on the treatment of zeros:
Rücker, Gerta, Guido Schwarzer, James Carpenter, and Ingram Olkin. 2009. “Why Add Anything to Nothing? The Arcsine Difference as a Measure of Treatment Effect in Meta-Analysis with Zero Cells.” Statistics in Medicine 28 (5): 721–38. https://doi.org/10.1002/sim.3511.
Sweeting, Michael J., Alexander J. Sutton, and Paul C. Lambert. 2004. “What to Add to Nothing? Use and Avoidance of Continuity Corrections in Meta-Analysis of Sparse Data.” Statistics in Medicine 23 (9): 1351–75. https://doi.org/10.1002/sim.1761.
Mantel-Haenszel does very well in their simulations: low bias, confidence intervals with good coverage. By far the best method across the board.
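A minimal sketch of the Mantel-Haenszel pooled odds ratio (helper name is hypothetical). Note that individual zero cells don't break it, since zero terms just drop out of the sums:

```python
def mantel_haenszel_or(tables):
    """Pooled odds ratio across independent 2x2 tables given as
    (a, b, c, d) = (treatment events, treatment non-events,
    control events, control non-events). No continuity correction
    is needed as long as the sums below are both nonzero.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den
```

A table with a = 0 contributes nothing to the numerator but still contributes to the denominator, so sparse studies are down-weighted rather than discarded.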
Standard inverse variance (IV) weighting with zeros is often biased and has low coverage.
IV with a 0.5 continuity correction does better in some cases, but can also be worse.
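For comparison, a sketch of fixed-effect inverse-variance pooling (names are my own). With rare events the per-study variances are themselves poorly estimated, which is one source of the bias and undercoverage discussed here:

```python
import math

def inverse_variance_pool(estimates, variances):
    """Fixed-effect inverse-variance pooling of per-study effect
    estimates (e.g. continuity-corrected log odds ratios).
    Returns the pooled estimate and its standard error.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se
```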
They mention (p. 1571) that they tried the HKSJ scale adjustment. It improved undercoverage, but some undercoverage remained, and it makes cases where coverage is good without the scale adjustment conservative (over-coverage).
Binomial regression does well in many cases.
The Peto method works fine with small to moderate effects, but not well with larger effects or with larger imbalance in sample sizes (treatment versus control).
They mention that Peto might estimate another effect size and not provide an estimate of the OR as advertised.
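For reference, a sketch of the Peto one-step estimator in its standard observed-minus-expected over variance form (function name is mine). It handles zero cells without a continuity correction, but, per the simulations above, is biased for large effects or unbalanced arms:

```python
import math

def peto_log_odds_ratio(tables):
    """Peto one-step log odds ratio across 2x2 tables given as
    (a, b, c, d) = (treatment events, treatment non-events,
    control events, control non-events).
    """
    o_minus_e = v = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        e = (a + b) * (a + c) / n  # expected events in the treatment arm
        var = (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
        o_minus_e += a - e
        v += var
    log_or = o_minus_e / v
    se = math.sqrt(1.0 / v)
    return log_or, se
```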
other link functions:
"Finally, it should be noted that modifying the link function to obtain other ESs (i.e., log-link for the RR and identity-link for the RD) yielded numerical issues because none of these link functions insure the probabilities to be contained within the 0–1 interval" (p. 1571)
I didn't try to figure out how MH applies to the risk ratio (RR) and risk difference (RD). They have the formulas and references, but no explanation.
AFAIK, MH is just Chamberlain's conditional logit (but much older), so it should do well even if there is heterogeneity.
But it's based on the logit, so I think it only applies to the odds ratio (OR). For RR and RD it might just estimate a population average.
In several cases, the MLE for binomial and MH are the same as pooling all the samples ("collapsibility").
I didn't check the details. Pooling only works under the assumption of a correctly specified likelihood, e.g. no heterogeneity.
MH assumes a common effect size. (I didn't check their details. Conditional logit allows for heterogeneity in levels and estimates an average treatment effect, AFAIU. But it does not have a random-effects model structure in the treatment effect, e.g. the risk difference.)
Another note: AFAIU, they estimate the FE binomial model under heterogeneity by including a study/sample/panel-specific intercept (incidental parameters problem?).