Standard Error calculation in doctor-patient binary design #145
Hi DMilikien,

Thanks for the question and for taking advantage of the "Issues" feature of GitHub. It helps to know what investigators want and need, and to track progress. I'd like to help, but my capacity is limited. It would be nice to know you a little better. If you don't want to add more about yourself on your GitHub profile, send me an email. For one, I'd like to know if you are a statistician or not. That will help me in how I answer your questions.

Regarding your first question, "Does the BDG reflect the overall N?": the analysis accounts for the number of cases AND the number of readers, so there are two N's. Furthermore, the study design plays a big role; this determines how the variance components are combined. My initial guess is that reader variability is dominating the analysis, such that increasing the number of cases will have little to no effect on total variability. You probably need to increase the number of readers.

Regarding the second question, you can interrogate split-plot study designs with the software. I think you can push this all the way to groups that have one reader each, like the doctor-patient design you have. Since you refer to the "doctor-patient" study design, I assume you read my paper, "Multireader multicase variance analysis for binary data" (Gallas2007_J-Opt-Soc-Am-A_v24pB70). Given your data, you can take the estimated variance components and size any doctor-patient study using the coefficients in Table 1 (column 2 = per-reader weights, column 3 = per-case weights). You may also be interested in a related paper: "Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing" (Chen2018_J-Med-Img_v5p031410).

Let me know what you think and if you have additional questions.
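The reader-dominated behavior described above can be illustrated with a simplified two-component variance model. This is a sketch only: the variance components below are hypothetical values chosen for illustration, not BDG estimates, and the formula is not the full BDG decomposition from the paper.

```python
# Simplified sketch of why adding cases may barely change the SE in a
# doctor-patient design: with n_r readers, each reading n_c of their own
# cases, a two-component model (NOT the full BDG decomposition) gives
#   Var(percent correct) ~ var_reader / n_r + var_case / (n_r * n_c)

def se_percent_correct(var_reader, var_case, n_r, n_c):
    """Standard error of percent correct under the simplified model."""
    return (var_reader / n_r + var_case / (n_r * n_c)) ** 0.5

# Hypothetical variance components chosen so reader variability dominates.
var_reader, var_case = 0.006, 0.16
for n_c in (42, 67):  # 42 vs. 67 cases per reader, 6 readers
    print(n_c, round(se_percent_correct(var_reader, var_case, 6, n_c), 4))
```

Under these assumed components, the SE moves only from about 0.040 to 0.037 despite a 60% increase in cases, while doubling the number of readers (n_r = 12) would pull it below 0.030.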
I got the Chen reference wrong. It is chen2014_J-Med-Img_v1p031011. The one I gave is good too!
Thank you!
Doug
Thank you, Dr. Gallas and colleagues, for supplying software for MRMC analysis.
I have a study design with six readers, in which each read is scored correct or incorrect (binary). However, each reader has a unique set of patient scans (the doctor-patient design). I have created some simulated data under different scenarios of reader-to-reader variability for a fixed overall percent correct. I've imported each of those scenarios into the Java application to get a BDG MRMC-based estimate of the standard error of percent correct. Two questions come up:
1.) Comparing two scenarios with a fixed amount of variation between the "best" and "worst" readers at a fixed overall percent agreement, one with N=252 (42 reads per reader) and one with N=402 (67 scans per reader), I found that the standard errors computed by iMRMC were remarkably similar, se=0.0353 vs. se=0.0343, respectively, even though the difference in N is quite large. Does the BDG reflect the overall N?
2.) I'd like to use the BDG standard error to compute the sample size necessary, at a given power level, to demonstrate superiority of the observed percent correct to an acceptance criterion. However, the sample-size planning feature in the iMRMC Java app only applies to fully crossed study designs. Can you please suggest how to use Monte Carlo simulation (or otherwise) to impose this standard error on new datasets of various sizes until I find a sample size that meets the desired power?
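One way to frame question 2 as a Monte Carlo loop: simulate the observed percent correct around an assumed true value, apply a one-sided superiority test against the acceptance criterion, and count rejections. The sketch below uses a normal approximation, and the true percent correct (0.85), criterion (0.75), and candidate SEs are hypothetical planning values; the SE for each candidate sample size would still need to come from an iMRMC analysis at that size, since it need not shrink like 1/sqrt(N) when reader variance dominates.

```python
import random

def mc_power(true_pc, se, criterion, z=1.645, n_sims=20000, seed=1):
    """Monte Carlo power for a one-sided superiority test: fraction of
    simulated studies whose lower confidence bound clears the criterion."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_sims)
               if rng.gauss(true_pc, se) - z * se > criterion)
    return hits / n_sims

# Hypothetical planning values: true percent correct 0.85, acceptance
# criterion 0.75.  Each candidate SE would come from re-running iMRMC on
# a simulated dataset of the corresponding size.
for se in (0.0353, 0.0343, 0.0300, 0.0250):
    print(f"SE={se:.4f}  power={mc_power(0.85, se, 0.75):.3f}")
```

The search then amounts to increasing the candidate sample size (and re-estimating its SE) until the reported power reaches the target, e.g. 0.80.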
Thank you.