
Sample size calculation with wrong zalpha? #50

Closed
patrickEinz opened this issue Apr 9, 2019 · 3 comments

Comments

@patrickEinz patrickEinz commented Apr 9, 2019

I tried to reproduce the sample size calculations in Table 4 of the Obuchowski paper (2004) for a single ROC curve. For a significance level of 0.05, an expected AUC of 0.7, a desired power of 0.9 and kappa = 1, the sample size calculation should result in 33 patients for each of the two groups.

However,
power.roc.test(auc=0.7, sig.level=0.05, power=0.9, kappa=1.0)
gives ncases = ncontrols = 40.21369 as a result.

Maybe the problem is that inside the function, the z-value for the significance level is calculated by
zalpha <- qnorm(sig.level)
which gives the lower alpha quantile (-1.64 instead of 1.64), not the upper one. I think it should be either
zalpha <- qnorm(sig.level, lower.tail = FALSE)
or, equivalently,
zalpha <- qnorm(1 - sig.level)
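[Editor's note: a quick check of the two quantiles, using Python's stdlib statistics.NormalDist as a stand-in for R's qnorm; the rounded values below match the -1.64 / 1.64 mentioned above.]

```python
from statistics import NormalDist

qnorm = NormalDist().inv_cdf  # inverse standard-normal CDF, like R's qnorm

lower = qnorm(0.05)      # lower 5% quantile: about -1.645
upper = qnorm(1 - 0.05)  # upper 5% quantile: about +1.645

# The critical value in the sample-size formula must be the positive,
# upper-tail quantile, which is why qnorm(sig.level) alone looks wrong.
print(round(lower, 3), round(upper, 3))  # -1.645 1.645
```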

Thank you very much for your work and for maintaining this great package!

@xrobin xrobin commented Apr 10, 2019

Thanks for your report!

The Obuchowski (2004) paper describes one-sided tests, in which case pROC returns the expected results:

> power.roc.test(auc=0.7, sig.level=0.05, power=0.9, kappa=1.0, alternative="one.sided")

     One ROC curve power calculation 

         ncases = 32.65397
      ncontrols = 32.65397
            auc = 0.7
      sig.level = 0.05
          power = 0.9

However, you are right that this looks weird. I will look into it.
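[Editor's note: both numbers in this thread can be reproduced from Obuchowski's (2004) single-curve formula. The sketch below is a minimal Python translation, assuming the paper's binormal variance approximation V(theta) = 0.0099 * exp(-a^2/2) * (6*a^2 + 16) with a = qnorm(theta) * sqrt(2), and kappa = 1 so that ncases = ncontrols.]

```python
from math import exp, sqrt
from statistics import NormalDist

def obuchowski_n(auc, sig_level, power, one_sided=True):
    """Sample size per group for testing a single AUC against 0.5
    (Obuchowski 2004 binormal variance approximation, kappa = 1)."""
    qnorm = NormalDist().inv_cdf
    a = qnorm(auc) * sqrt(2)
    # Variance function V(theta) = 0.0099 * exp(-a^2/2) * (6*a^2 + 16)
    v_null = 0.0099 * 16  # under H0, AUC = 0.5, so a = 0
    v_alt = 0.0099 * exp(-a**2 / 2) * (6 * a**2 + 16)
    z_alpha = qnorm(1 - (sig_level if one_sided else sig_level / 2))
    z_beta = qnorm(power)
    return (z_alpha * sqrt(v_null) + z_beta * sqrt(v_alt))**2 / (auc - 0.5)**2

print(round(obuchowski_n(0.7, 0.05, 0.9, one_sided=True), 2))   # 32.65, the one-sided value above
print(round(obuchowski_n(0.7, 0.05, 0.9, one_sided=False), 2))  # 40.21, the two-sided value reported
```

So the 40.21 from the original report is simply the two-sided answer to the same question, not a bug in the returned value.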

xrobin added a commit that referenced this issue Apr 10, 2019
@xrobin xrobin added this to To do in Improve pROC Nov 30, 2019
xrobin added a commit that referenced this issue Dec 1, 2019
@xrobin xrobin commented Dec 1, 2019

This is now fixed for the one-ROC-curve case. Note that the resulting values are unchanged and were already correct.

I still need to look into the two-ROC-curves case, where the same mistake apparently occurs.

xrobin added a commit that referenced this issue Dec 1, 2019
xrobin added a commit that referenced this issue Dec 1, 2019: …s (issue #50)
xrobin added a commit that referenced this issue Dec 1, 2019
xrobin added a commit that referenced this issue Dec 1, 2019: …arameters (issue #50)
@xrobin xrobin commented Dec 1, 2019

Fixed in the two-ROC-curves and list-of-parameters cases too. It was a good occasion to clean up the code at the same time.

Again, there is no difference in the output, but the code is clearer and makes more appropriate use of the distributions.

Thanks again for reporting this inconsistency and helping make pROC better.

@xrobin xrobin closed this Dec 1, 2019
Improve pROC automation moved this from To do to Done Dec 1, 2019
xrobin added a commit that referenced this issue Dec 1, 2019