
recall vs loc curves... should not they always be increasing or flat? #10

Open
timm opened this issue Dec 31, 2016 · 2 comments

Comments

timm (Collaborator) commented Dec 31, 2016

e.g.

[screenshot 2016-12-31 12 36 36: a recall-vs-LOC curve]
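For reference, here is a minimal sketch of how such a curve is usually built (all names and the `(loc, faults, score)` tuple layout are hypothetical assumptions, not from this repo). Because recall accumulates as more LOC is inspected, the curve is non-decreasing by construction, so any dip in a plot suggests a bug in how the curve was computed or drawn:

```python
def recall_vs_loc(files):
    """files: list of (loc, n_faults, score) tuples; returns (x, y) curve points.

    Files are inspected in descending score order; x is the cumulative
    fraction of LOC inspected, y the cumulative fraction of faults found.
    """
    ranked = sorted(files, key=lambda f: f[2], reverse=True)  # best-scored first
    total_loc = sum(f[0] for f in ranked)
    total_faults = sum(f[1] for f in ranked)
    x, y, loc_seen, faults_seen = [0.0], [0.0], 0, 0
    for loc, faults, _ in ranked:
        loc_seen += loc
        faults_seen += faults  # cumulative, so recall can only grow or stay flat
        x.append(loc_seen / total_loc)
        y.append(faults_seen / total_faults)
    return x, y
```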

timm (Collaborator, Author) commented Dec 31, 2016

FYI: if you want to report AUC, it is best to use the definitions from

Yibiao Yang, Yuming Zhou, Jinping Liu, Yangyang Zhao, Hongmin Lu, Lei Xu, Baowen Xu, and Hareton Leung. 2016. Effort-aware just-in-time defect prediction: simple unsupervised models could be better than supervised models. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2016). ACM, New York, NY, USA, 157-168. DOI: https://doi.org/10.1145/2950290.2950353

[screenshot 2016-12-31 12 41 38: AUC definitions from Yang et al. 2016]
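For what it's worth, here is a sketch of the area-under-curve computation (trapezoid rule) on the curve above, plus a Popt-style normalization against the best and worst orderings (sorting by fault density descending vs. ascending). This follows the general idea used in the effort-aware literature, but it is my assumption, not a transcription of Yang et al.'s exact definition, so check the paper before reporting numbers:

```python
def auc(x, y):
    """Trapezoid-rule area under a piecewise-linear curve."""
    return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2 for i in range(len(x) - 1))

def popt(files):
    """Popt-style normalized AUC: 0 = worst ordering, 1 = optimal ordering."""
    pred = auc(*recall_vs_loc(files))
    # optimal ordering ranks by fault density (faults per LOC), descending
    best = auc(*recall_vs_loc([(l, f, f / max(l, 1)) for l, f, _ in files]))
    # worst ordering is the reverse (negated density sorts ascending)
    worst = auc(*recall_vs_loc([(l, f, -f / max(l, 1)) for l, f, _ in files]))
    return (pred - worst) / (best - worst)
```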

timm (Collaborator, Author) commented Dec 31, 2016

I'm interested, of course, in AUC, but what Devanbu et al. do in http://dl.acm.org/citation.cfm?id=2486846 is report the recall at effort = 20%. This throws away much of the information in the AUC, but as an external measure that other people can understand, it is far more intuitive than AUC esoterica (see the sketch after the quote below). FYI, reporting at 20% maps back to some classic defect prediction work:

"The ability to predict which files in a large software system are most likely to contain the largest numbers of faults in the next release can be a very valuable asset. To accomplish this, a negative binomial regression model using information from previous releases has been developed and used to predict the numbers of faults for a large industrial inventory system. The files of each release were sorted in descending order based on the predicted number of faults and then the first 20% of the files were selected. This was done for each of fifteen consecutive releases, representing more than four years of field usage. The predictions were extremely accurate, correctly selecting files that contained between 71% and 92% of the faults, with the overall average being 83%. In addition, the same model was used on data for the same system's releases, but with all fault data prior to integration testing removed. The prediction was again very accurate, ranging from 71% to 93%, with the average being 84%. Predictions were made for a second system, and again the first 20% of files accounted for 83% of the identified faults. Finally, a highly simplified predictor was considered which correctly predicted 73% and 74% of the faults for the two systems."

  • From Thomas J. Ostrand, Elaine J. Weyuker, and Robert M. Bell. 2004. Where the bugs are. In Proceedings of the 2004 ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA '04). ACM, New York, NY, USA, 86-96. DOI: https://doi.org/10.1145/1007512.1007524
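A minimal sketch of that measure, reusing the hypothetical `recall_vs_loc` from above: walk the ranked files and report the recall reached once 20% of the total LOC has been inspected.

```python
def recall_at_effort(files, budget=0.20):
    """Fraction of total faults found after inspecting `budget` of total LOC."""
    x, y = recall_vs_loc(files)
    # y is non-decreasing, so the max recall within budget is the last point reached
    return max((yi for xi, yi in zip(x, y) if xi <= budget), default=0.0)
```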
