Deep Learning at Chest Radiography: Automated Classification of Pulmonary Tuberculosis by Using Convolutional Neural Networks #366

Open
enricoferrero opened this issue Apr 30, 2017 · 3 comments

@enricoferrero
Contributor

enricoferrero commented Apr 30, 2017

http://dx.doi.org/10.1148/radiol.2017162326

Purpose
To evaluate the efficacy of deep convolutional neural networks (DCNNs) for detecting tuberculosis (TB) on chest radiographs.
Materials and Methods
Four deidentified HIPAA-compliant datasets, exempted from review by the institutional review board and consisting of 1007 posteroanterior chest radiographs in total, were used in this study. The datasets were split into training (68.0%), validation (17.1%), and test (14.9%) sets. Two different DCNNs, AlexNet and GoogLeNet, were used to classify the images as having manifestations of pulmonary TB or as healthy. Both untrained networks and networks pretrained on ImageNet were used, as was dataset augmentation with multiple preprocessing techniques. Ensembles were built from the best-performing algorithms. For cases where the classifiers were in disagreement, an independent board-certified cardiothoracic radiologist blindly interpreted the images to evaluate a potential radiologist-augmented workflow. Receiver operating characteristic curves and areas under the curve (AUCs) were used to assess model performance, with the DeLong method used for statistical comparison of the curves.
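
For anyone who wants to see the fine-tuning step concretely, here is a minimal sketch of the pretrained-network approach the abstract describes, written with PyTorch/torchvision as an assumption (the paper does not release code, and the layer choices, transforms, and hyperparameters below are illustrative, not the authors'):

```python
# Minimal transfer-learning sketch (PyTorch/torchvision assumed; not the authors' code).
import torch
import torch.nn as nn
from torchvision import models, transforms

# Start from ImageNet weights; the paper also evaluated randomly initialized ("untrained") networks.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Replace the 1000-class ImageNet head with a 2-class (TB vs. healthy) output layer.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

# Preprocessing/augmentation roughly in the spirit of the abstract; the exact transforms are assumed.
train_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_one_epoch(loader):
    """One pass over the training radiographs; `loader` yields (image batch, 0/1 label batch)."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The same replacement-of-the-final-layer idea applies to GoogLeNet (its classification head is `model.fc` in torchvision).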
Results
The best-performing classifier, an ensemble of the AlexNet and GoogLeNet DCNNs, had an AUC of 0.99. The AUCs of the pretrained models were greater than those of the untrained models (P < .001). Augmenting the dataset further increased accuracy (P = .03 for AlexNet and P = .02 for GoogLeNet). The DCNNs disagreed on 13 of the 150 test cases; these were blindly reviewed by a cardiothoracic radiologist, who correctly interpreted all 13 (100%). This radiologist-augmented approach resulted in a sensitivity of 97.3% and a specificity of 100%.
Conclusion
Deep learning with DCNNs can accurately classify TB at chest radiography with an AUC of 0.99. A radiologist-augmented approach for cases where there was disagreement among the classifiers further improved accuracy.
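
The ensembling and radiologist-augmented steps can also be sketched in a few lines: average the two networks' predicted TB probabilities, score the result with an ROC AUC via scikit-learn, and flag disagreements for blinded review. The prediction arrays below are hypothetical placeholders, not data from the paper, and the DeLong comparison reported in the abstract is omitted because it is not part of scikit-learn.

```python
# Ensemble-by-averaging sketch (scikit-learn assumed; arrays are hypothetical placeholders).
import numpy as np
from sklearn.metrics import roc_auc_score

p_alexnet = np.array([0.9, 0.2, 0.7, 0.4])    # AlexNet's predicted probability of TB per test image
p_googlenet = np.array([0.8, 0.1, 0.9, 0.6])  # GoogLeNet's predicted probability of TB per test image
y_true = np.array([1, 0, 1, 0])               # ground truth: 1 = TB, 0 = healthy

# Simple ensemble: average the two classifiers' probabilities, then score with ROC AUC.
p_ensemble = (p_alexnet + p_googlenet) / 2
print("Ensemble AUC:", roc_auc_score(y_true, p_ensemble))

# Radiologist-augmented workflow: flag cases where the two networks disagree at a 0.5
# threshold and send only those for blinded radiologist review.
disagree = (p_alexnet >= 0.5) != (p_googlenet >= 0.5)
print("Indices sent to the radiologist:", np.where(disagree)[0])
```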

This just popped up in my Twitter feed and I didn't find it in the issues - not sure if it's already been discussed? Looks interesting.

@agitter
Collaborator

agitter commented Apr 30, 2017

I updated the issue to include the abstract.

There have been a lot of results in medical imaging since our first draft of the Categorize section. Ideally we would update this section. I don't think we've discussed papers like #151 and #207, based on a quick search for the first authors' last names.

@alxndrkalinin
Contributor

@agitter I've been busy over the last few days, but I am drafting updates for both imaging in the Categorize section and morphological phenotypes in the Study section. I will try to submit PRs by the end of this week.

@agitter
Collaborator

agitter commented May 3, 2017

@alxndrkalinin Great, that timing should work. If I can pull together the last few blocks of text (abstract, Study/Treat intros) in time, we'd like to ask all co-authors to review and approve in the next week or two.

I can update the conclusions as needed after you discuss these papers.

agitter pushed a commit to agitter/deep-review that referenced this issue Nov 10, 2020
merges manubot/rootstock#366

clarifies that
1. OWNER can be an organization!
2. no additional configuration is needed for github actions.