Ask a Clinician! (add a question, get points) #230

Closed
pjbull opened this Issue Nov 17, 2017 · 12 comments

@pjbull
Contributor

pjbull commented Nov 17, 2017

This project is about making something useful for clinicians. This is your chance to step back and get input from radiologists working on the front lines of early detection. We would love to hear what you are wondering about in terms of the workflow at the clinic, and where user input can address questions that come up for you as you work on the application.

  • What are your questions about a radiologist's process?
  • What are your questions about what information is most valuable to see?
  • What are your questions about how the algorithms will be used?
  • What are your questions about how the interface should work?

The team at ALCF will be coordinating responses to submitted questions through their network of clinicians, researchers, and patients.

Submit a question by commenting directly on this issue. You'll earn 2 community points for each of up to three original, substantive questions.

@pjbull pjbull added this to the 2-feature-building milestone Nov 17, 2017

@pjbull pjbull changed the title Ask a Clinician! (add a question, get some points) Ask a Clinician! (add a question, get points) Nov 17, 2017

@louisgv

Contributor

louisgv commented Nov 18, 2017

  1. How often do radiologists spot errors in a patient's files/data, and what kind of tools do they currently use to validate the data and/or fix errors? What kinds/pieces of data are most prone to error? (Looking at the data provided in the RSNA specifically.)

  2. In the Report/Export view, would clinicians prefer to have as much data crammed into the viewport as possible (for quick scanning, for example), in a way similar to the infographic report example, or would they prefer a spacious line-by-line document format example? (Specifically talking about the RSNA standard.)

  3. Is there a process to update a patient's record once it has been saved? If not, how long does it take to update the patient record? If it is instant, how long does it take to validate that the new data is correct? And how long does it take until the patient's treatment is updated according to their new status?

NOTE: The 2nd question specifically targets digital viewing. The exported file will be formatted to follow the formal convention.

@WGierke

Contributor

WGierke commented Nov 18, 2017

I have some pretty obvious questions:

  1. If you could wish for a piece of software that could help you detect lung cancer nodules, what would be its most useful features, and how would it be different from any possible "competitors" that already exist?

  2. What would be the most important properties such software has to fulfill for you to use it on a regular basis (intuitive user interface, responsiveness, high accuracy, ...)?

@musale

Contributor

musale commented Nov 18, 2017

I have some here too:

  1. What would you like improved or changed in your current workflow using tools from a standard PACS software package, compared with the workflow in the concept-to-clinic project?

  2. In the lung cancer screening report, is there information missing from the standard template that you would want in the report generated by the concept-to-clinic project (or information you would want omitted)? If there is, which info and why?

  3. If the tool being built allowed results from multiple algorithms, with reports based on a given algorithm, how would that impact the final decision a clinician makes about the cancer detection?

@isms

Contributor

isms commented Dec 20, 2017

@louisgv @WGierke @musale Thanks for the questions! We'll pass them along and surface interesting responses when we get them.

@hengrumay


hengrumay commented Dec 20, 2017

I wonder if clinicians would appreciate having

  1. an interactive tool that allows a feedback cycle, e.g. to indicate their hypothesis about a potential mass detection, the procedures intended for the patient, and the outcome, such that all of this information could update the detection algorithm(s) or, at a minimum, document any outlier situations.
  2. a tool that allows comparison of the current patient's CT scan info, e.g. with various (coronal/sagittal/axial) views zoomed to the possible detection location(s), against data averaged across some 'normalized' healthy population at similar location(s) -- would that be helpful?

Additionally, I wonder what general heuristics radiologists employ when they scan the images for potential anomalies -- e.g. is it a top-down approach based on medical notes? If so, perhaps the algorithms might need to be tuned to sift out some of this information.

@lamby

Contributor

lamby commented Dec 22, 2017

@hengrumay (I'd love to award you some points for your contribution but you don't seem to have signed up to the competition!)

@hengrumay


hengrumay commented Dec 23, 2017

@lamby I just signed up -- I hope it got through. Either way no worries. Just trying to be helpful! Thanks!

@isms isms modified the milestones: 2-feature-building, 3-packaging Jan 5, 2018

@vessemer

Contributor

vessemer commented Jan 23, 2018

Most CAD systems in clinical use today have their internal threshold set to operate somewhere between 1 to 4 false positives per scan on average. Some systems allow the user to vary the threshold. To make the task more challenging, we included low false positive rates in our evaluation. This determines if a system can also identify a significant percentage of nodules with very few false alarms, as might be needed for CAD algorithms that operate more or less autonomously.

  • Based on the quote above from the LUNA16 evaluation page, are there any changes in preference, i.e. what number of false positives per scan (FPPS) should be treated as appropriate? I also want to point out that sensitivities corresponding to lower FPPS are less stable. This can be observed by adding trivial test-time augmentation (TTA), e.g. a flip along some axis (a minimal sketch of this is included at the end of this comment). This trick raises the tail and also reduces the variance of the FROC curve without any other manipulation of the algorithm itself. The following plots are without TTA and with TTA, respectively:

[FROC plot images: froc_noduleresnet, froc_3dlrcnn]

  • It might be helpful (for algorithm training as well) to acquire nodule types along with diameters and centroid coordinates. Here I mean the following types: juxtapleural, juxtavascular, isolated.

[Screenshot: examples of the nodule types]

  • While we inevitably face false positive cases, not all of them are obviously non-pathological errors, at least to me :) Would it be worth trying to cluster them out?
    Random examples of false positives:

[Screenshot: random false positive examples]
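For concreteness, here is a minimal sketch of the flip-based TTA mentioned above. The `predict_fn` callable is a hypothetical stand-in for whatever detection model produces a probability map, not an actual concept-to-clinic API; predictions on flipped volumes are flipped back and averaged.

```python
import numpy as np

def predict_with_flip_tta(predict_fn, volume, axes=(0, 1, 2)):
    """Average predictions over axis flips (simple test-time augmentation).

    predict_fn: callable mapping a 3D CT volume (numpy array) to a
        probability map of the same shape (hypothetical model interface).
    volume: the input CT scan as a numpy array.
    axes: which axes to flip along for augmentation.
    """
    predictions = [predict_fn(volume)]
    for axis in axes:
        flipped = np.flip(volume, axis=axis)
        # Predict on the flipped volume, then flip the prediction back
        # so it aligns with the original orientation before averaging.
        predictions.append(np.flip(predict_fn(flipped), axis=axis))
    return np.mean(predictions, axis=0)
```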

@swarm-ai

Contributor

swarm-ai commented Jan 24, 2018

A couple of questions to inform the work to help radiologists and patients with better software:

  1. Given that search, detection, and classification of lung nodules are one part of a pipeline of clinical tasks in the radiological evaluation of at-risk lung cancer patients’ chest CT scans, what are 3 key attributes that you would favor in a lung nodule diagnosis system that integrates with your clinical software and your related clinical workflows?
    For example: high accuracy (AUCROC), speed, operational software cost, ease of use, a description of the classification rationale for each read, incorporation of additional clinical tasks beyond detection/diagnosis, etc.
  2. What are the key challenges you see to implementing a lung nodule diagnosis system in an actual clinical environment?
    Culture? System cost? Accuracy? Quality of scientific data or studies? Something else?
  3. What type of data and evidence would you require in order to actually use a lung nodule diagnosis program in your clinical practice - a prospective clinical trial, a minimum level of accuracy, etc.?
@eelvira

Contributor

eelvira commented Jan 25, 2018

  1. Does it make sense to save false positives as well, so their locations can be checked in the patient's next CT?
  2. What is the first thing radiologists notice if the pathologies are barely noticeable and the algorithm did not mark these areas as probably pathological?
  3. Where is the program planned to run: directly on doctors' computers or remotely on a server? Is it worth implementing algorithms with worse performance if they are also less resource-consuming?
@kyounis


kyounis commented Jul 14, 2018

Hi,
How can I sign up for the competition?

@isms

Contributor

isms commented Jul 14, 2018

@kyounis The competition is no longer running.

@isms isms closed this Jul 14, 2018

@isms isms unassigned lamby Oct 17, 2018
