The aim of this project is to generate medical reports from X-ray images. This matters most where qualified doctors are scarce: a lab technician in a remote area can take an X-ray and send it to doctors anywhere in the world. Automatic report generation would bring quality healthcare at lower cost to parts of the world that doctors can take weeks to reach. This is my final-year major project.
Data Source: http://academictorrents.com/details/66450ba52ba3f83fbf82ef9c91f2bde0e845aba9
- References: papers and references used
- Workflow: notebooks used for experiments and tuning before reaching the final model
- Training: training notebooks and evaluation using BLEU and ROUGE scores
- losses_plots: training and validation loss plots
- Outputs: sample outputs generated from the model
- Checkpoints (checkpoints_2/train): encoder and decoder layer weights
- Tokenizer: a .pickle file containing the key-value pairs of the tokenizer's vocabulary
Use pretrained weights from the Inception V3 network. Remove the final classification layer and extract features from the last convolutional layer, since this model uses attention over spatial features. Transform all input images using these weights.
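The extraction step above can be sketched as follows (a minimal sketch assuming TensorFlow 2 / tf.keras; the helper names are illustrative, not from this repo):

```python
import tensorflow as tf

def build_feature_extractor(weights="imagenet"):
    # InceptionV3 without the top classification layer; for 299x299 inputs
    # the output is the last 8x8x2048 convolutional feature map.
    base = tf.keras.applications.InceptionV3(include_top=False, weights=weights)
    return tf.keras.Model(base.input, base.output)

def extract_features(model, images):
    # images: float tensor (batch, 299, 299, 3) with values in [0, 255]
    x = tf.keras.applications.inception_v3.preprocess_input(images)
    feats = model(x)                                   # (batch, 8, 8, 2048)
    # flatten the 8x8 spatial grid into 64 location vectors for attention
    return tf.reshape(feats, (tf.shape(feats)[0], -1, feats.shape[-1]))
```

In practice the features for every image are computed once and saved to disk, so training does not rerun the CNN on each epoch.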
Define an encoder layer consisting of a single fully connected layer. Pass the pretrained Inception V3 feature vectors through it to featurize the images at the encoder stage.
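A sketch of such an encoder under the same tf.keras assumptions (the class name and embedding size are illustrative):

```python
import tensorflow as tf

class CNN_Encoder(tf.keras.Model):
    """Projects each of the 64 InceptionV3 feature vectors (2048-d)
    into the embedding space with one fully connected layer."""
    def __init__(self, embedding_dim):
        super().__init__()
        self.fc = tf.keras.layers.Dense(embedding_dim)

    def call(self, x):
        # x: (batch, 64, 2048) -> (batch, 64, embedding_dim)
        return tf.nn.relu(self.fc(x))
```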
Now for the impressions: tokenize the text data, keep only the top-k words and replace all other words with an `<unk>` token, then pad every sequence to the maximum length.
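A sketch of this preprocessing with tf.keras (the `top_k` value and the sample impressions are illustrative; note the filter string deliberately keeps `<` and `>` so the special tokens survive):

```python
import tensorflow as tf

top_k = 5000  # assumed vocabulary cap; tune to your corpus
tokenizer = tf.keras.preprocessing.text.Tokenizer(
    num_words=top_k, oov_token="<unk>",
    filters='!"#$%&()*+.,-/:;=?@[\\]^_`{|}~ ')

impressions = ["<start> no acute cardiopulmonary abnormality <end>",
               "<start> heart size within normal limits <end>"]
tokenizer.fit_on_texts(impressions)
sequences = tokenizer.texts_to_sequences(impressions)
# pad every report to the length of the longest one
padded = tf.keras.preprocessing.sequence.pad_sequences(sequences, padding='post')
```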
Start: use the feature vectors obtained by transforming all images with the Inception V3 weights and pass them through the encoder. Define an LSTM/GRU (not bidirectional) decoder; at each step, this decoder attends over the image features to predict the next word.
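One way to realise this decoder is Bahdanau-style additive attention feeding a GRU, sketched below under the same tf.keras assumptions (layer sizes and class names are illustrative):

```python
import tensorflow as tf

class BahdanauAttention(tf.keras.Model):
    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)
        self.W2 = tf.keras.layers.Dense(units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, features, hidden):
        # features: (batch, 64, emb_dim); hidden: (batch, units)
        hidden_t = tf.expand_dims(hidden, 1)
        score = tf.nn.tanh(self.W1(features) + self.W2(hidden_t))
        weights = tf.nn.softmax(self.V(score), axis=1)   # (batch, 64, 1)
        context = tf.reduce_sum(weights * features, axis=1)
        return context, weights

class RNN_Decoder(tf.keras.Model):
    def __init__(self, embedding_dim, units, vocab_size):
        super().__init__()
        self.units = units
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = tf.keras.layers.GRU(units, return_sequences=True,
                                       return_state=True,
                                       recurrent_initializer='glorot_uniform')
        self.fc1 = tf.keras.layers.Dense(units)
        self.fc2 = tf.keras.layers.Dense(vocab_size)
        self.attention = BahdanauAttention(units)

    def call(self, x, features, hidden):
        # attend over the 64 image locations given the previous hidden state
        context, attn = self.attention(features, hidden)
        x = self.embedding(x)                            # (batch, 1, emb_dim)
        x = tf.concat([tf.expand_dims(context, 1), x], axis=-1)
        output, state = self.gru(x)
        x = tf.reshape(self.fc1(output), (-1, self.units))
        return self.fc2(x), state, attn                  # logits over vocab

    def reset_state(self, batch_size):
        return tf.zeros((batch_size, self.units))
```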
Train: to train the model, pass the saved image vectors through the encoder. Pass the encoder output, the hidden state, and the `<start>` token to the decoder network. The decoder's hidden state is passed back into the model, and its output is used to calculate the loss. Then pass the target word as the next input to the decoder. It is worth noting that we feed the ground-truth word, not the predicted one, as the next input; this is known as teacher forcing. Finally, calculate the gradients and backpropagate.
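The training step above can be sketched as follows (assuming encoder/decoder objects with the interfaces described in this README; padding id 0 is masked out of the loss, and `start_id` is the token id of `<start>`):

```python
import tensorflow as tf

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')

def loss_function(real, pred):
    # mask out padding (token id 0) so padded positions add no loss
    mask = tf.cast(tf.not_equal(real, 0), tf.float32)
    return tf.reduce_mean(loss_object(real, pred) * mask)

def train_step(img_features, target, encoder, decoder, optimizer, start_id):
    # target: (batch, max_len) token ids, each sequence starting with <start>
    hidden = decoder.reset_state(batch_size=target.shape[0])
    dec_input = tf.expand_dims([start_id] * target.shape[0], 1)
    loss = 0.0
    with tf.GradientTape() as tape:
        features = encoder(img_features)
        for i in range(1, target.shape[1]):
            predictions, hidden, _ = decoder(dec_input, features, hidden)
            loss += loss_function(target[:, i], predictions)
            # teacher forcing: feed the ground-truth word, not the prediction
            dec_input = tf.expand_dims(target[:, i], 1)
    trainable = encoder.trainable_variables + decoder.trainable_variables
    grads = tape.gradient(loss, trainable)
    optimizer.apply_gradients(zip(grads, trainable))
    return loss / float(target.shape[1] - 1)
```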
Generation works like the training step above, except that the previous prediction is passed as input to the next time step of the decoder. Stop at the `<end>` token. Store the attention weights at every timestep while generating the output.
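A greedy-decoding sketch of this loop (same assumed interfaces as above; `max_len` is an illustrative cap on report length):

```python
import tensorflow as tf

def generate_report(img_features, encoder, decoder, tokenizer, max_len=100):
    # At inference the previous *prediction* is fed back as the next input,
    # unlike teacher forcing at training time.
    hidden = decoder.reset_state(batch_size=1)
    features = encoder(img_features)                    # (1, 64, emb_dim)
    dec_input = tf.expand_dims([tokenizer.word_index['<start>']], 1)
    result, attention_plot = [], []
    for _ in range(max_len):
        preds, hidden, attn = decoder(dec_input, features, hidden)
        # keep the attention weights of this step for visualisation
        attention_plot.append(tf.reshape(attn, (-1,)).numpy())
        pred_id = int(tf.argmax(preds[0]))
        word = tokenizer.index_word.get(pred_id, '<unk>')
        if word == '<end>':
            break
        result.append(word)
        dec_input = tf.expand_dims([pred_id], 1)
    return result, attention_plot
```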
Plot the attention weights to show which part of the image the model focuses on at every timestep of generation.
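This visualisation can be sketched with matplotlib (an assumed helper, not from this repo: each stored 64-d weight vector is reshaped to the 8x8 grid and overlaid on the X-ray, one panel per generated word):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless-safe backend; drop this in a notebook
import matplotlib.pyplot as plt

def plot_attention(image, result, attention_plot):
    # image: (H, W) or (H, W, 3) array; result: list of generated words;
    # attention_plot: one 64-d attention vector per generated word.
    fig = plt.figure(figsize=(10, 10))
    rows = (len(result) + 1) // 2
    for i, word in enumerate(result):
        weights = np.resize(attention_plot[i], (8, 8))
        ax = fig.add_subplot(rows, 2, i + 1)
        ax.set_title(word)
        ax.imshow(image, cmap='gray')
        # stretch the 8x8 attention map over the full image
        ax.imshow(weights, cmap='jet', alpha=0.5,
                  extent=(0, image.shape[1], image.shape[0], 0))
    plt.tight_layout()
    return fig
```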
- https://towardsdatascience.com/deep-learning-for-detecting-pneumonia-from-x-ray-images-fc9a3d9fdba8
- https://www.youtube.com/watch?v=MgrTRK5bbsg&list=PLQY2H8rRoyvxcmHHRftsuiO1GyinVAwUg&index=21&t=0s
- https://conferences.oreilly.com/tensorflow/tf-ca-2019/public/schedule/proceedings
- https://github.com/wangleihitcs/Papers/tree/master/medical%20report%20generation
- https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8867873
- https://github.com/zhjohnchan/awesome-radiology-report-generation
- https://github.com/omar-mohamed/X-Ray-Report-Generation/tree/master
- https://gist.github.com/UdiBhaskar/9854346018d151d38e6772cbf8f42bba
- https://gist.github.com/UdiBhaskar/070e666dbafbe35f011528b748b7d4b0
- https://www.appliedaicourse.com/course/11/Applied-Machine-learning-course