Application of Machine Learning, AI and Data Mining methods, such as the YOLOv8 model and Convolutional Neural Networks (CNNs), to build a model capable of detecting tumours in brain CT scans.
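For reference, here is a minimal sketch of how a KerasCV YOLOv8 detector of this kind can be assembled. The backbone preset, the single-class setup and the loss choices are assumptions for illustration, not the project's exact configuration:

```python
import keras_cv

# Assumption: a single "tumour" class and a small COCO-pretrained backbone.
model = keras_cv.models.YOLOV8Detector(
    num_classes=1,
    bounding_box_format="xyxy",
    backbone=keras_cv.models.YOLOV8Backbone.from_preset("yolo_v8_s_backbone_coco"),
    fpn_depth=1,
)

# "ciou" is the Complete IoU box loss referred to in the results below.
model.compile(
    optimizer="adam",
    classification_loss="binary_crossentropy",
    box_loss="ciou",
)
```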
-
We developed the project with a public dataset hosted on Kaggle, available at the following link: Medical Image DataSet: Brain Tumor Detection;
-
We built the project on top of an existing Jupyter notebook, also publicly available on Kaggle: Brain Tumor Detection w/Keras YOLO V8;
-
Although the achieved results weren't fully satisfactory, we built a model whose CIoU loss was almost 3x lower than the original model's and whose mAP was almost 7x higher. The optimization tweaks also drastically reduced the training time (more than 6x faster);
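As a rough illustration of how such a comparison can be computed, here is a hypothetical evaluation helper built on KerasCV's COCO metrics; the function name and the "xyxy" box format are assumptions, not code from the project:

```python
import keras_cv

# Hypothetical helper: `model` is a trained detector and `eval_dataset` yields
# (images, ground_truth) batches, with boxes assumed to be in "xyxy" format.
def evaluate_map(model, eval_dataset):
    metrics = keras_cv.metrics.BoxCOCOMetrics(
        bounding_box_format="xyxy",
        evaluate_freq=1,
    )
    for images, ground_truth in eval_dataset:
        predictions = model.predict(images)
        metrics.update_state(ground_truth, predictions)
    # result() returns a dict of COCO metrics, including mean Average Precision.
    return metrics.result()
```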
-
If you want to see the deployed application, click below and feel free to test the models with your own instances and to explore a static dashboard about the dataset:
-
Python3 and pip package manager:
sudo apt install python3 python3-pip build-essential python3-dev
-
virtualenv tool:
pip install virtualenv
-
Libraries:
- Machine Learning and Data Mining: Keras, KerasTuner, TensorFlow, imbalanced-learn;
- Computer Vision: OpenCV, KerasCV;
- Data Analysis, Visualization and Manipulation: pandas, Streamlit, Plotly Express, Kaleido, seaborn, Matplotlib, NumPy;
- Others: PyCryptodome, Pillow, gdown and google-colab.
-
Environments: Jupyter.
In this section, you can see the detector GUI made with Streamlit.
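For context, here is a hypothetical sketch of what such a Streamlit detector page can look like; it is not the repository's actual 1_🏠_Home.py, and the model path, input size and loading details are assumptions:

```python
import keras
import keras_cv  # imported so Keras can deserialize KerasCV's custom layers
import numpy as np
import streamlit as st
from PIL import Image

st.title("Brain Tumour Detector")

uploaded = st.file_uploader("Upload a brain CT scan", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    image = Image.open(uploaded).convert("RGB").resize((640, 640))
    st.image(image, caption="Input scan")

    # Assumption: the app is run from src/ and the detector was saved in the
    # native Keras format (see models/ in the project tree below).
    model = keras.models.load_model("../models/base.keras", compile=False)
    batch = np.expand_dims(np.asarray(image, dtype="float32"), axis=0)
    st.write(model.predict(batch))
```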
In this section, you can follow detailed instructions for executing the project.
-
Clone the repository
git clone https://github.com/juliorodrigues07/tumour_detection.git
-
Enter the repository's directory
cd tumour_detection
-
Create a virtual environment
python3 -m venv .venv
-
Activate the virtual environment
source .venv/bin/activate
-
Install the dependencies
pip install -r requirements.txt
-
To launch the web application, you first need to be in the src directory:
streamlit run 1_🏠_Home.py
-
To view and run the notebooks online (Google Colaboratory), click on the following links:
-
To run the notebooks locally, execute the commands from within the notebooks directory, following this template:
jupyter notebook <file_name>.ipynb
-
EDA (Exploratory Data Analysis):
jupyter notebook 1_eda.ipynb
-
Data Mining:
jupyter notebook brain_tumor_detection_w_keras_yolo_v8.ipynb
-
.
├── README.md <- Project's documentation
├── requirements.txt <- File containing all the required dependencies to run the project
├── plots # Directory containing all the graph plots generated
├── assets # Directory containing images used in README.md and in the deployed app
├── datasets # Directory containing all used or generated datasets in the project
| ├── image_statistics.csv <- Statistical data about the dataset (std, mean, channels, ...)
| ├── labels.csv <- Tumour types and quantities data
| └── coords.csv <- Detections data (coordinates and area)
├── docs # Directory containing all the presentation slides about the project
| ├── Transparencies - Partial I.pdf
| ├── Transparencies - Partial II.pdf
| └── Transparencies - Final.pdf
├── models # Directory containing all generated models in the project
| ├── base.keras <- Trained with vanilla dataset
| ├── reduced.keras <- Trained with reduced dataset
| └── balanced.keras <- Trained with balanced dataset
├── notebooks # Directory containing project's main jupyter notebook
| ├── 1_eda.ipynb
| └── brain_tumor_detection_w_keras_yolo_v8.ipynb
└── src # Directory containing the web application
├── 1_🏠_Home.py <- Main page with the tumour detector
└── pages # Child pages directory
└── 2_📊_Static.py <- Script responsible for generating the static dashboard
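Below is a rough sketch of how the static dashboard page above can consume the CSVs in datasets/. The code is hypothetical (the actual 2_📊_Static.py may differ), and the relative path and plotted column are assumptions:

```python
import pandas as pd
import plotly.express as px
import streamlit as st

st.title("Dataset Dashboard")

# labels.csv holds tumour types and quantities (see the tree above);
# the relative path assumes the app is run from src/.
labels = pd.read_csv("../datasets/labels.csv")
st.dataframe(labels)

# Assumption: chart the distribution of the first column (e.g. tumour type).
fig = px.histogram(labels, x=labels.columns[0])
st.plotly_chart(fig)
```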
-
To uninstall all dependencies, run the following command:
pip uninstall -r requirements.txt -y
-
To deactivate the virtual environment, run the following command:
deactivate
-
To delete the virtual environment, run the following command:
rm -rf .venv