Classifying Pathological Heartbeats from ECG Signals

  1. Project Description
  2. Setup
  3. Data Configuration
  4. Execution
  5. Pre-training
  6. Baseline Individual Classifiers
  7. Fine-tuning Classifiers
  8. Team

1. Project Description

In this project, we train 1D Convolutional Neural Networks (CNNs) for binary classification of ECG beats into normal and abnormal categories. First, we pre-train a generic network on a collection of patients' ECGs sourced from the MIT-BIH Arrhythmia Database [1]. Next, we fine-tune the model for each patient separately. Finally, we evaluate the fine-tuned models against individual networks trained from scratch solely on the ECG data of a single patient, in order to assess the overall effectiveness of transfer learning and pre-trained knowledge for the given task.
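
As a rough illustration of the model family involved, the sketch below shows a minimal 1D CNN for binary beat classification in PyTorch. The block count, channel widths, kernel size, and window length are illustrative assumptions; the repository's actual architecture is configured through opts.py.

import torch
import torch.nn as nn

class ECGConvNet(nn.Module):
    """Minimal 1D CNN for binary ECG beat classification (illustrative sketch)."""
    def __init__(self, input_size=128, num_blocks=3, channels=32, kernel_size=5):
        super().__init__()
        layers, in_ch = [], 1  # single-lead ECG -> one input channel
        for _ in range(num_blocks):
            layers += [
                nn.Conv1d(in_ch, channels, kernel_size, padding=kernel_size // 2),
                nn.BatchNorm1d(channels),
                nn.ReLU(),
                nn.MaxPool1d(2),  # halve the temporal resolution
            ]
            in_ch = channels
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Linear(channels * (input_size // 2 ** num_blocks), 2)

    def forward(self, x):  # x: (batch, 1, input_size)
        return self.classifier(self.features(x).flatten(1))  # logits: (normal, abnormal)

model = ECGConvNet()
logits = model(torch.randn(4, 1, 128))  # four dummy beats of 128 samples each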

The current project was implemented in the context of the course "Machine Learning" taught by Prof. Herbert Jaeger at the University of Groningen. For a comprehensive overview of the methodology and final results, please refer to the Report.


2. Setup

1. We assume that Python 3 is already installed on the system. The code has been tested on Python 3.10, though it should also be compatible with earlier versions.

2. Clone this repository:

$ git clone https://github.com/ChryssaNab/ECG-Heartbeat-Classification.git
$ cd ECG-Heartbeat-Classification

3. Create a new Python environment and activate it:

$ python3 -m venv env
$ source env/bin/activate

4. Modify the requirements.txt file (an example file header is shown after this list):

If your machine does NOT support CUDA, add the following line at the top of the requirements.txt file:

--extra-index-url https://download.pytorch.org/whl/cpu

If your machine does support CUDA, add the following line instead, replacing 115 with the CUDA version your machine supports (e.g., cu118 for CUDA 11.8):

--extra-index-url https://download.pytorch.org/whl/cu115

5. Install necessary requirements:

$ pip install wheel
$ pip install -r requirements.txt
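
For reference (step 4 above), a CPU-only requirements.txt would then begin as follows; the packages listed are illustrative, not the repository's actual pinned list:

--extra-index-url https://download.pytorch.org/whl/cpu
torch
numpy
pandas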

3. Data Configuration

Download the dataset from https://www.kaggle.com/datasets/mondejar/mitbih-database and copy its contents into the parent directory under a folder named dataset/mitbih_database/.
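
Once the data is in place, a record can be inspected with a few lines of Python. This is a minimal sketch: the file names follow the Kaggle dataset's layout as we understand it (one CSV of samples plus one annotation text file per record) and should be verified against your download.

import pandas as pd

# Hypothetical record 100; adjust the paths to your local layout.
signal = pd.read_csv("dataset/mitbih_database/100.csv")  # sampled ECG leads
print(signal.columns.tolist(), len(signal))

# Beat annotations (R-peak positions and beat symbols) live in a separate text file.
with open("dataset/mitbih_database/100annotations.txt") as f:
    for line in list(f)[:5]:  # peek at the first few annotation lines
        print(line.rstrip())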


4. Execution

The primary execution script for the entire project is the main.py file within the src/ directory. The available arguments for configuring and training the models are specified in the opts.py script. To view usage information, run the following command:

$ python3 src/main.py -h

Running on cpu
usage: main.py [-h] [--data_path DATA_PATH] [--output_path OUTPUT_PATH] [--state STATE] [--selected_patients_fine_tuning SELECTED_PATIENTS_FINE_TUNING [SELECTED_PATIENTS_FINE_TUNING ...]]
               [--input_size INPUT_SIZE] [--pretrain_path PRETRAIN_PATH] [--num_blocks NUM_BLOCKS] [--block_channels BLOCK_CHANNELS] [--kernel_size KERNEL_SIZE] [--optimizer OPTIMIZER]
               [--lr_scheduler LR_SCHEDULER] [--weight_decay WEIGHT_DECAY] [--early_stopping] [--n_epochs N_EPOCHS] [--batch_size BATCH_SIZE] [--learning_rate LEARNING_RATE]
               [--weighted_sampling WEIGHTED_SAMPLING]

options:
  -h, --help            show this help message and exit
  --data_path DATA_PATH
                        The data directory path under which the dataset lies.
  --output_path OUTPUT_PATH
                        The output directory path where the checkpoints and log files are created.
  --state STATE         (pre-training | baseline individuals | fine-tuning individuals)
  --selected_patients_fine_tuning SELECTED_PATIENTS_FINE_TUNING [SELECTED_PATIENTS_FINE_TUNING ...]
                        The list of the selected patients earmarked for the experiments.
                        Only applies to the baseline individual models and fine-tuning models that target certain patients.
  --input_size INPUT_SIZE
                        The size of each pulse-width window
  --pretrain_path PRETRAIN_PATH
                        The pre-trained model checkpoint (.pth)
  --num_blocks NUM_BLOCKS
                        Number of blocks
  --block_channels BLOCK_CHANNELS
                        Block channels
  --kernel_size KERNEL_SIZE
                        The convolution kernel size of CNN
  --optimizer OPTIMIZER
                        (Adam | SGD)
  --lr_scheduler LR_SCHEDULER
                        (reducelr | cycliclr | cosAnnealing)
  --weight_decay WEIGHT_DECAY
                        Weight decay hyperparameter value of optimizer
  --early_stopping      Set to TRUE only for baseline or fine-tuning mode.
  --n_epochs N_EPOCHS   The maximum number of total epochs to run.
  --batch_size BATCH_SIZE
                        Batch size used during pre-training.
  --learning_rate LEARNING_RATE
                        Initial learning rate
  --weighted_sampling WEIGHTED_SAMPLING
                        Enable weighted sampling during training.

Note that for the individuals and fine-tuning phases, the parameters --batch_size, --learning_rate, and --weighted_sampling are determined through a grid-search approach for each patient separately.
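
For instance, a hypothetical run that overrides a few of these flags on the command line could look as follows; the values are illustrative, not the tuned defaults:

$ python3 src/main.py --state pre-training --num_blocks 4 --kernel_size 7 --batch_size 32 --learning_rate 0.001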


5. Pre-training

To perform supervised pre-training on all patients using the default settings, run the following command:

$ python3 src/main.py --state pre-training

Executing this command initiates the pre-training phase, using the hyperparameters specified in the opts.py script. As a result, a folder named output/ is generated within the parent directory, containing model state checkpoints for each epoch and three log files with metrics such as loss and accuracy for the training, validation, and test sets. The output folder location can be specified using the --output_path flag, while other hyperparameters can be adjusted in opts.py.
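
The sketch below shows one way to inspect a saved checkpoint. The file name is hypothetical, and the assumption that the file stores a state_dict (possibly wrapped in a dict) is ours; adapt it to what the training code actually writes.

import torch

# Hypothetical epoch-10 checkpoint under output/.
ckpt = torch.load("output/save_10.pth", map_location="cpu")

# Depending on how it was saved, this is either the state_dict itself
# or a dict wrapping one under a "state_dict" key.
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))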


6. Baseline Individual Classifiers

For our baseline models, we train a CNN from scratch for each patient individually, in a fully supervised fashion and without integrating any pre-trained knowledge. To initiate this process, execute the following command:

$ python3 src/main.py --state individuals

Executing this command initiates experiments where we individually train a CNN model for each selected patient, employing the optimal hyperparameter set determined through grid search (sketched below). Consequently, a folder named individuals/ is generated within the existing output/ directory, with one sub-folder per patient containing the same files as those described in the pre-training section. The output folder location can be specified using the --output_path flag.
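
As a rough sketch of what such a per-patient grid search involves (the candidate values and the train_and_validate helper are hypothetical, not the repository's code):

from itertools import product

# Hypothetical search space over the per-patient tuned parameters.
batch_sizes = [16, 32, 64]
learning_rates = [1e-2, 1e-3, 1e-4]
weighted_sampling = [True, False]

def train_and_validate(bs, lr, ws):
    # Placeholder for a full training run on one patient's beats;
    # the real project would return the resulting validation loss here.
    return 0.0

best = None
for bs, lr, ws in product(batch_sizes, learning_rates, weighted_sampling):
    val_loss = train_and_validate(bs, lr, ws)
    if best is None or val_loss < best[0]:
        best = (val_loss, {"batch_size": bs, "learning_rate": lr, "weighted_sampling": ws})
print(best)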


7. Fine-tuning Classifiers

To leverage transfer learning and fine-tune the model for each individual patient, execute the following command:

$ python3 src/main.py --state fine_tuning --pretrain_path ./output/save_<x>.pth

In this command, the --pretrain_path argument points to the model checkpoint to be fine-tuned. We retain the checkpoint from the epoch with the lowest validation loss during pre-training; replace <x> with the corresponding epoch number.

Executing this command initiates experiments where we optimize the pre-trained CNN model for each patient within our curated subset separately (see the sketch below). This process generates the fine_tuning/ folder within the existing output/ directory, with one folder per patient containing the same files as those described in the pre-training section. You can designate the output folder using the --output_path flag.
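
A minimal sketch of the transfer step itself, reusing the ECGConvNet sketch from the Project Description; the checkpoint name, the dummy data loader, and the choice of a small Adam learning rate are all assumptions, not the repository's exact recipe:

import torch
import torch.nn as nn

model = ECGConvNet()  # illustrative architecture from the sketch above
model.load_state_dict(torch.load("output/save_10.pth", map_location="cpu"))  # hypothetical file

# Dummy stand-in for a DataLoader over a single patient's beats.
patient_loader = [(torch.randn(8, 1, 128), torch.randint(0, 2, (8,)))]

# Fine-tune all layers at a small learning rate on that patient's data.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for x, y in patient_loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()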


References

[1] G. B. Moody and R. G. Mark (2001). The impact of the MIT-BIH Arrhythmia Database. IEEE Engineering in Medicine and Biology Magazine, 20(3), 45-50. DOI: 10.1109/51.932724.

