Technologies Used: Python, Convolutional Neural Networks (CNNs), Autoencoders, Image Processing, Laplacian-Gaussian Concatenation, Attention Mechanisms.
This project develops a Convolutional Autoencoder model designed to enhance the fusion of multimodal neurological images. By integrating Laplacian-Gaussian Concatenation (LGCA) pooling with attention mechanisms, the model preserves critical structural information and improves the clarity of the fused output. Implemented in Python using deep learning frameworks, the project targets medical image processing, with a particular focus on retaining tissue edges and fine details essential for clinical interpretation.
The model is trained and tested on a custom dataset generated from Harvard Medical School's Whole Brain Atlas, consisting of CT-MR T2, SPECT-MR T2, and PET-MR T2 pairs of anatomical and functional images.
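The fusion pipeline described above can be illustrated with a minimal PyTorch sketch. This is not the project's actual implementation: the kernel sizes, channel counts, and the squeeze-and-excitation style channel attention are assumptions, and `LGCAPool` is a hypothetical reading of "Laplacian-Gaussian Concatenation pooling" in which a Gaussian-smoothed (low-frequency) branch and a Laplacian (edge) branch are pooled separately, concatenated, and reweighted by attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def _fixed_kernel(weights, channels):
    """Expand a 3x3 kernel into a depthwise conv weight of shape (C, 1, 3, 3)."""
    k = torch.tensor(weights, dtype=torch.float32).view(1, 1, 3, 3)
    return k.repeat(channels, 1, 1, 1)

class LGCAPool(nn.Module):
    """Hypothetical LGCA pooling: Gaussian (smooth) and Laplacian (edge)
    responses are pooled separately, concatenated along channels, and
    reweighted by a squeeze-and-excitation style channel attention."""
    def __init__(self, channels):
        super().__init__()
        gauss = [[1/16, 2/16, 1/16], [2/16, 4/16, 2/16], [1/16, 2/16, 1/16]]
        lap = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
        self.register_buffer("gauss", _fixed_kernel(gauss, channels))
        self.register_buffer("lap", _fixed_kernel(lap, channels))
        self.channels = channels
        self.attn = nn.Sequential(              # channel attention over 2C maps
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // 2, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 2 * channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        g = F.conv2d(x, self.gauss, padding=1, groups=self.channels)
        l = F.conv2d(x, self.lap, padding=1, groups=self.channels)
        g = F.avg_pool2d(g, 2)          # low-frequency branch: average pooling
        l = F.max_pool2d(l, 2)          # edge branch: keep strong responses
        y = torch.cat([g, l], dim=1)    # (N, 2C, H/2, W/2)
        return y * self.attn(y)         # attention-reweighted concatenation

class FusionAutoencoder(nn.Module):
    """Toy convolutional autoencoder: two single-channel modalities in,
    one fused single-channel image out."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.pool = LGCAPool(ch)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, a, b):
        x = torch.cat([a, b], dim=1)    # stack the two input modalities
        return self.dec(self.pool(self.enc(x)))

model = FusionAutoencoder()
ct = torch.rand(1, 1, 64, 64)   # placeholder anatomical slice (e.g. CT)
mr = torch.rand(1, 1, 64, 64)   # placeholder functional/MR T2 slice
fused = model(ct, mr)
print(fused.shape)              # torch.Size([1, 1, 64, 64])
```

In practice the edge-preserving behaviour comes from carrying the Laplacian branch through pooling alongside the smoothed branch, so downsampling does not discard high-frequency tissue boundaries.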