Medical-Image-Fusion-using-Convolutional-Autoencoder

Technologies Used: Python, Convolutional Neural Networks (CNNs), Autoencoders, Image Processing, Laplacian-Gaussian Concatenation, Attention Mechanisms.

This project develops a Convolutional Autoencoder for fusing multimodal neurological images. By integrating Laplacian-Gaussian Concatenation (LGCA) pooling with attention mechanisms, the model aims to preserve critical information and improve clarity in the fused output, in particular the tissue edges and fine details that matter for clinical interpretation. The model is implemented in Python using deep learning frameworks.
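
The exact layer configuration lives in the notebook; the sketch below only illustrates the general idea in PyTorch (the repository may use a different deep learning framework). The `LGCAPool` block is a hypothetical reading of Laplacian-Gaussian Concatenation pooling: it concatenates a Gaussian-smoothed (low-pass) response and a Laplacian (edge) response of the feature maps, reweights the concatenation with a channel-attention gate, and then downsamples. All names, kernel choices, and layer sizes here are illustrative, not taken from the repository.

```python
# Illustrative sketch only -- not the repository's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LGCAPool(nn.Module):
    """Hypothetical LGCA pooling: concatenate Gaussian (smooth) and Laplacian
    (edge) responses, reweight with channel attention, then downsample."""

    def __init__(self, channels):
        super().__init__()
        # Fixed 3x3 Gaussian and Laplacian kernels, applied per channel.
        gauss = torch.tensor([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
        lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
        self.register_buffer("gauss_k", gauss.expand(channels, 1, 3, 3).clone())
        self.register_buffer("lap_k", lap.expand(channels, 1, 3, 3).clone())
        self.channels = channels
        # Squeeze-and-excitation style channel attention over the concatenation.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 2 * channels, 1), nn.Sigmoid(),
        )
        self.reduce = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        low = F.conv2d(x, self.gauss_k, padding=1, groups=self.channels)   # low-pass
        edge = F.conv2d(x, self.lap_k, padding=1, groups=self.channels)    # edges
        cat = torch.cat([low, edge], dim=1)
        cat = cat * self.attn(cat)                  # channel-attention reweighting
        return F.max_pool2d(self.reduce(cat), 2)    # downsample by 2


class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder using LGCAPool in the encoder."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True), LGCAPool(32),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), LGCAPool(64),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    model = ConvAutoencoder()
    out = model(torch.rand(1, 1, 256, 256))   # grayscale 256x256 slice
    print(out.shape)                           # torch.Size([1, 1, 256, 256])
```

In this sketch, the Laplacian branch is what carries edge information through the downsampling step, which matches the stated goal of preserving tissue edges; the real notebook may realize this differently.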

The model is trained and tested on a custom dataset derived from the Harvard Medical School Whole Brain Atlas, consisting of CT-MR T2, SPECT-MR T2, and PET-MR T2 pairs of anatomical and functional images.
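
How slices are exported and paired depends on how the atlas data was downloaded; the snippet below is a minimal, hypothetical pairing scheme (the directory layout, filenames, and grayscale conversion are assumptions, not the repository's actual preprocessing).

```python
# Hypothetical pairing of co-registered anatomical/functional slices;
# directory layout and filenames are illustrative only.
from pathlib import Path

import numpy as np
from PIL import Image


def load_pairs(root, modality_pair=("CT", "MR_T2")):
    """Yield co-registered image pairs as float arrays in [0, 1],
    matched by identical slice filenames in two sibling folders."""
    a_dir, b_dir = Path(root) / modality_pair[0], Path(root) / modality_pair[1]
    for a_path in sorted(a_dir.glob("*.png")):
        b_path = b_dir / a_path.name          # same slice index in both folders
        if not b_path.exists():
            continue
        a = np.asarray(Image.open(a_path).convert("L"), dtype=np.float32) / 255.0
        b = np.asarray(Image.open(b_path).convert("L"), dtype=np.float32) / 255.0
        yield a, b


# Example: iterate over CT-MR T2 pairs (path is hypothetical).
# for ct, mr in load_pairs("data/whole_brain_atlas", ("CT", "MR_T2")):
#     ...
```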

Model Architecture

Model Architecture image

Image Fusion Framework

Fusion Framework Image
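
The fusion framework follows the usual encode, fuse, decode pattern. The sketch below assumes the hypothetical `ConvAutoencoder` from the architecture sketch above and a simple activity-based soft-attention fusion of encoder features; the actual fusion rule used in the notebook may differ.

```python
# Illustrative encode -> fuse -> decode step; the notebook's fusion rule
# (e.g. a learned attention map) may differ from this activity-based weighting.
import torch


@torch.no_grad()
def fuse(model, anatomical, functional):
    """Fuse two co-registered slices (tensors of shape [1, 1, H, W]) by
    encoding both, weighting each feature map by its relative activity
    (a soft attention over the two sources), and decoding the result."""
    fa = model.encoder(anatomical)
    fb = model.encoder(functional)
    # Per-pixel soft weights from feature activity (L1 norm over channels).
    wa = fa.abs().sum(dim=1, keepdim=True)
    wb = fb.abs().sum(dim=1, keepdim=True)
    weights = torch.softmax(torch.cat([wa, wb], dim=1), dim=1)
    fused = weights[:, 0:1] * fa + weights[:, 1:2] * fb
    return model.decoder(fused)


# Usage (with the hypothetical ConvAutoencoder defined above):
# fused_img = fuse(model, ct_tensor, mr_tensor)
```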
