In this work, we investigate federated learning (FL) for classifying age-related macular degeneration (AMD) from optical coherence tomography (OCT) images. Using both residual network and vision transformer encoders, we focus on the binary classification of normal vs. AMD. Because data distributions differ across institutions, we integrate four domain adaptation techniques to mitigate domain shift.
Our findings show that FL strategies can rival the performance of a centralized model, even though each local model sees only a fragment of the training data. The Adaptive Personalized FL (APFL) strategy consistently performed best in our evaluations. Beyond demonstrating the effectiveness of simpler architectures for this image classification task, this work underscores the value of data privacy and decentralization, and it lays groundwork for future studies of more intricate models and diverse FL strategies.
We evaluate the following federated learning methods; the corresponding papers are linked below:
- FedAvg: Link to Paper
- FedProx: Link to Paper
- FedSR: Link to Paper
- FedMRI: Link to Paper
- APFL: Link to Paper
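All of the methods above build on the same server-side aggregation idea introduced by FedAvg: the server averages client parameters weighted by each client's local dataset size. The sketch below illustrates that step with plain Python dicts; the function name and the flat-list parameter representation are illustrative, not the actual code in this repository.

```python
def fedavg_aggregate(client_params, client_sizes):
    """Weighted average of client parameter dicts (the FedAvg server step).

    client_params: list of {param_name: list_of_floats}, one dict per client
    client_sizes:  list of local dataset sizes, used as aggregation weights
    """
    total = sum(client_sizes)
    global_params = {}
    for name in client_params[0]:
        dim = len(client_params[0][name])
        # Each coordinate is summed over clients, weighted by the client's
        # share of the total training data.
        global_params[name] = [
            sum(p[name][i] * n / total for p, n in zip(client_params, client_sizes))
            for i in range(dim)
        ]
    return global_params
```

FedProx, FedSR, FedMRI, and APFL modify the client objective or keep personalized components locally, but reuse this weighted-averaging pattern on the server.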
The datasets used in this work can be accessed via the following link:
The dataset is organized into three main folders, each representing a different dataset:
```
dataset
├── 0
│   ├── train
│   │   ├── AMD
│   │   └── NORMAL
│   └── test
│       ├── AMD
│       └── NORMAL
├── 1
│   ├── train
│   │   ├── AMD1
│   │   ├── ...
│   │   ├── AMD12
│   │   ├── NORMAL1
│   │   ├── ...
│   │   └── NORMAL12
│   └── test
│       ├── AMD13
│       ├── ...
│       ├── AMD15
│       ├── NORMAL13
│       ├── ...
│       └── NORMAL15
└── 2
    ├── train
    │   ├── OCTA_3mm
    │   └── OCTA_6mm
    └── test
        ├── OCTA_3mm
        └── OCTA_6mm
```
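Because each client's data lives under `dataset/<client_id>/<split>/<class>/`, a per-client loader only needs the right directory. The helper below is a sketch of how one might resolve those paths; the function name and the suggested use of torchvision's `ImageFolder` are assumptions about usage, not the repository's actual loading code.

```python
import os

def client_split_dir(root, client_id, split):
    """Return the directory holding one client's train or test split,
    e.g. client_split_dir("dataset", 0, "train") -> "dataset/0/train"."""
    return os.path.join(root, str(client_id), split)

# With torchvision installed, each split folder can be read as a labeled
# dataset, since class subfolders (AMD/NORMAL, etc.) become labels:
# from torchvision.datasets import ImageFolder
# train_set = ImageFolder(client_split_dir("dataset", 0, "train"))
```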
An `.env` file is located at `./data/.env`. Set the dataset path in this file before running any code.
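A `.env` file holds simple `KEY=VALUE` lines. The minimal parser below shows how such a file can be read; the key name `DATASET_PATH` in the usage comment is an assumption, so check `./data/.env` for the exact key the code expects.

```python
def load_env(path):
    """Parse simple KEY=VALUE lines from a .env file into a dict.
    Blank lines and '#' comments are skipped."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    return values

# e.g. dataset_root = load_env("./data/.env").get("DATASET_PATH")
```

Libraries such as python-dotenv do the same job and are a common alternative to hand-rolling this.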
To set up the environment and dependencies required for this project, we provide an `environment.yml` file. Follow the steps below to create a Conda virtual environment from it:

1. Clone the repository:

   ```shell
   git clone git@github.com:QIAIUNCC/FL_UNCC_QIAI.git
   cd FL_UNCC_QIAI
   ```

2. Install Conda: if you haven't installed Conda yet, download and install it from here.

3. Create a Conda environment from the `environment.yml` file:

   ```shell
   conda env create -f environment.yml
   ```

4. Activate the environment:

   ```shell
   conda activate fl_uncc_qiai
   ```

5. Run the code: with the environment activated, you can run the code in this repository.