
Source code and data for the paper: Integration of Multimodal Data from Disparate Sources for Identifying Disease Subtypes

ky-zhou/MMFDA

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

3 Commits
 
 
 
 
 
 

Repository files navigation

MMFDAs: Multimodal Fusion Dense Autoencoders for Identifying Disease Subtypes

Introduction

This repository accompanies our paper submitted to MDPI Biology, 'Integration of Multimodal Data from Disparate Sources for Identifying Disease Subtypes'.

Installation

This repository is based on PyTorch 1.4 and CUDA 10.0.

To install PyTorch together with the CUDA toolkit, please follow the official instructions here. The code is tested with PyTorch 1.4 and Python 3.6 on Ubuntu 18.04.
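As one option, a Conda environment matching these versions could be set up roughly as follows; these commands are an assumption, so verify them against the official PyTorch previous-versions instructions before use:

```shell
# Assumed setup commands (not from this repository); check the official
# PyTorch "previous versions" page for the exact command for CUDA 10.0.
conda create -n mmfda python=3.6
conda activate mmfda
conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.0 -c pytorch
```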

Usage

  1. Download this repository and go to the code folder:
   cd code
  2. Download the data here and place it in the data folder. The folder organization looks like:

    -root
    --data
    ---gbm
    ---laml
    ---paad
  3. Train and test the model.
    Run the complete fusion model (CFA in the paper):

    python main_aec.py

    Run the incomplete fusion model (IFA in the paper):

    python main_aec.py --pred_missing

    Run the complete fusion model with two available modalities (CFA-2M in the paper):

    python main_aec_2m.py

    Run the single-modality model (SMA in the paper); set m_use = m* in main_aec_single.py to choose which modality to use:

    python main_aec_single.py
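The m_use switch presumably maps a short key to one of the three input modalities. The sketch below illustrates the idea; the modality names and the helper function are hypothetical and not taken from main_aec_single.py:

```python
# Hypothetical sketch of a single-modality switch; the modality keys and
# names below are illustrative assumptions, not code from this repository.
MODALITIES = {
    "m1": "gene_expression",
    "m2": "dna_methylation",
    "m3": "mirna_expression",
}

def select_modality(data_by_modality, m_use):
    """Return only the data matrix for the modality named by m_use."""
    return data_by_modality[MODALITIES[m_use]]

m_use = "m1"  # change to "m2" or "m3" to train on a different modality
```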
