# Eden



We devise a novel self-supervised learning (SSL) framework that underpins the development of powerful foundation models for medical imaging by learning from anatomy. Our framework not only generates highly generalizable pretrained models, called Adam (autodidactic dense anatomical models), but also, in contrast to existing SSL methods, yields dense anatomical embeddings, nicknamed Eve (embedding vectors), which preserve a semantic balance of anatomical diversity and harmony, making them semantically meaningful for anatomy understanding.

## Publication

Representing Part-Whole Hierarchies in Foundation Models by Learning Localizability, Composability, and Decomposability from Anatomy via Self-Supervision

Mohammad Reza Hosseinzadeh Taher1, Michael B. Gotway2, Jianming Liang1
1 Arizona State University, 2 Mayo Clinic
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024)

Adam-v2: Paper | Code | Oral Presentation 💥 ${\color{red} {\textbf{Accepted at CVPR 2024 [main conference]}}}$


${\color{blue} {\textbf{Please download the pretrained Adam-v2 PyTorch model as follows:}}}$

| Backbone | #Params. | Download |
|:--------:|:--------:|:--------:|
| ConvNeXt-B | 89M | Link |
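
Once downloaded, the checkpoint can be loaded into a ConvNeXt-B backbone for feature extraction or fine-tuning. The sketch below is a minimal, unofficial example, not the repository's loader: the file name `adam_v2_convnextb.pth`, the `state_dict` key, and the `module.` prefix handling are assumptions about the checkpoint layout; inspect the downloaded file and adjust as needed.

```python
# Minimal sketch (not the repository's official loader): load the Adam-v2
# ConvNeXt-B weights into a timm backbone. The file name and checkpoint
# keys are assumptions -- inspect the downloaded checkpoint and adjust.
import torch
import timm

# Backbone without a classification head (num_classes=0 returns pooled features).
model = timm.create_model("convnext_base", pretrained=False, num_classes=0)

checkpoint = torch.load("adam_v2_convnextb.pth", map_location="cpu")  # hypothetical file name
state_dict = checkpoint.get("state_dict", checkpoint)  # some checkpoints nest weights under "state_dict"
# Strip a possible "module." prefix left by DataParallel/DistributedDataParallel.
state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}

# strict=False tolerates pretraining-only keys (e.g., projection heads).
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f"missing: {len(missing)}, unexpected: {len(unexpected)}")

model.eval()
with torch.no_grad():
    features = model(torch.randn(1, 3, 224, 224))  # pooled feature vector for a dummy image
```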

Towards Foundation Models Learned from Anatomy in Medical Imaging via Self-Supervision

Mohammad Reza Hosseinzadeh Taher1, Michael B. Gotway2, Jianming Liang1
1 Arizona State University, 2 Mayo Clinic
International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023);
Domain Adaptation and Representation Transfer

Adam-v1: Paper | Code | Oral Presentation 🏆 ${\color{red} {\textbf{Best Paper Award (Runner-up)}}}$


${\color{blue} {\textbf{Please download the pretrained Adam-v1 PyTorch model as follows:}}}$

| Backbone | #Params. | Download |
|:--------:|:--------:|:--------:|
| ResNet-50 | 25.6M | Link |
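
Analogously, the Adam-v1 weights can be loaded into a standard torchvision ResNet-50. Again, this is a hedged sketch: the file name `adam_v1_resnet50.pth` and the key handling are assumptions, not the official API.

```python
# Minimal sketch (assumed checkpoint layout, not the official loader):
# initialize a torchvision ResNet-50 with the Adam-v1 weights.
import torch
import torchvision.models as models

backbone = models.resnet50(weights=None)  # random init; Adam-v1 weights are loaded next

checkpoint = torch.load("adam_v1_resnet50.pth", map_location="cpu")  # hypothetical file name
state_dict = checkpoint.get("state_dict", checkpoint)
state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}

# strict=False: the classification head (fc) is not part of the pretrained backbone.
msg = backbone.load_state_dict(state_dict, strict=False)
print(msg)
```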

## Citation

If you use this code or our pretrained weights in your research, please cite our papers:

@misc{taher2024representing,
      title={Representing Part-Whole Hierarchies in Foundation Models by Learning Localizability, Composability, and Decomposability from Anatomy via Self-Supervision}, 
      author={Mohammad Reza Hosseinzadeh Taher and Michael B. Gotway and Jianming Liang},
      year={2024},
      eprint={2404.15672},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{taher2023foundation,
      title={Towards Foundation Models Learned from Anatomy in Medical Imaging via Self-Supervision}, 
      author={Mohammad Reza Hosseinzadeh Taher and Michael B. Gotway and Jianming Liang},
      year={2023},
      eprint={2309.15358},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

## Acknowledgement

This research has been supported in part by ASU and Mayo Clinic through a Seed Grant and an Innovation Grant, and in part by the NIH under Award Number R01HL128785. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. This work has utilized GPUs provided in part by ASU Research Computing and in part by Bridges-2 at the Pittsburgh Supercomputing Center through allocation BCS190015 and Anvil at Purdue University through allocation MED220025 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296. The content of this paper is covered by patents pending.

## License

Released under the ASU GitHub Project License.