
OcularSeg 🔥

This repository contains the training and testing code for our paper "OcularSeg: Accurate and Efficient Multi-Modal Ocular Segmentation in Non-Constrained Scenarios".

Requirements

  • Python >= 3.8
  • PyTorch >= 1.13.0
  • timm >= 0.5
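As a quick sanity check, the minimal sketch below prints the installed versions so you can compare them against the thresholds listed above; it assumes the packages are already installed and is not part of the released code.

```python
# Minimal environment check; version thresholds mirror the requirements list above.
import sys
import torch
import timm

print("Python :", sys.version.split()[0])   # expected >= 3.8
print("PyTorch:", torch.__version__)        # expected >= 1.13.0
print("timm   :", timm.__version__)         # expected >= 0.5
```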

Datasets

If you would like to use our segmentation annotations in your research, please email us for the Quark Drive access code.

Experiments

  1. How to test the model
    • Download our trained weights from Google Drive and move them into ./checkpoints
    • Modify the settings in test.py as needed (see the checkpoint-loading sketch after this list)
    • Run test.py
  2. How to train the model
    • Run train.py
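Before editing test.py, it can help to confirm that the downloaded weights load correctly. The sketch below is hypothetical: the checkpoint filename and the structure of its contents are assumptions, not part of the released code.

```python
# Hypothetical sanity check for a downloaded checkpoint (filename is an assumption).
import torch

ckpt_path = "./checkpoints/ocularseg.pth"  # adjust to the actual file name from Google Drive
ckpt = torch.load(ckpt_path, map_location="cpu")

# A PyTorch checkpoint is typically a state_dict, or a dict wrapping one; inspect a few keys.
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(f"Loaded {ckpt_path}: {len(state)} entries")
for key in list(state)[:5]:
    print(" ", key)
```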

Reference

Some of the code in this repository is borrowed from:

Citation

If you find our work useful in your research, please consider citing:

@article{zhang2024ocularseg,
  title={OcularSeg: Accurate and Efficient Multi-Modal Ocular Segmentation in Non-Constrained Scenarios},
  author={Zhang, Yixin and Wang, Caiyong and Li, Haiqing and Sun, Xianyun and Tian, Qichuan and Zhao, Guangzhe},
  journal={Electronics},
  volume={13},
  number={10},
  pages={1967},
  year={2024}
}

Questions

Please contact zhangyixin@stu.bucea.edu.cn (Miss Zhang) or wangcaiyong@bucea.edu.cn (Dr. Wang).
