This is the implementation of the paper "Inspector gaze-guided multitask learning for explainable structural damage assessment".
- Clone this repo:
```
git clone https://github.com/itschenyu/XIDLE-Net.git
cd XIDLE-Net
```
- Please download the dataset from here and place it in `./XIDLE-Net/dataset/`.
- Please download the weights pre-trained on ImageNet-22K from here and place them in `./XIDLE-Net/Module/model_data/`.
- Please download the XIDLE-Net model weights from here and place them in `./XIDLE-Net/Module/model_weight/`.
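Before launching training, it can help to confirm the downloaded assets landed in the right places. The sketch below is a hypothetical helper (not part of this repo); the paths are the ones listed in the steps above.

```python
from pathlib import Path

# Paths from the setup steps above, relative to the repo root.
REQUIRED = [
    "dataset",
    "Module/model_data",
    "Module/model_weight",
]

def missing_assets(root="."):
    """Return the required paths (relative to the repo root) that do not exist."""
    root = Path(root)
    return [p for p in REQUIRED if not (root / p).exists()]

if __name__ == "__main__":
    missing = missing_assets()
    if missing:
        print("Missing before training:", ", ".join(missing))
    else:
        print("All assets in place.")
```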
Training the model:
```
python RUN.py
```
Evaluating the model on the test set:
```
python TEST.py
```
If XIDLE-Net and the eye gaze dataset are helpful to you, please cite them as:
```
@article{zhang2025gaze,
  author  = {Zhang, Chenyu and Liu, Charlotte and Li, Ke and Yin, Zhaozheng and Qin, Ruwen},
  title   = {Inspector gaze-guided multitask learning for explainable structural damage assessment},
  journal = {Computer-Aided Civil and Infrastructure Engineering},
  volume  = {40},
  number  = {30},
  pages   = {5824--5841},
  year    = {2025},
  doi     = {10.1111/mice.70131},
  url     = {https://onlinelibrary.wiley.com/doi/abs/10.1111/mice.70131},
  eprint  = {https://onlinelibrary.wiley.com/doi/pdf/10.1111/mice.70131}
}
```
Part of the code is adapted from the MT-UNet project.
The images and damage-level labels in the dataset are credited to PEER Hub ImageNet.