This work appears in Information Fusion. We introduce PointExplainer, an interpretable diagnostic framework designed to enhance clinical interpretability and support the early diagnosis of Parkinson’s disease.
PointExplainer assigns attribution scores to local segments of a handwriting trajectory, highlighting their relative contribution to the model’s decision. This explanation format, consistent with expert reasoning patterns, enables clinicians to quickly identify key regions and understand the model’s diagnostic logic. In addition, we design consistency metrics to quantitatively assess the faithfulness of the explanations, reducing reliance on subjective evaluation.
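To illustrate the general idea of segment-level attribution (a toy sketch, not the paper's actual method: the function name, windowing scheme, and averaging rule here are all hypothetical), per-point attribution scores can be aggregated into scores for local trajectory segments:

```python
import numpy as np

def segment_attributions(point_scores, window, stride):
    """Aggregate per-point attribution scores into per-segment scores
    by averaging over sliding windows (hypothetical sketch)."""
    n = len(point_scores)
    segments = []
    for start in range(0, n - window + 1, stride):
        seg_score = float(np.mean(point_scores[start:start + window]))
        segments.append((start, start + window, seg_score))
    return segments

scores = np.array([0.1, 0.9, 0.8, 0.2, 0.1, 0.05])
print(segment_attributions(scores, window=3, stride=3))
```

Segments with high average scores would then be the "key regions" highlighted to clinicians.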
In this repository, we release code for our PointExplainer diagnosis and explanation networks, along with utility scripts for training, testing, data preprocessing, and visualization on the public datasets.
If you find our work useful in your research, please consider citing:
@article{WANG2026104064,
title = {PointExplainer: Towards transparent Parkinson’s disease diagnosis},
journal = {Information Fusion},
volume = {129},
pages = {104064},
year = {2026},
issn = {1566-2535},
doi = {https://doi.org/10.1016/j.inffus.2025.104064},
url = {https://www.sciencedirect.com/science/article/pii/S1566253525011261}
}

Install the required dependencies. The project requires Python 3.8 and has been tested with pytorch=2.2.1, torchvision=0.17.1, and PyQt5=5.15.10. Please follow the official instructions here to install PyTorch and TorchVision; installing them with CUDA support is strongly recommended.
pip install -r requirements.txt
Download the dataset from here. The dataset contains two handwriting patterns, SST (Static Spiral Test) and DST (Dynamic Spiral Test), used for acquiring digitized Archimedean spiral drawings. After downloading, organize the dataset into the following directory structure:
data/
└── ParkinsonHW/
└── raw_data/
├── KT/ # healthy control subjects
└── PD/ # Parkinson’s disease patients
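A quick sanity check that the dataset is organized as expected can be scripted as follows (a minimal sketch; the folder names come from the tree above, while the function itself is not part of the repository):

```python
import os

def check_layout(root="data/ParkinsonHW/raw_data"):
    """Return the list of expected subfolders (KT/, PD/) missing under root."""
    expected = ["KT", "PD"]
    return [d for d in expected if not os.path.isdir(os.path.join(root, d))]

missing = check_layout()
if missing:
    print("Missing folders:", missing)
else:
    print("Dataset layout looks OK.")
```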
Run the following scripts in order to complete the preprocessing pipeline:
# Step I: Stratified cross-validation split at the subject level
python preprocess/kfold_split.py
# Step II: Data processing (point cloud construction, etc.)
python preprocess/data_preprocess.py
# Step III: Sliding-window segmentation of handwriting trajectories
python preprocess/segment_patches.py
# Step IV: Split the training and validation sets
python preprocess/split_train_val.py
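The sliding-window segmentation in Step III can be sketched in spirit as follows (the window and stride values are illustrative placeholders, not the script's actual defaults):

```python
import numpy as np

def sliding_window_patches(trajectory, window=64, stride=32):
    """Cut an (N, D) trajectory into overlapping (window, D) patches."""
    patches = []
    for start in range(0, len(trajectory) - window + 1, stride):
        patches.append(trajectory[start:start + window])
    if not patches:
        return np.empty((0, window, trajectory.shape[1]))
    return np.stack(patches)

traj = np.random.rand(200, 3)  # e.g. x, y, pressure per sampled point
patches = sliding_window_patches(traj)
print(patches.shape)  # (num_patches, window, D)
```

Overlapping windows let each local region of the drawing appear in several patches, which is what later allows attribution scores to be assigned to local segments.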
To train the classification model, run:
python train.py
All log files and model checkpoints will be saved automatically to the log_dir directory by default. You can use TensorBoard to visualize the model architecture and monitor training progress:
tensorboard --logdir=log_dir
After training, you can evaluate the model and generate visualizations of key performance metrics by running:
python test.py
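For reference, the usual binary-classification metrics for this setting can be computed as below (a generic sketch, not the repository's actual evaluation code; class 1 is taken to mean PD and class 0 healthy control):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on PD=1), specificity (recall on HC=0)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

print(binary_metrics([1, 1, 0, 0], [1, 0, 0, 1]))
```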
Finally, train a dedicated interpreter for each subject and perform perturbation analysis to verify the reliability of the explanations:
python explanation.py
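The intuition behind the perturbation analysis can be illustrated with a toy sketch (not the paper's actual consistency metrics; the score function, attribution vector, and helper below are all hypothetical): if an explanation is faithful, deleting the highest-attribution points should reduce the model's score at least as much as deleting random points.

```python
import numpy as np

def perturbation_drop(score_fn, points, attributions, k):
    """Compare the score drop from deleting the top-k attributed points
    versus deleting k random points (toy faithfulness check)."""
    base = score_fn(points)
    top_idx = np.argsort(attributions)[-k:]
    rand_idx = np.random.choice(len(points), k, replace=False)
    drop_top = base - score_fn(np.delete(points, top_idx, axis=0))
    drop_rand = base - score_fn(np.delete(points, rand_idx, axis=0))
    return drop_top, drop_rand

# Toy model: the score is simply the mean of the first coordinate,
# and the attribution is (by construction) perfectly aligned with it.
score = lambda pts: pts[:, 0].mean()
pts = np.random.rand(100, 3)
attr = pts[:, 0]
d_top, d_rand = perturbation_drop(score, pts, attr, k=10)
print(d_top >= d_rand)
```

Here the top-k drop is by construction the largest achievable, so the check passes; for a real model and explainer, the gap between the two drops is what quantifies faithfulness.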
Our code is released under the MIT License (see the LICENSE file for details).

