LmPT (LandmarkPointTransformer) is a transformer-based framework built upon the Pointcept codebase, extending it with engines for anatomical landmark detection.
This repository includes the methods introduced in "LmPT: Conditional Point Transformer for Anatomical Landmark Detection on 3D Point Clouds" [arXiv].
The data is available for download via KeyboneNetCross. Place the content under the data directory.
This dataset, introduced in this work, includes 14 dog femur models from different breeds and of different sizes (7 left, 7 right), stored under FBD.
Each model is provided in both mesh and point-cloud (pcds) representations and includes 11 anatomical landmark annotations.
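Since each model ships in both mesh and point-cloud form, a point cloud can be regenerated from the mesh by area-weighted sampling of its triangles. The sketch below is illustrative only: the actual mesh-to-point-cloud conversion used for these datasets is not specified here, and the function name is hypothetical.

```python
import numpy as np

def sample_points_from_mesh(vertices, faces, n_points, seed=0):
    """Uniformly sample a point cloud from a triangle mesh.

    Illustrative sketch; not the repository's actual preprocessing.
    vertices: (V, 3) float array, faces: (F, 3) int array.
    """
    rng = np.random.default_rng(seed)
    tris = vertices[faces]  # (F, 3, 3) triangle corner coordinates
    # Triangle areas (half the cross-product norm) serve as sampling weights,
    # so larger faces receive proportionally more points.
    areas = 0.5 * np.linalg.norm(
        np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]), axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates; reflect (u, v) pairs that fall
    # outside the triangle back inside it.
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tris[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])

# Minimal example: a unit square in the z=0 plane, built from two triangles.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
pcd = sample_points_from_mesh(verts, faces, 1024)
print(pcd.shape)  # (1024, 3)
```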
This dataset includes 20 human femur models from different subjects (10 left, 10 right), stored under FBH.
Each model is provided in both mesh and point-cloud (pcds) representations and includes 22 anatomical landmark annotations.
The representations are derived from the VSDFullBodyBoneModels dataset by RWTHmediTEC.
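Each model's landmark annotations can be compared against a model's predictions with a simple per-landmark Euclidean error, a common metric for anatomical landmark detection. The sketch below assumes predictions and ground truth share one coordinate frame; the exact metric reported for LmPT may differ (see the paper).

```python
import numpy as np

def mean_landmark_error(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth landmarks.

    pred, gt: (K, 3) arrays of landmark coordinates in the same frame.
    Illustrative metric only; not necessarily the one reported by LmPT.
    """
    return float(np.linalg.norm(pred - gt, axis=1).mean())

# Toy example: 11 landmarks (as in the dog femur annotations), with every
# prediction offset by a (3, 0, 4) vector, i.e. 5 units of error each.
gt = np.zeros((11, 3))
pred = gt + np.array([3.0, 0.0, 4.0])
print(mean_landmark_error(pred, gt))  # 5.0
```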
A pre-trained, cross-species LmPT-v2 model is available for download via LmPT-v2.
Place the content under the exp/keybonenetcross directory.
Train from scratch using a configuration file from configs, which will create an experiment folder in exp with training outputs.
sh scripts/train.sh -p ${INTERPRETER_PATH} -g ${NUM_GPU} -d ${DATASET_NAME} -c ${CONFIG_NAME} -n ${EXP_NAME}
For example:
sh scripts/train.sh -p python -g 1 -d keybonenetcross -c lfv2 -n scratch
Test a trained checkpoint using the experiment name and its corresponding config.
sh scripts/test.sh -p ${INTERPRETER_PATH} -g ${NUM_GPU} -d ${DATASET_NAME} -n ${EXP_NAME} -w ${CHECKPOINT_NAME}
For example, to test the pre-trained LmPT-v2 model:
sh scripts/test.sh -p python -g 1 -d keybonenetcross -n lfv_cross -w model_best
If you find LmPT useful in your research, please consider citing:
@misc{bastico2026lmptconditionalpointtransformer,
  title={LmPT: Conditional Point Transformer for Anatomical Landmark Detection on 3D Point Clouds},
  author={Matteo Bastico and Pierre Onghena and David Ryckelynck and Beatriz Marcotegui and Santiago Velasco-Forero and Laurent Corté and Caroline Robine--Decourcelle and Etienne Decencière},
  year={2026},
  eprint={2602.02808},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2602.02808},
}