Chalmers University of Technology; Linköping University; University of Amsterdam; Lund University
David Nordström*, Johan Edstedt*, Georg Bökman, Jonathan Astermark, Anders Heyden, Viktor Larsson, Mårten Wadenbäck, Michael Felsberg, Fredrik Kahl
Performance on a difficult matching pair compared to LightGlue.
LoMa is a fast and accurate family of local feature matchers. It works similarly to LightGlue but significantly improves matching robustness and accuracy across benchmarks, even outperforming RoMa and RoMa v2 on the difficult WxBS benchmark. As LoMa leverages local keypoint descriptions, the models are perfect drop-in replacements in, e.g., SfM and Visual Localization pipelines.
- [April 14, 2026] Rotation-invariant LoMa released. The model, which we call LoMa-R, excels at aerial imagery (e.g. SatAst). See the paper Who Handles Orientation? (CVPRW26) for more information.
- [April 13, 2026] Integration available with HLoc and vismatch.
- [April 6, 2026] LoMa inference code released.
```python
import cv2
from loma import LoMa, LoMaB

# Load a pretrained model
model = LoMa(LoMaB())  # also available: LoMaB128, LoMaL, LoMaG, LoMaR

# Define image paths, e.g.
img_A_path, img_B_path = "assets/0015_A.jpg", "assets/0015_B.jpg"

# Extract matching keypoints in image coordinates
kptsA, kptsB = model.match(img_A_path, img_B_path)

# Find a fundamental matrix (or anything else of interest)
F, mask = cv2.findFundamentalMat(
    kptsA, kptsB, ransacReprojThreshold=0.2, method=cv2.USAC_MAGSAC,
    confidence=0.999999, maxIters=10000,
)
```

We provide additional code examples in demo.py, which may help in understanding the API. To run the demo, use the following command:
```shell
uv run demo.py matcher:loma-b
```

In your Python environment (tested on Linux, Python 3.12), run:
```shell
uv pip install -e .
```

or

```shell
uv sync
```

We initially provide code for evaluating on MegaDepth, ScanNet, WxBS, and RUBIK. If you do not already have MegaDepth1500 and ScanNet1500, you may run the following to download them:
```shell
source scripts/eval_prep.sh
```

To run a benchmark, you first need to install the optional dependencies, e.g. via `uv sync --extra eval`. Thereafter, you can use the following call signature:
```shell
uv run eval.py matcher:loma-b --benchmark wxbs
```

Use `uv run eval.py --help` to explore the different options.
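If you want to sweep all the provided benchmarks for one matcher, a small loop works. The benchmark identifiers other than `wxbs` below are our guesses from the benchmark names mentioned above, not confirmed flags; check `uv run eval.py --help` for the actual names. The loop only echoes the commands so it can be inspected before anything is run:

```shell
# Hypothetical sweep; benchmark names besides "wxbs" are assumed and
# should be checked against `uv run eval.py --help`.
# `echo` prints the commands rather than executing them.
for bench in megadepth scannet wxbs rubik; do
  echo uv run eval.py matcher:loma-b --benchmark "$bench"
done
```

Drop the `echo` once the identifiers are verified to actually run the evaluations.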
The results are similar to those reported in the paper. For example, running the evaluation for LoMa-B on WxBS gives us mAA_10px: 0.6876.
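For intuition about the reported metric: mAA at 10px is commonly computed as the fraction of errors below a pixel threshold, averaged over integer thresholds from 1 to 10. The sketch below illustrates this common definition; the exact protocol used by the paper and eval.py may differ:

```python
import numpy as np

def mAA(errors, max_threshold=10):
    """Mean Average Accuracy: the fraction of errors below each integer
    threshold 1..max_threshold, averaged over thresholds.
    (A common definition; the paper's exact protocol may differ.)"""
    errors = np.asarray(errors, dtype=float)
    thresholds = np.arange(1, max_threshold + 1)
    accs = [(errors < t).mean() for t in thresholds]
    return float(np.mean(accs))

# Toy example: 4 reprojection errors in pixels
print(mAA([0.5, 2.0, 4.0, 12.0]))  # 0.6
```

Lower errors saturate more thresholds, so mAA rewards both a high inlier ratio and high keypoint precision.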
We provide an array of models: LoMa-{B, B128, L, G, R}. For most use cases, LoMa-B, which is the same size as LightGlue, works well. LoMa-G is significantly heavier but gives the most accurate matches, even surpassing the RoMa family on e.g. WxBS and IMC22. LoMa-R provides a rotation-invariant matcher and descriptor (trained through data augmentation).
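If you need to pick a variant programmatically (e.g. from a CLI flag like `matcher:loma-b`), a small registry keyed on the suffix is one way to do it. The class names below match the import example earlier in this README, but the registry itself is our own illustrative sketch, not part of the loma package:

```python
# Sketch of a name -> class-name registry for the LoMa variants.
# The class names follow the README's import example; the registry
# itself is illustrative and not part of the loma API.
VARIANTS = {
    "b": "LoMaB",
    "b128": "LoMaB128",
    "l": "LoMaL",
    "g": "LoMaG",
    "r": "LoMaR",
}

def variant_class_name(flag: str) -> str:
    """Map a flag like 'loma-g' (or just 'g') to its class name."""
    key = flag.lower().removeprefix("loma-")
    return VARIANTS[key]

print(variant_class_name("loma-g"))  # LoMaG
```

From there one would look the name up in the `loma` module (e.g. with `getattr`) and construct the model as in the quick-start snippet.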
- Publish the inference code.
- Release rotation invariant matcher.
- Integrate with HLoc. See this fork.
- Integrate with vismatch. See this PR.
- Release a lightweight descriptor.
- Provide training code.
- Release HardMatch.
All our code is MIT licensed, except for the matcher, which inherits its license from LightGlue (Apache-2.0).
Thanks to Parskatt for writing most of the code. Our codebase structure is mainly based on RoMaV2 and our architectures build on LightGlue, DeDoDe, and DaD.
If you find our models useful, please consider citing our papers!
```bibtex
@misc{nordström2026lomalocalfeaturematching,
  title={LoMa: Local Feature Matching Revisited},
  author={David Nordström and Johan Edstedt and Georg Bökman and Jonathan Astermark and Anders Heyden and Viktor Larsson and Mårten Wadenbäck and Michael Felsberg and Fredrik Kahl},
  year={2026},
  eprint={2604.04931},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2604.04931},
}

@inproceedings{nordstrom2026who,
  title={Who Handles Orientation? Investigating Invariance in Feature Matching},
  author={David Nordström and Johan Edstedt and Georg Bökman and Fredrik Kahl},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year={2026}
}
```