
Releases: fabio-sim/LightGlue-ONNX

v1.0.0: Fused LightGlue-ONNX

03 Oct 17:54
3140781

Fused LightGlue ONNX Models

This release provides optimized LightGlue ONNX models whose attention layers have been fused into the MultiHeadAttention operator. MultiHeadAttention is a contrib operator and is therefore intended to be run with ONNX Runtime.
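
As a minimal sketch (not part of the release itself), loading a fused model with ONNX Runtime might look like the following; the file name is one of this release's assets, while the provider list is illustrative:

```python
import onnxruntime as ort

# MultiHeadAttention lives in the com.microsoft contrib domain, so runtimes
# other than ONNX Runtime may reject the fused graph.
session = ort.InferenceSession(
    "superpoint_lightglue_fused_fp16.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print([inp.name for inp in session.get_inputs()])  # inspect the model signature
```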

All models have dynamic input shapes and have undergone symbolic shape inference. For TensorRT via ONNX Runtime, it is recommended to use superpoint_lightglue_fused_fp16.onnx; for pure TensorRT, please use superpoint_lightglue.trt.onnx as the source model when building the engine.
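
For TensorRT via ORT, session creation could be sketched as follows; trt_fp16_enable is a TensorRT Execution Provider option, and the fallback providers are illustrative:

```python
import onnxruntime as ort

providers = [
    # trt_fp16_enable asks TensorRT to build the engine with FP16 kernels;
    # subgraphs TensorRT cannot handle fall back to the CUDA and CPU providers.
    ("TensorrtExecutionProvider", {"trt_fp16_enable": True}),
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]
session = ort.InferenceSession(
    "superpoint_lightglue_fused_fp16.onnx", providers=providers
)
```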

For reproducibility, the environment in which these models were exported is also provided (pip-freeze.txt). All models were exported with ONNX opset version 17.

v0.1.3: TensorRT-compatible LightGlue-ONNX

20 Jul 11:46
e82a1a4

TensorRT-compatible LightGlue-ONNX Models

This release provides exported LightGlue ONNX models that have undergone shape inference for compatibility with ONNX Runtime's TensorRT Execution Provider. Only the SuperPoint feature extractor is supported. It is recommended to pass the min-opt-max shape range options to the execution provider (see EVALUATION.md for an example).
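
A sketch of passing the shape range options; the trt_profile_*_shapes keys require a recent ONNX Runtime version, and the input names and ranges below are placeholders (see EVALUATION.md for the actual values):

```python
import onnxruntime as ort

# A min/opt/max profile lets TensorRT build one engine covering a range of
# dynamic shapes instead of recompiling whenever the input shape changes.
trt_options = {
    "trt_profile_min_shapes": "kpts0:1x1x2,kpts1:1x1x2",      # placeholder names
    "trt_profile_opt_shapes": "kpts0:1x512x2,kpts1:1x512x2",  # and ranges
    "trt_profile_max_shapes": "kpts0:1x1024x2,kpts1:1x1024x2",
    "trt_engine_cache_enable": True,  # reuse the built engine across runs
}
session = ort.InferenceSession(
    "superpoint_lightglue.onnx",
    providers=[("TensorrtExecutionProvider", trt_options), "CPUExecutionProvider"],
)
```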

v0.1.2: LightGlue-ONNX-MP-Flash

13 Jul 13:38
1735313

LightGlue-ONNX Flash Attention Models

This release provides exported LightGlue ONNX models with Flash Attention enabled, in both full-precision (*_flash.onnx) and mixed-precision (*_mp_flash.onnx) variants. Both standalone models and end-to-end pipelines (*_end2end_*.onnx) are provided. Mixed precision combined with Flash Attention produces the fastest inference times. Please refer to EVALUATION.md for detailed speed comparisons.

All models were exported with flash-attn==1.0.8. Note that flash-attn does NOT need to be installed for inference.
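
A quick smoke test, assuming hypothetical matcher input names and shapes (check session.get_inputs() for the real signature); only numpy and onnxruntime are needed:

```python
import numpy as np
import onnxruntime as ort

# flash-attn is an export-time dependency only; inference needs onnxruntime.
session = ort.InferenceSession(
    "superpoint_lightglue_mp_flash.onnx",  # assumed name, following *_mp_flash.onnx
    providers=["CUDAExecutionProvider"],
)

# Hypothetical inputs: 512 keypoints per image with 256-dim descriptors.
rng = np.random.default_rng(0)
feeds = {
    "kpts0": rng.standard_normal((1, 512, 2), dtype=np.float32),
    "kpts1": rng.standard_normal((1, 512, 2), dtype=np.float32),
    "desc0": rng.standard_normal((1, 512, 256), dtype=np.float32),
    "desc1": rng.standard_normal((1, 512, 256), dtype=np.float32),
}
outputs = session.run(None, feeds)
```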

v0.1.1: LightGlue-ONNX-MP

11 Jul 15:36
75ec0e8

LightGlue-ONNX Mixed Precision Models

This release provides mixed-precision ONNX exports (*_mp.onnx) of the DISK and SuperPoint feature extractors and the LightGlue keypoint matcher. Both standalone models and end-to-end pipelines (*_end2end_mp.onnx) are provided. Mixed precision is generally faster than full precision but is only supported on CUDA. Please refer to EVALUATION.md for detailed speed comparisons.
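
Since mixed precision requires CUDA, a session can fail fast when the CUDA Execution Provider is unavailable; a small sketch, with a file name following the *_mp.onnx pattern above:

```python
import onnxruntime as ort

# Fail fast if the CUDA EP is missing instead of silently falling back to
# CPU, where the mixed-precision graph is unsupported.
assert "CUDAExecutionProvider" in ort.get_available_providers()
session = ort.InferenceSession(
    "superpoint_lightglue_mp.onnx", providers=["CUDAExecutionProvider"]
)
```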

v0.1.0: LightGlue-ONNX

04 Jul 11:31
1da3ab3

LightGlue-ONNX Models

This release provides ONNX exports of the DISK and SuperPoint feature extractors and the LightGlue keypoint matcher. Both standalone models and end-to-end pipelines (*_end2end.onnx) are provided.

Individual Models

  • disk.onnx: DISK feature extractor.
  • disk_{N}.onnx: DISK feature extractor with max_num_keypoints=N.
  • disk_lightglue.onnx: LightGlue model trained on DISK features.
  • superpoint.onnx: SuperPoint feature extractor.
  • superpoint_{N}.onnx: SuperPoint feature extractor with max_num_keypoints=N.
  • superpoint_lightglue.onnx: LightGlue model trained on SuperPoint features.

End-to-end Pipelines

  • disk_lightglue_end2end.onnx: LightGlue model fused to DISK.
  • disk_{N}_lightglue_end2end.onnx: LightGlue model fused to DISK with max_num_keypoints=N.
  • superpoint_lightglue_end2end.onnx: LightGlue model fused to SuperPoint.
  • superpoint_{N}_lightglue_end2end.onnx: LightGlue model fused to SuperPoint with max_num_keypoints=N.

All models are dynamic with respect to image size. Note that a model exported with static input shapes may perform faster due to ONNX optimisations.
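
An illustrative end-to-end run with two differently sized images; the input names are assumptions (inspect session.get_inputs() for the real ones), and SuperPoint expects normalized grayscale input:

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "superpoint_lightglue_end2end.onnx", providers=["CPUExecutionProvider"]
)

# Two grayscale images of different sizes, values in [0, 1]; dynamic axes
# mean no resizing to a fixed resolution is required.
image0 = np.random.rand(1, 1, 480, 640).astype(np.float32)
image1 = np.random.rand(1, 1, 512, 512).astype(np.float32)
outputs = session.run(None, {"image0": image0, "image1": image1})
```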

For reproducibility, the environment in which these models were exported is also provided (pip-freeze.txt). All models were exported with ONNX opset version 16.