Project webpage here
⚠️ PerceptionMetrics was previously known as DetectionMetrics. The original website referenced in our Sensors paper is still available here
PerceptionMetrics is a toolkit designed to unify and streamline the evaluation of object detection and segmentation models across different sensor modalities, frameworks, and datasets. It offers multiple interfaces including a GUI for interactive analysis, a CLI for batch evaluation, and a Python library for seamless integration into your codebase. The toolkit provides consistent abstractions for models, datasets, and metrics, enabling fair, reproducible comparisons across heterogeneous perception systems.
| 💻 Code | 🔧 Installation | 🧩 Compatibility | 📖 Docs | 💻 GUI |
|---|---|---|---|---|
| Task | Modality | Datasets | Framework |
|---|---|---|---|
| Segmentation | Image | RELLIS-3D, GOOSE, RUGD, WildScenes, custom GAIA format | PyTorch, Tensorflow |
| Segmentation | LiDAR | RELLIS-3D, GOOSE, WildScenes, custom GAIA format | PyTorch (tested with Open3D-ML, mmdetection3d, SphereFormer, and LSK3DNet models) |
| Object detection | Image | COCO, YOLO | PyTorch (tested with torchvision and torchscript-exported YOLO models) |
More details about the specific metrics and input/output formats required for each framework are provided in the Compatibility section of our website.
PerceptionMetrics will be published on PyPI in the near future. In the meantime, you can clone our repo and install the package locally using either venv or Poetry.
Create your virtual environment:

```bash
python3 -m venv .venv
```

Activate your environment and install as a pip package:

```bash
source .venv/bin/activate
pip install -e .
```
Install Poetry (if not done before):

```bash
python3 -m pip install --user pipx
pipx install poetry
```

Install dependencies and activate the Poetry environment (you can get out of the Poetry shell by running `exit`):

```bash
poetry install
eval $(poetry env activate)
```
Install your preferred deep learning framework in your environment. We have tested the following with CUDA 12.6:

- `torch==2.4.1` and `torchvision==0.19.1`
- `torch==2.2.2` and `torchvision==0.17.2`
- `tensorflow==2.17.1`
- `tensorflow==2.16.1`

If you are using LiDAR, Open3D currently requires `torch==2.2.*`.
Some LiDAR segmentation models, such as SphereFormer and LSK3DNet, require a dedicated installation workflow. Refer to `additional_envs/INSTRUCTIONS.md` for detailed setup instructions.
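Whichever installation route you choose, a quick import check confirms that the package is visible in the active environment. This is only a sanity-check sketch; the import name `perceptionmetrics` is an assumption, so check `pyproject.toml` if it does not resolve:

```python
# Sanity check: confirm the editable install resolves in the active environment.
# NOTE: the module name `perceptionmetrics` is an assumption; verify the actual
# package name in pyproject.toml if this import fails.
import perceptionmetrics

print(perceptionmetrics.__file__)  # should point inside your cloned repo
```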
PerceptionMetrics can be used in three ways: through the interactive GUI (detection only), as a Python library, or via the command-line interface (segmentation and detection).
The easiest way to get started with PerceptionMetrics is through the GUI (detection tasks only):
```bash
# From the project root directory
streamlit run app.py
```

The GUI provides:
- Dataset Viewer: Browse and visualize your datasets
- Inference: Run real-time inference on images
- Evaluator: Perform comprehensive model evaluation
For detailed GUI documentation, see our GUI guide.
🧑‍🏫 Image Segmentation Tutorial
🧑‍🏫 Image Detection Tutorial (YOLO)
You can check the examples directory for further inspiration. If you are using Poetry, you can run the provided scripts either by activating the created environment (e.g., `eval $(poetry env activate)`, as shown above) or by directly running `poetry run python examples/<some_python_script.py>`.
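To give a feel for programmatic use, here is a minimal sketch of an evaluation via the Python library. The class and method names below (`Rellis3DImageSegmentationDataset`, `TorchImageSegmentationModel`, `eval`) are hypothetical stand-ins that mirror the CLI parameters; refer to the tutorials above and the examples directory for the actual interfaces.

```python
# Hypothetical sketch of a programmatic evaluation; the imports and class
# names below are assumptions mirroring the CLI flags, NOT the actual API.
# See the tutorials and the examples directory for real usage.
from perceptionmetrics.datasets import Rellis3DImageSegmentationDataset  # hypothetical
from perceptionmetrics.models import TorchImageSegmentationModel  # hypothetical

# Dataset in RELLIS-3D format, analogous to --dataset_format/--dataset_dir
dataset = Rellis3DImageSegmentationDataset(
    dataset_dir="/path/to/dataset",
    ontology_fname="/path/to/ontology.json",
)

# PyTorch model, analogous to --model/--model_cfg/--model_ontology
model = TorchImageSegmentationModel(
    model_fname="/path/to/model.pt",
    model_cfg="/path/to/cfg.json",
    ontology_fname="/path/to/ontology.json",
)

# Evaluate and store the metrics, analogous to --out_fname
results = model.eval(dataset)
results.to_csv("/path/to/results.csv")
```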
PerceptionMetrics provides a CLI with two commands, `pm_evaluate` and `pm_batch`. Thanks to the configuration in the `pyproject.toml` file, you can simply run `poetry install` from the root directory and use them without explicitly invoking the Python files. More details are provided on the PerceptionMetrics website.
Segmentation:

```bash
pm_evaluate segmentation image \
    --model_format torch \
    --model /path/to/model.pt \
    --model_ontology /path/to/ontology.json \
    --model_cfg /path/to/cfg.json \
    --dataset_format rellis3d \
    --dataset_dir /path/to/dataset \
    --dataset_ontology /path/to/ontology.json \
    --out_fname /path/to/results.csv
```

Detection:
```bash
pm_evaluate detection image \
    --model_format torch \
    --model /path/to/model.pt \
    --model_ontology /path/to/ontology.json \
    --model_cfg /path/to/cfg.json \
    --dataset_format coco \
    --dataset_dir /path/to/coco/dataset \
    --out_fname /path/to/results.csv
```
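For batch runs, `pm_batch` is the built-in option; as a generic alternative, the CLI can also be driven from a script. Below is a minimal sketch that sweeps `pm_evaluate` over several models, using only the flags shown above with placeholder paths:

```python
# Minimal sketch: sweeping pm_evaluate over several models from Python.
# Only flags from the CLI example above are used; all paths are placeholders.
# pm_batch is the built-in alternative for batch evaluation.
import subprocess
from pathlib import Path

for model in ["/path/to/model_a.pt", "/path/to/model_b.pt"]:
    out_fname = f"/path/to/results_{Path(model).stem}.csv"
    subprocess.run(
        [
            "pm_evaluate", "detection", "image",
            "--model_format", "torch",
            "--model", model,
            "--model_ontology", "/path/to/ontology.json",
            "--model_cfg", "/path/to/cfg.json",
            "--dataset_format", "coco",
            "--dataset_dir", "/path/to/coco/dataset",
            "--out_fname", out_fname,
        ],
        check=True,  # raise if an evaluation run fails
    )
```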
Our previous release, DetectionMetrics, introduced a versatile suite focused on object detection, supporting cross-framework evaluation and analysis. Cite our work if you use it in your research!

| 💻 Code | 📖 Docs | 🐋 Docker | 📰 Paper |
|---|---|---|---|
```bibtex
@article{PaniegoOSAssessment2022,
  author = {Paniego, Sergio and Sharma, Vinay and Cañas, José María},
  title = {Open Source Assessment of Deep Learning Visual Object Detection},
  journal = {Sensors},
  volume = {22},
  year = {2022},
  number = {12},
  article-number = {4575},
  url = {https://www.mdpi.com/1424-8220/22/12/4575},
  pubmedid = {35746357},
  issn = {1424-8220},
  doi = {10.3390/s22124575},
}
```
To make your first contribution, follow this Guide.
LiDAR segmentation support is built upon open-source work from Open3D-ML, mmdetection3d, SphereFormer, and LSK3DNet.

