Merged
4 changes: 2 additions & 2 deletions README.md
@@ -44,7 +44,7 @@ data-sampling approaches, typical RL terms and a benchmarking environment. Curre
This extension lets you load realistic terrains complete with rich semantic annotations, run fast traversability analysis, and render large batches of multi-modal data. It exposes three core modules:

- **Environment Importer** – load Matterport, Unreal/Carla, generated or USD terrains and expose all geometric / semantic domains → [Details](exts/nav_suite/docs/README.md#environment-importer)
- **Data Collectors** – sample trajectories, viewpoints and render multi-modal data from any imported world → [Details](exts/nav_suite/docs/README.md#data-collectors)
- **Data Collectors** – sample trajectories and render multi-modal sensor data from any imported world → [Details](exts/nav_suite/docs/README.md#data-collectors)
- **Terrain Analysis** – build traversability height-maps and graphs for path planning and curriculum tasks → [Details](exts/nav_suite/docs/README.md#traversabilty-analysis)

## `nav_tasks` Extension
@@ -177,7 +177,7 @@ Here we provide a set of examples that demonstrate how to use the different part
- [Import the Nvidia Warehouse Environment](scripts/nav_suite/terrains/warehouse_import.py)
- ``collector``
- [Sample Trajectories from Matterport](scripts/nav_suite/collector/matterport_trajectory_sampling.py)
- [Sample Viewpoints and Render Images from Carla (Unreal Engine)](scripts/nav_suite/collector/carla_viewpoint_sampling.py)
- [Sample Camera Viewpoints and Render Images from Carla (Unreal Engine)](scripts/nav_suite/collector/carla_sensor_data_sampling.py)

## Citing

2 changes: 1 addition & 1 deletion exts/nav_suite/config/extension.toml
@@ -1,7 +1,7 @@
[package]

# Note: Semantic Versioning is used: https://semver.org/
version = "0.2.4"
version = "0.2.5"

# Description
title = "IsaacLab Navigation Suite"
21 changes: 21 additions & 0 deletions exts/nav_suite/docs/CHANGELOG.rst
@@ -1,6 +1,27 @@
Changelog
---------


0.2.5 (2025-08-13)
~~~~~~~~~~~~~~~~~~

Added
^^^^^

- Added support for sampling additional sensor data (such as RayCasters) to :class:`nav_suite.collectors.SensorDataSampling`
- Added a RayCaster implementation in :class:`nav_suite.collectors.sensors.RayCasterSensor`

Changed
^^^^^^^

- Renamed :class:`nav_suite.collectors.ViewpointSampling` to :class:`nav_suite.collectors.SensorDataSampling` and
  :class:`nav_suite.collectors.ViewpointSamplingCfg` to :class:`nav_suite.collectors.SensorDataSamplingCfg`
- Sensor data extraction is now done in individual classes for each sensor type. The camera-data logic previously
  included in :class:`nav_suite.collectors.ViewpointSampling` has been extracted into
  :class:`nav_suite.collectors.sensors.CameraSensor`.


0.2.4 (2025-08-08)
~~~~~~~~~~~~~~~~~~

24 changes: 5 additions & 19 deletions exts/nav_suite/docs/README.md
@@ -92,35 +92,21 @@ This extensions allows to collect data from the previously loaded environments a

The trajectory sampling can be executed multiple times with different numbers of sampled trajectories as well as different minimum and maximum lengths.
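The repeated-sampling behaviour described above can be sketched in plain Python. This is an illustrative stand-in, not the `nav_suite` API — the function name, parameters, and the straight-line distance filter are all assumptions for the sketch:

```python
import math
import random

def sample_trajectories(nodes, num_samples, min_length, max_length, seed=0):
    """Pick random start/goal pairs whose straight-line distance lies
    within [min_length, max_length]. `nodes` is a list of (x, y) tuples."""
    rng = random.Random(seed)
    pairs = []
    while len(pairs) < num_samples:
        start, goal = rng.sample(nodes, 2)
        dist = math.dist(start, goal)
        if min_length <= dist <= max_length:
            pairs.append((start, goal, dist))
    return pairs

nodes = [(0, 0), (1, 0), (5, 0), (9, 0)]
# Re-run with different counts and length bounds, as the docs describe:
short = sample_trajectories(nodes, num_samples=3, min_length=0.5, max_length=2.0)
long = sample_trajectories(nodes, num_samples=3, min_length=4.0, max_length=10.0)
```

In the actual extension the distance would come from a path over the traversability graph rather than a straight line; the sketch only shows the resampling pattern.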

- `Viewpoint sampling and image rendering`:
- `Sensor data sampling and rendering`:

For the viewpoint sampling, the same terrain analysis as for the trajectory sampling is executed. The graph and traversability parameters are defined in the corresponding [config file](../nav_suite/terrain_analysis/terrain_analysis_cfg.py).
For the sensor data sampling, the same terrain analysis as for the trajectory sampling is executed. The graph and traversability parameters are defined in the corresponding [config file](../nav_suite/terrain_analysis/terrain_analysis_cfg.py).

**Important**: for the analysis to also cover the semantic domain, a semantic-class-to-cost mapping has to be defined in the config. By default, an example cost map for ``matterport`` environments is selected.

Each node of the graph is a possible viewpoint, with the orientation uniformly sampled between variable bounds. The exact parameters of the sampling can be defined [here](../nav_suite/collectors/viewpoint_sampling_cfg.py). You can define the ``module`` and the ``class`` of the parameters config that is used for the sampling. An example is provided that is optimized for the legged robot ANYmal and a matterport environment. Please note that this configuration assumes that two cameras are added, where the first one has access to semantic information and the second to geometric information.

The number of viewpoints that are sampled can be directly defined in the GUI. With the button ``Viewpoint Sampling``, the viewpoints are saved as ``camera_poses`` under the defined directory. Afterwards, click ``Viewpoint Rendering`` to get the final rendered images. The resulting folder structure is as follows:

``` graphql
cfg.data_dir
├── camera_poses.txt # format: x y z qw qx qy qz
├── cfg.depth_cam_name # required
| ├── intrinsics.txt # K-matrix (3x3)
| ├── distance_to_image_plane # annotator
| | ├── xxxx.png # images saved with 4 digits, e.g. 0000.png
├── cfg.depth_cam_name # optional
| ├── intrinsics.txt # K-matrix (3x3)
| ├── distance_to_image_plane # annotator
| | ├── xxxx.png # images saved with 4 digits, e.g. 0000.png
```
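The `camera_poses.txt` layout shown above (one pose per line, `x y z qw qx qy qz`) can be read with a short helper. A minimal sketch, assuming whitespace-separated floats and nothing beyond the documented format:

```python
def parse_camera_poses(text):
    """Parse poses in the 'x y z qw qx qy qz' layout, one per line."""
    poses = []
    for line in text.strip().splitlines():
        vals = [float(v) for v in line.split()]
        assert len(vals) == 7, "expected x y z qw qx qy qz"
        position, quaternion = vals[:3], vals[3:]  # (x, y, z), (qw, qx, qy, qz)
        poses.append((position, quaternion))
    return poses

sample = "1.0 2.0 0.5 1.0 0.0 0.0 0.0\n-3.5 0.0 1.2 0.707 0.0 0.707 0.0"
poses = parse_camera_poses(sample)
```

Note the scalar-first quaternion ordering (`qw` before `qx qy qz`), which is easy to get wrong when handing poses to libraries that expect scalar-last.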
Each node of the graph is a possible data sampling point, with the orientation uniformly sampled between variable bounds. The exact parameters of the sampling can be defined [here](../nav_suite/collectors/sensor_data_sampling_cfg.py). How the individual sensor data is treated is defined in individual sensor modules, e.g., [Camera](../nav_suite/collectors/sensors/camera_cfg.py) and [RayCaster](../nav_suite/collectors/sensors/raycaster_cfg.py).
An example is provided that is optimized for the legged robot ANYmal and a matterport environment. Please note that this configuration assumes that two cameras are added, where the first one has access to semantic information and the second to geometric information.
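The per-sensor design described above (one module per sensor type, with the sampling loop delegating extraction to each sensor) can be illustrated with a toy interface. The class and method names below are illustrative stand-ins only — they mirror, but are not, the actual `nav_suite.collectors.sensors` API:

```python
class SensorBase:
    """Common interface: each sensor type extracts its own data."""
    def extract(self, pose):
        raise NotImplementedError

class CameraSensor(SensorBase):
    def extract(self, pose):
        # A real implementation would render an image at the given pose.
        return {"type": "camera", "pose": pose, "data": "rgb_image"}

class RayCasterSensor(SensorBase):
    def extract(self, pose):
        # A real implementation would cast rays from the given pose.
        return {"type": "raycaster", "pose": pose, "data": "hit_distances"}

def sample_sensor_data(sensors, poses):
    """Sampling loop: every sensor extracts data at every sampled pose."""
    return [s.extract(p) for p in poses for s in sensors]

records = sample_sensor_data([CameraSensor(), RayCasterSensor()],
                             [(0.0, 0.0, 1.0), (2.0, 1.0, 1.0)])
```

The point of the design is that adding a new sensor type only requires a new subclass; the sampling loop itself stays unchanged.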

### Standalone scripts

Standalone scripts are provided to demonstrate the data collectors on different environments:

- [Sample Trajectories from Matterport](../../../scripts/nav_suite/collector/matterport_trajectory_sampling.py)
- [Sample Viewpoints and Render Images from Carla (Unreal Engine)](../../../scripts/nav_suite/collector/carla_viewpoint_sampling.py)
- [Sample Camera Viewpoints and Render Images from Carla (Unreal Engine)](../../../scripts/nav_suite/collector/carla_sensor_data_sampling.py)


> [!NOTE] **Matterport Sensors**: \
5 changes: 3 additions & 2 deletions exts/nav_suite/nav_suite/collectors/__init__.py
@@ -3,7 +3,8 @@
#
# SPDX-License-Identifier: Apache-2.0

from .sensor_data_sampling import SensorDataSampling
from .sensor_data_sampling_cfg import SensorDataSamplingCfg
from .sensors import * # noqa: F401, F403
from .trajectory_sampling import TrajectorySampling
from .trajectory_sampling_cfg import TrajectorySamplingCfg
from .viewpoint_sampling import ViewpointSampling
from .viewpoint_sampling_cfg import ViewpointSamplingCfg