
Conversation

@Kaweees
Member

@Kaweees Kaweees commented Jan 12, 2026

This pull request uses onnxruntime.get_available_providers to fetch the available ONNX Runtime Execution Providers, which will outperform the hard-coded default CPUExecutionProvider whenever another Execution Provider exists on the machine on which dimensional is run. See https://onnxruntime.ai/docs/execution-providers/ for more information.

@Kaweees Kaweees requested a review from a team January 12, 2026 23:52
@Kaweees Kaweees changed the base branch from main to dev January 12, 2026 23:52
@greptile-apps

greptile-apps bot commented Jan 12, 2026

Greptile Overview

Greptile Summary

This PR modernizes ONNX Runtime execution provider selection in the MuJoCo policy module by replacing the hardcoded CPUExecutionProvider with dynamic provider detection using ort.get_available_providers().

Key Changes:

  1. Import alias standardization: Changed from rt to ort (line 23), following the widely-adopted community convention
  2. Dynamic provider selection: Replaced providers=["CPUExecutionProvider"] with providers=ort.get_available_providers() (line 43), enabling automatic hardware acceleration
  3. Observability improvement: Added logging statement (line 44) that displays the loaded policy path and the actual execution providers selected by ONNX Runtime

How It Works:
The get_available_providers() function queries the installed ONNX Runtime package and returns a prioritized list of execution providers. For base installations, this returns ["CPUExecutionProvider"] (identical to the previous hardcoded behavior). For GPU-enabled installations (onnxruntime-gpu), it may return ["CUDAExecutionProvider", "CPUExecutionProvider"] or other accelerated providers like TensorrtExecutionProvider. ONNX Runtime's InferenceSession automatically attempts providers in order and falls back gracefully if initialization fails.

Benefits:

  • Performance: Enables automatic CUDA/TensorRT acceleration when available, potentially providing significant speedups for robotics control inference
  • Flexibility: Works correctly across different hardware configurations without code changes
  • Maintainability: Removes hardcoded assumptions and follows ONNX Runtime best practices
  • Transparency: New logging helps diagnose which execution provider is actually being used

Compatibility:
This change is fully backward compatible. On systems without GPU support, the behavior is identical to before (CPU execution). The change aligns with similar patterns in the codebase (e.g., sam_2d_seg.py checks for CUDA availability, and image_embedding.py uses provider lists with fallback logic).
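Where a caller wants to prefer one accelerator explicitly rather than pass everything get_available_providers() returns, a common pattern is an explicit preference list with a CPU fallback. This is a hedged sketch of that pattern ("policy.onnx" is a placeholder path), not the exact code in sam_2d_seg.py or image_embedding.py:

import onnxruntime as ort

def preferred_providers() -> list[str]:
    # Prefer CUDA when the installed onnxruntime build exposes it,
    # otherwise fall back to the always-available CPU provider.
    available = ort.get_available_providers()
    if "CUDAExecutionProvider" in available:
        return ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

session = ort.InferenceSession("policy.onnx", providers=preferred_providers())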

Confidence Score: 5/5

  • This PR is safe to merge with high confidence - it's a low-risk improvement following ONNX Runtime best practices
  • The change is minimal, well-understood, and follows the recommended ONNX Runtime approach. Using get_available_providers() is the standard pattern that enables automatic hardware acceleration while maintaining backward compatibility. The automatic fallback behavior in InferenceSession ensures that if preferred providers fail, CPU execution is used. The addition of logging improves observability without affecting functionality. No breaking changes or new dependencies are introduced.
  • No files require special attention

Important Files Changed

File Analysis

Filename: dimos/simulation/mujoco/policy.py
Score: 5/5
Overview: Import alias modernized to 'ort' and provider selection changed to use dynamic get_available_providers() for automatic hardware acceleration detection, with helpful logging added

Sequence Diagram

sequenceDiagram
    participant Model as load_model()
    participant Controller as OnnxController.__init__()
    participant ORT as onnxruntime
    participant Logger as logger
    participant Hardware as Hardware/Drivers

    Model->>Controller: Initialize with policy_path
    Controller->>ORT: get_available_providers()
    ORT->>Hardware: Check available execution providers
    Hardware-->>ORT: Return ["CUDAExecutionProvider", "CPUExecutionProvider"] or ["CPUExecutionProvider"]
    ORT-->>Controller: Available providers list
    
    Controller->>ORT: InferenceSession(policy_path, providers=available_providers)
    ORT->>Hardware: Attempt to initialize providers in order
    alt CUDA available and functional
        Hardware-->>ORT: CUDAExecutionProvider initialized
    else CUDA unavailable
        Hardware-->>ORT: Fallback to CPUExecutionProvider
    end
    ORT-->>Controller: Session created with active provider
    
    Controller->>ORT: get_providers()
    ORT-->>Controller: Actually used provider(s)
    Controller->>Logger: Log policy path and active providers
    Logger-->>Controller: Logged
    Controller-->>Model: Initialized OnnxController

@greptile-apps greptile-apps bot left a comment

1 file reviewed, 1 comment

@Kaweees Kaweees self-assigned this Jan 13, 2026
paul-nechifor previously approved these changes Jan 13, 2026
@Kaweees
Member Author

Kaweees commented Jan 13, 2026

@greptile

@greptile-apps greptile-apps bot left a comment

No files reviewed, no comments

@greptile-apps greptile-apps bot left a comment

1 file reviewed, 1 comment

) -> None:
    self._output_names = ["continuous_actions"]
-   self._policy = rt.InferenceSession(policy_path, providers=["CPUExecutionProvider"])
+   self._policy = ort.InferenceSession(policy_path, providers=ort.get_available_providers())

Consider adding logging to show which execution provider is actually being used after initialization. This would help with debugging and verifying that GPU acceleration is working when available.

Similar to the pattern used in dimos/agents_deprecated/memory/image_embedding.py:89-91:

self._policy = ort.InferenceSession(policy_path, providers=ort.get_available_providers())
actual_providers = self._policy.get_providers()
logger.info(f"Loaded policy with providers: {actual_providers}")

This is especially useful since get_available_providers() returns all available providers, but the InferenceSession may only successfully initialize with a subset of them.
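A small startup check along these lines makes that distinction visible (a sketch only; the warning wording is illustrative, and policy_path and logger are assumed to be in scope as in the snippet above):

available = ort.get_available_providers()
session = ort.InferenceSession(policy_path, providers=available)
active = session.get_providers()
# Warn if an installed provider did not actually initialize
# (e.g. CUDA is listed but the driver or GPU memory is unavailable).
skipped = [p for p in available if p not in active]
if skipped:
    logger.warning("Execution providers available but not used: %s", skipped)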

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

@greptile-apps greptile-apps bot left a comment

1 file reviewed, 1 comment

) -> None:
    self._output_names = ["continuous_actions"]
-   self._policy = rt.InferenceSession(policy_path, providers=["CPUExecutionProvider"])
+   self._policy = ort.InferenceSession(policy_path, providers=ort.get_available_providers())

The change from hardcoded ["CPUExecutionProvider"] to dynamic ort.get_available_providers() lacks error handling and observability. While the implementation is technically correct (contrary to the previous review thread), there are important considerations:

Issues:

  1. No logging: Users won't know which execution provider is actually being used, making debugging and performance analysis difficult
  2. No error handling: If provider initialization fails (e.g., CUDA provider available but GPU out of memory), there's no graceful fallback or informative error message
  3. Silent performance changes: Different providers have different performance characteristics and numerical precision - users should be informed which one is selected

Recommendation:
Add logging after initialization to show which provider was selected, similar to the pattern used in dimos/agents_deprecated/memory/image_embedding.py:89-91:

self._policy = ort.InferenceSession(policy_path, providers=ort.get_available_providers())
actual_providers = self._policy.get_providers()
logger.info(f"Initialized ONNX policy with providers: {actual_providers}")

This would help users understand which execution provider is being used and debug any performance or behavior differences.
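For the error-handling concern (issue 2 above), one option is to catch initialization failures and retry on CPU only. This is a hedged sketch, not a pattern currently in the codebase:

try:
    self._policy = ort.InferenceSession(policy_path, providers=ort.get_available_providers())
except Exception as exc:
    # Fall back to CPU-only execution if an accelerated provider fails to initialize.
    logger.warning("Accelerated provider init failed (%s); retrying on CPU", exc)
    self._policy = ort.InferenceSession(policy_path, providers=["CPUExecutionProvider"])
logger.info("Initialized ONNX policy with providers: %s", self._policy.get_providers())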

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

@Kaweees
Member Author

Kaweees commented Jan 13, 2026

@greptile

@greptile-apps greptile-apps bot left a comment

No files reviewed, no comments

@Kaweees Kaweees merged commit d945d46 into dev Jan 15, 2026
13 checks passed
@Kaweees Kaweees deleted the dynamic-session-providers branch January 15, 2026 02:36
Nabla7 pushed a commit that referenced this pull request Jan 19, 2026
* refactor(policy): update inference session initialization

* refactor(policy): simplify inference session provider initialization

* Log the policy directory and provider
Nabla7 added a commit that referenced this pull request Jan 20, 2026
* feat(sim): add MJLab G1 velocity policy profile

Introduce a 'mujoco_profile' concept allowing self-contained MuJoCo
simulation bundles (MJCF + ONNX policy + assets) to be loaded by name.

Key changes:
- GlobalConfig: new 'mujoco_profile' field (--mujoco-profile CLI flag)
- model.py: profile-scoped asset loading and bundle.json support
- mujoco_process.py: read camera names from bundle.json per profile
- policy.py: MjlabVelocityOnnxController reads joint_names,
  default_joint_pos, action_scale from ONNX metadata for exact
  MJLab action contract (per-joint scaling & named ordering)
- mujoco_connection.py: skip menagerie download when profile is set
- blueprints.py / rerun_init.py: gate Rerun init on rerun_enabled

Bundle added to data/.lfs/mujoco_sim.tar.gz (LFS-tracked):
  data/mujoco_sim/unitree_g1_mjlab/
    ├── model.xml      (MJLab-compiled G1 MJCF with correct actuators)
    ├── policy.onnx    (trained velocity policy with metadata)
    ├── bundle.json    (camera name mappings)
    └── assets/        (STL meshes from MJLab asset_zoo)

Usage:
  dimos --simulation \
        --mujoco-profile unitree_g1_mjlab \
        run unitree-g1-basic-sim

* CI code cleanup

* fix(sim): resolve meshdir/profile asset conflicts for GO2 and G1

- mujoco_process.py: only use mujoco_profile when explicitly set
  (fixes GO2 accidentally being treated as a profile bundle)
- model.py: rewrite scene XML to remove meshdir/texturedir attrs
  and prefix mesh/texture filenames explicitly, preventing scene
  compiler settings from hijacking robot mesh resolution

* configure unitree go2 mapper to use 10 cm voxels (#1032)

* feat(sim): add MuJoCo subprocess profiler for performance debugging

Adds a built-in timing breakdown for the MuJoCo simulation subprocess.
When enabled, logs rolling averages of time spent in each component:
- physics_ms: mj_step loop
- viewer_sync_ms: MuJoCo viewer synchronization
- rgb_render_ms, depth_render_ms: camera rendering
- pcd_ms: depth-to-pointcloud + voxel downsample
- *_shm_ms: shared memory writes
- ctrl_calls, ctrl_obs_ms, ctrl_onnx_ms: policy cost breakdown

This helps diagnose performance issues (e.g. 'molasses' effect) by
showing exactly where frame time is being spent.

Usage:
  # Standard G1 sim with profiler:
  dimos --simulation --mujoco-profiler --mujoco-profiler-interval-s 2 run unitree-g1-basic-sim

  # MJLab bundle with profiler:
  dimos --simulation --mujoco-profile unitree_g1_mjlab --mujoco-profiler --mujoco-profiler-interval-s 2 run unitree-g1-basic-sim

New GlobalConfig flags:
- mujoco_profiler: bool (default False)
- mujoco_profiler_interval_s: float (default 2.0)

* pre commit

* small docs clarification (#1043)

* Fix split view on wide monitors (#1048)

* fix print to be correct URL based on rerun web or not

* make width of rerun/command center adjustable

* swap sides

* Docs: Install & Develop  (#1022)

* minimal edit

* rice the readme

* grammar

* formatting

* fix examples

* change links to reduce change count

* improve wording

* wording

* remove acknowledgements

* improve the humancli example

* formatting

* Update README.md

* switch to dev branch for development

* changes for paul

* Update README.md

* fix broken link

* update broken link

* Add uv to nix and fix resulting problems (#1021)

* add uv to nix and fix resulting problems

* fix for linux

* v0.0.8 version update (#1050)

* Style changes in docs (#1051)

* capitalization

* punctuation

* more small fixes

* Revert "Add uv to nix and fix resulting problems (#1021)" (#1053)

This reverts commit 8af8f8f.

* Transport benchmarks & Raw ROS transport (#1038)

* raw rospubsub and benchmarks

* typefixes, shm added to the benchmark

* SHM is not so important to tell us every time when it starts

* greptile comments

* Add co-authorship line to commit message filter patterns

* Remove unused contextmanager import

* lcmservice correct kernel settings reintroduced

* mixin mixin resolved

* lcmservice tests fix

* macos lcm rmem fix

* feat: default to rerun-web and auto-open browser on startup (#1019)

- Changed GlobalConfig.viewer_backend default from rerun-native to rerun-web
- WebsocketVisModule now opens dashboard in browser automatically on start
- Requested by Jeff

Co-authored-by: s <pomichterstash@gmail.com>

* chore: fix indentation in blueprints ambiguity check

* CI code cleanup

* use p controller to stop oscillations on unitree go2 (#1014)

* Dynamic session providers for onnxruntime (#983)

* refactor(policy): update inference session initialization

* refactor(policy): simplify inference session provider initialization

* Log the policy directory and provider

* Perception Full Refactor and Cleanup, deprecated Manipulation AIO Pipeline and replaced with Object Scene Registration  (#936)

* added rate limiting and backpressure to pointcloud publishing

CI code cleanup

updated ZED module to the same standard as realsense

CI code cleanup

fixed stash's comments

CI code cleanup

mypy fixes + comments

removed property of camera_info

should pass CI now

added detection3d pointcloud types from depth image

added yoloe support and 3D object segmentation

CI code cleanup

use yoloe-s instead for nuc

CI code cleanup

removed deprecated perception code

some pointcloud color changes

major refactor and added object class for object scene registration

CI code cleanup

refactored, added objectDB for persistent object memory

CI code cleanup

made objectDB a normal class instead of a module

CI code cleanup

revert to dev

reverted more files

CI code cleanup

completely refactored object scene registration to work natively in dimos instead of using ROS as transport. Made everything super clean and working

CI code cleanup

bug fixed + use yoloe-l by default

added yolo object exclusion list

CI code cleanup

added zed camera to the object registration demo

CI code cleanup

added image and pointcloud2 fixes and as_numpy function

working promptable object scene registration

CI code cleanup

bug fixes

bug fix + remove ros imports

should not fail CI now

CI code cleanup

more CI fixes, somehow local CI did not catch

changed prompt fixed bug

CI code cleanup

reverted some changes

Cleanup very dead code and fixed mypy errors

CI code cleanup

fixed more mypy

CI code cleanup

* one last mypy fix

* added default to imagedetection2d to not set off mypy

* fixed bug and default to open vocab for detection

* mypy fixes

* fixed one last mypy error

* fixed all of Stash's comments

* should pass mypy now

* added uv lock

* sync uv.lock with dev

* fixed the last mypy error

* fixed mypy errors from source

* reverted mypy import error fixes

* fixed Ivan's comment

* fixed last of ivan's comment

* remove all to_ros_msgs stuff in this commit

* passed Ivan's detector tests

* added README for depth camera integration

* fixed last of Stash's comments

* feat(cli): type-free topic echo via /topic#pkg.Msg inference, this mi… (#988)

* feat(cli): type-free topic echo via /topic#pkg.Msg inference, this mirrors ros topic echo functionality.

- Make type_name optional in 'dimos topic echo'
- Infer message type from LCM channel suffix (e.g. /odom#nav_msgs.Odometry)
- Dynamically import dimos.msgs.<pkg> and call cls.lcm_decode(data)
- Keep existing explicit-type mode working
- Update transports.md docs

* fix(cli): use LCMPubSubBase instead of raw lcm.LCM for topic echo, my bad

* verify blueprints (#1018)

* verify blueprints

* Fix geometry msgs check failure in CI

---------

Co-authored-by: stash <pomichterstash@gmail.com>

* Experimental Streamed Temporal Memory with SpatioTemporal & Entity based RAG (#973)

* temporal memory + vlm agent + blueprints

* fixing module issue and style

* fix skill registration

* removing state functions unpickable

* inheritancefixes and memory management

* docstring for query

* microcommit: fixing memory buffer

* sharpness filter and simplified frame filtering

* CI code cleanup

* initial graph database implementation

* db implementation, working and stylized, best reply is unitree_go2_office_walk2

* type checking issues

* final edits, move into experimental, revert non-memory code edits, typechecking

* persistent db flag enabled in config

* Fix test to not run in CI due to LFS pull

* Fix CLIP filter to use dimensional clip

* Add path to temporal memory

* revert video operators

* Revert moondream

* added temporal memory docs

* Refactor move to /experimental/temporal_memory

---------

Co-authored-by: Paul Nechifor <paul@nechifor.net>
Co-authored-by: Stash Pomichter <pomichterstash@gmail.com>
Co-authored-by: shreyasrajesh0308 <shreyasrajesh0308@users.noreply.github.com>
Co-authored-by: spomichter <12108168+spomichter@users.noreply.github.com>

* Control Orchestrator - Unified Controller for multi-arm and full body controller (#970)

* archive old driver to manipulators_old for redesign

* spec.py defining minimal protocol for an arm driver

* xarm driver added - driver owns control thread and robot state threads, and invokes rpc calls to arm specific SDK backends

* xarm SDK specific wrapper to interface with dimos RPC calls from the driver

* removed type checking for  old armdriver spec from the cartesian controller

* replicated piper driver to meet the new architecture

* added mock backend

* updated all blueprints to add new arm module

* Added readme explaining new driver architecture overview

* config now parsed in backend init instead of connect method

* addded dual arm control blueprint using trajectory controller

* adding a control orchestrator for single control loop for multiple arms and joint control -  added dataclasses for orchestrator and protocol for ControlTask

* hardware interface protocol that wraps specific arm SDK to work with orchestrator. Also solves namespace for multiple arm and hardware

* main orchestrator module and control loop that claims resources computes next commands,  and arbitrates priority of different tasks and controllers

* added a trajectory task implementation that performs trajectory control

* added blueprints to launch orchestrator module with different arms for testing

* updated blueprints to add piper + xarm blueprint

* orchestrator client that can send tasks to the control orchestrator module

* added a readme

* added pytest and e2e test

* Update dimos/control/hardware_interface.py

explicit false added to the Torque Mode command sent, to avoid silent failing scenario

* CI code cleanup

* Fixed issues flagged by greptile

Mode conflict detection in routing: Added check in _route_to_hardware
Preemption tracking: Changed structure to {preempted_task: {joint: winning_task}}
Mode conflict preemption: Tasks dropped due to mode conflict at same priority
Trajectory completion edge case: Returns final position instead of None on completion
Dead code removal: and Piper backend cleanup

* Renamed deprecated old manipulation test file and Mypy type fixes

* fix mypy test

* mypy test fix added explicit  type

* Remove deprecated manipulators_old folder

* fixed redef error in dual trajectory setter

* Fixed bugs identified by greptile overview:
1. tick_loop.py - Race condition in _route_to_hardware
2. orchestrator.py  -  Added hardware_added tracking list and rollback in outer except block
3. hardware_interface.py - Added disconnect() to both HardwareInterface protocol and BackendHardwareInterface
4. Added disconnect() to both HardwareInterface protocol and BackendHardwareInterface
5. orchestrator.py - Start order fix Moved super().start() to end, after tick loop starts successfully
6. trajectory_task.py - Added Empty joint_names validation

* addressed greptile suggestion:
hardware_interface.py - Torque mode logging fix
orchestrator.py - Fail hardware removal if joints in use
tick_loop.py - Rate control drift fix

* undo change to pyproject.toml

* Replaced _running bool with threading.Event (_stop_event) for thread safety
Removed duplicate _auto_start() call from __init__ - connection now only happens in start()
orchestrator_client.py	IPython conversion

* added type ignore for ipython

* removed check for has attribute in hardware interface

Moved super.start() at the beginning

replaced running bool with stop_event in tick_loop to improve thread safety

removed default ip from init

removed simple dataclasses test

* orchestrator.py: Use match statement for backend factory, restructure backend cleanup
task.py: Use match statement in get_values()
tick_loop.py: Add JointWinner NamedTuple for cleaner arbitration logic
xarm/backend.py: Extract unit conversions into static helper methods

* tick_loop.py: Notify preemption when lower-priority task loses to existing winner
hardware_interface.py: Call set_control_mode() before mode-specific writes, Convert if/elif to match statement for control mode dispatch

* tick_loop.py: Notify preemption when lower-priority task loses to existing winner
hardware_interface.py: Call set_control_mode() before mode-specific writes, convert if/elif to match statement
trajectory_task.py: Defer start time to first compute() for consistent timing
orchestrator.py: Extract _setup_hardware() helper for cleaner config setup
piper and xarm/backend.py: Fail fast on read_joint_positions(), map SERVO_POSITION to mode 1
hardware_interface.py: Retry initialization with proper error propagation,
spec.py: Add SERVO_POSITION control mode for confusion between position planning and position servo
task.py: added SERVO_POSITION to JointCommandOutput helper

* cleaned up legacy blueprints for manipulator drivers

* enforce ManipulatorBackend Protocol on the backend.py

* feat: add runtime protocol checks for manipulator backends

* added runtime checking for controlTask protocol

* Add TaskStatus dataclass, refactor get_trajectory_status and Explicitly inherit from ControlTask protocol

---------

Co-authored-by: stash <pomichterstash@gmail.com>

* configure unitree go2 mapper to use 10 cm voxels (#1032)

* Create DDSPubSubBase, DDSTopic

* Create PickleDDS

* Fix hash/equality inconsistency in DDSTopic

* Add DDSMsg

* Create DDSTransport

* Add broadcast and subscribe methods to DDSTransport

* Create DDSService

* Add CycloneDDS package

* Remove unnecessary attributes

* Add threading and serialization methods to DDSService

* Ensure broadcast and subscribe methods initialize DDS if not started

* Add Transport benchmarking capabilities to CycloneDDS (#1055)

* raw rospubsub and benchmarks

* typefixes, shm added to the benchmark

* SHM is not so important to tell us every time when it starts

* greptile comments

* Add co-authorship line to commit message filter patterns

* Remove unused contextmanager import

---------

Co-authored-by: Ivan Nikolic <lesh@sysphere.org>

* Fix DDS segmentation fault using bytearray for binary data storage

Replace base64 string encoding with native IDL bytearray type to eliminate
buffer overflow issues. The original base64 encoding exceeded CycloneDDS's
default string size limit (~256 bytes) and caused crashes on messages >= 1KB.

Key changes:
- Use make_idl_struct with bytearray field instead of string
- Convert bytes to bytearray when publishing to DDS
- Convert bytearray back to bytes when receiving from DDS
- Add _DDSMessageListener for async message dispatch
- Implement thread-safe DataWriter/DataReader management
- Add pickle support via __getstate__/__setstate__

Result: All 12 DDS benchmark tests pass (64B to 10MB messages).

* Refactor DDS PubSub implementation to use CycloneDDS Topic

* Remove DDS pickling

* CI code cleanup

* bugfix

* CI code cleanup

---------

Co-authored-by: leshy <lesh@sysphere.org>
Co-authored-by: Jeff Hykin <jeff.hykin@gmail.com>
Co-authored-by: Paul Nechifor <paul@nechifor.net>
Co-authored-by: s <pomichterstash@gmail.com>
Co-authored-by: Miguel Villa Floran <miguel.villafloran@gmail.com>
Co-authored-by: alexlin2 <44330195+alexlin2@users.noreply.github.com>
Co-authored-by: claire wang <clara32356@gmail.com>
Co-authored-by: shreyasrajesh0308 <shreyasrajesh0308@users.noreply.github.com>
Co-authored-by: spomichter <12108168+spomichter@users.noreply.github.com>
Co-authored-by: Mustafa Bhadsorawala <39084056+mustafab0@users.noreply.github.com>
