
onnxruntime with CUDAExecutionProvider crashes: gather_nd.cc:30 CheckBatchDimensionsMatch Batch dimensions differ at index 0: 1 != 3, tensor indices: 0, 1 #25053

Description

Describe the issue

For the following onnx model,

[model graph image]

onnxruntime with the CPUExecutionProvider runs this model correctly; the results are as follows:

ONNXRuntime:
 [array([[0.62514794],
       [0.06079907],
       [1.        ]], dtype=float32)]

However, when I run this model with the CUDAExecutionProvider, onnxruntime fails:

File "/home/carla/anaconda3/envs/onnruntime-gpu/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 273, in run
    return self._sess.run(output_names, input_feed, run_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running GatherND node. Name:'GatherND' Status Message: gather_nd.cc:30 CheckBatchDimensionsMatch Batch dimensions differ at index 0: 1 != 3, tensor indices: 0, 1
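
For context: per the ONNX spec, GatherND with batch_dims = b requires the first b dimensions of data and indices to be identical, which is exactly what CheckBatchDimensionsMatch enforces. Since the CPUExecutionProvider accepts the same model and inputs, the two providers apparently see different shapes at this node. Below is a minimal NumPy sketch of the GatherND semantics (my own reference, not onnxruntime's code; gather_nd_reference is a made-up name) that reproduces the shape relationship in the error message:

import numpy as np

def gather_nd_reference(data, indices, batch_dims=0):
    # The check that fails in gather_nd.cc:30 -- the first `batch_dims`
    # dimensions of `data` and `indices` must be identical.
    if data.shape[:batch_dims] != indices.shape[:batch_dims]:
        raise ValueError(
            f"Batch dimensions differ: {data.shape[:batch_dims]} "
            f"!= {indices.shape[:batch_dims]}"
        )
    batch_shape = data.shape[:batch_dims]
    k = indices.shape[-1]  # length of each index tuple
    outs = []
    for b in np.ndindex(*batch_shape):  # iterate over the shared batch dims
        d, idx = data[b], indices[b]
        flat = idx.reshape(-1, k)
        gathered = np.stack([d[tuple(i)] for i in flat])
        outs.append(gathered.reshape(idx.shape[:-1] + d.shape[k:]))
    return np.stack(outs).reshape(batch_shape + outs[0].shape)

# Matching batch dims work as expected: output shape (2, 2).
ok = gather_nd_reference(
    np.arange(8, dtype=np.float32).reshape(2, 2, 2),
    np.array([[1], [0]], dtype=np.int64),
    batch_dims=1,
)
print(ok)  # [[2. 3.] [4. 5.]]

# Mirrors the "1 != 3" in the error: data batch dim 1, indices batch dim 3.
try:
    gather_nd_reference(
        np.zeros((1, 4, 4), dtype=np.float32),
        np.zeros((3, 2), dtype=np.int64),
        batch_dims=1,
    )
except ValueError as e:
    print(e)  # Batch dimensions differ: (1,) != (3,)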

To reproduce

Environment

OS: Ubuntu 20.04
onnxruntime: 1.23.0.dev20250515001
CUDA: cuda-12.2.2::cuda-toolkit
CUDNN: 9.1.1.17
NVIDIA GPU: GeForce RTX 3080
NVIDIA Driver Version: 535.183.01
Python Version: 3.12.9
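
If it helps, the build and provider availability can be double-checked with onnxruntime's standard Python API (a suggested sanity check, not something from the original report):

# Confirm which onnxruntime build is installed and which providers it offers.
import onnxruntime as ort

print(ort.__version__)                # 1.23.0.dev20250515001 per the list above
print(ort.get_device())               # "GPU" for a CUDA-enabled build
print(ort.get_available_providers())  # should include "CUDAExecutionProvider"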

Steps to reproduce

This bug can be reproduced with the following code, using the model and inputs from the attached testcase.zip.

import pickle

import onnx
import onnxruntime

def test():
    # Load the model from the attachment and report its opset version.
    onnx_model = onnx.load("222.onnx")
    print(onnx_model.opset_import[0].version)

    # The pickled input feed is also in the attachment.
    with open("inputs.pkl", "rb") as fp:
        inputs = pickle.load(fp)

    # CPUExecutionProvider: runs correctly and prints the expected result.
    ort_session = onnxruntime.InferenceSession(
        onnx_model.SerializeToString(), providers=["CPUExecutionProvider"]
    )
    ort_output = ort_session.run(None, inputs)
    print("ONNXRuntime:\n", ort_output)

    # CUDAExecutionProvider: fails in the GatherND node with the error above.
    ort_session = onnxruntime.InferenceSession(
        onnx_model.SerializeToString(), providers=["CUDAExecutionProvider"]
    )
    ort_output = ort_session.run(None, inputs)
    print("ONNXRuntime:\n", ort_output)

if __name__ == "__main__":
    test()
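
As an additional triage aid (not part of the original repro): InferenceSession.get_providers() reports which providers the session actually bound, since onnxruntime can fall back to the CPU EP when CUDA fails to initialize. Verifying the CUDA session is really on the CUDA EP rules out a fallback-related red herring:

# Suggested check: confirm the CUDA session actually bound the CUDA EP
# instead of silently falling back to CPU.
ort_session = onnxruntime.InferenceSession(
    onnx_model.SerializeToString(), providers=["CUDAExecutionProvider"]
)
print(ort_session.get_providers())
# Expected: ['CUDAExecutionProvider', 'CPUExecutionProvider']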

testcase.zip

Urgency

No response

Platform

Linux

OS Version

Ubuntu 20.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.23.0.dev20250515001

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU, CUDA

Execution Provider Library Version

CUDA: cuda-12.2.2::cuda-toolkit CUDNN: 9.1.1.17
