
onnxruntime with the CPUExecutionProvider errors out while processing the ReverseSequence operator #24920

@coffezhou


Describe the issue

The following ONNX model (its graph is shown in the attached image) executes successfully with the CUDAExecutionProvider. The outputs are as follows:

[array([[-2.7182608],
       [ 0.       ],
       [-4.6765337]], dtype=float32)]

However, when I run it with the CPUExecutionProvider, onnxruntime fails with the following error:

2025-06-01 19:53:44.177894133 [E:onnxruntime:, sequential_executor.cc:572 ExecuteKernel] Non-zero status code returned while running ReverseSequence node. Name:'ReverseSequenceNode' Status Message: Invalid sequence length: 2. Value must be in range [0,1]
Traceback (most recent call last):
  File "/home/carla/Documents/test/test.py", line 33, in <module>
    test()
  File "/home/carla/Documents/test/test.py", line 30, in test
    cpu_ort_output = cpu_ort_session.run([], inputs) 
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/carla/anaconda3/envs/onnruntime-gpu/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 273, in run
    return self._sess.run(output_names, input_feed, run_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running ReverseSequence node. Name:'ReverseSequenceNode' Status Message: Invalid sequence length: 2. Value must be in range [0,1]
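For context, the same range check can be hit with a much smaller model. The sketch below is hypothetical (its shapes, values, and opset are not taken from the attached 1.onnx): it builds a single ReverseSequence node and feeds a sequence_lens entry larger than the time-axis length, which on the CPUExecutionProvider should trigger the same "Invalid sequence length" failure:

import numpy as np
import onnx
from onnx import TensorProto, helper
import onnxruntime

# Hypothetical shapes: time_axis=0 has length 1, batch_axis=1 has length 3.
x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 3])
lens = helper.make_tensor_value_info("lens", TensorProto.INT64, [3])
y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 3])

node = helper.make_node(
    "ReverseSequence", ["x", "lens"], ["y"],
    name="ReverseSequenceNode", batch_axis=1, time_axis=0,
)
graph = helper.make_graph([node], "reverse_sequence_repro", [x, lens], [y])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])
onnx.checker.check_model(model)

feeds = {
    "x": np.zeros((1, 3), dtype=np.float32),
    # 2 exceeds the time-axis length of 1, so the CPU kernel's range check should fire.
    "lens": np.array([2, 1, 1], dtype=np.int64),
}
sess = onnxruntime.InferenceSession(
    model.SerializeToString(), providers=["CPUExecutionProvider"]
)
sess.run(None, feeds)  # expected: "Invalid sequence length: 2. Value must be in range [0,1]"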

To reproduce

Environment

OS: Ubuntu 20.04
onnxruntime: 1.23.0.dev20250515001
CUDA: cuda-12.2.2::cuda-toolkit
CUDNN: 9.1.1.17
NVIDIA GPU: GeForce RTX 3080
NVIDIA Driver Version: 535.183.01
Python Version: 3.12.9

Steps to reproduce

This bug can be reproduced with the following code and the model from the attached testcase.zip.

import pickle

import onnx
import onnxruntime


def test():
    onnx_model = onnx.load("1.onnx")

    # Input feeds (numpy arrays) saved with pickle.
    with open("inputs.pkl", "rb") as fp:
        inputs = pickle.load(fp)

    sess_options = onnxruntime.SessionOptions()

    # Runs successfully on the CUDA execution provider.
    gpu_ort_session = onnxruntime.InferenceSession(
        onnx_model.SerializeToString(), sess_options, providers=["CUDAExecutionProvider"]
    )
    gpu_ort_output = gpu_ort_session.run([], inputs)
    print(gpu_ort_output)

    # Fails on the CPU execution provider with the ReverseSequence error.
    cpu_ort_session = onnxruntime.InferenceSession(
        onnx_model.SerializeToString(), sess_options, providers=["CPUExecutionProvider"]
    )
    cpu_ort_output = cpu_ort_session.run([], inputs)


if __name__ == "__main__":
    test()

testcase.zip
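
To pin down which fed sequence length is out of range relative to the model's time axis, an inspection helper along these lines can be run against the attachment (this is only a sketch and assumes inputs.pkl holds a name-to-array dict, as in the reproduction script):

import pickle

import onnx

model = onnx.load("1.onnx")
for node in model.graph.node:
    if node.op_type == "ReverseSequence":
        # Print batch_axis/time_axis and the node's input names.
        attrs = {a.name: onnx.helper.get_attribute_value(a) for a in node.attribute}
        print(node.name, attrs, "inputs:", list(node.input))

with open("inputs.pkl", "rb") as fp:
    inputs = pickle.load(fp)
for name, value in inputs.items():
    print(name, getattr(value, "shape", None), getattr(value, "dtype", None))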

Urgency

No response

Platform

Linux

OS Version

Ubuntu 20.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.23.0.dev20250515001

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA, Default CPU

Execution Provider Library Version

cuda-12.2.2::cuda-toolkit, cudnn-9.1.1.17
