
When casting a float tensor to uint32, different results are produced by the CPUExecutionProvider and CUDAExecutionProvider #25207


Description


Describe the issue

For the following onnx model,

[image: screenshot of the ONNX model graph]
the results are different for the CPUExecutionProvider and CUDAExecutionProvider.
For the CPUExecutionProvider, the results are as follows:

CPU_ONNXRuntime:
 [array([[[[         0,          0,          0,          0],
         [4294967295, 4294967295, 4294967295, 4294967295],
         [         0,          0,          1, 4294967295],
         [         1,          0,          0,          0]],

        [[         0,          1, 4294967295,          0],
         [         0, 4294967295, 4294967295,          0],
         [         0,          1,          0,          1],
         [         0,          0,          0,          1]],

        [[         1, 4294967295,          0,          1],
         [4294967295, 4294967295, 4294967295,          1],
         [         0,          0,          0, 4294967295],
         [         1,          0,          0,          0]]]], dtype=uint32)]

while the results for the CUDAExecutionProvider are as follows:

CUDA_ONNXRuntime:
 [array([[[[0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 1, 0],
         [1, 0, 0, 0]],

        [[0, 1, 0, 0],
         [0, 0, 0, 0],
         [0, 1, 0, 1],
         [0, 0, 0, 1]],

        [[1, 0, 0, 1],
         [0, 0, 0, 1],
         [0, 0, 0, 0],
         [1, 0, 0, 0]]]], dtype=uint32)]

In the above results, the output of the CPUExecutionProvider contains 4294967295, which may be incorrect.

I also compared the results produced by the MeanVarianceNormalization operator; the CPUExecutionProvider and the CUDAExecutionProvider both produce the following results:

 [array([[[[-0.0890839 , -0.136558  , -0.95021546, -0.7001586 ],
         [-1.1933007 , -1.094313  , -1.0639392 , -1.0248872 ],
         [ 0.95923764,  0.8538164 ,  1.739659  , -1.0859296 ],
         [ 1.4104558 ,  0.9748865 ,  0.67383343,  0.7264987 ]],

        [[ 0.9273485 ,  1.0028956 , -1.3699627 ,  0.57715166],
         [-0.70073557, -1.1512839 , -1.0428492 , -0.82170874],
         [ 0.288313  ,  1.4511124 , -0.66509026,  1.2648642 ],
         [ 0.5702176 , -0.8179674 , -0.98107314,  1.4687742 ]],

        [[ 1.0109633 , -1.0802593 , -0.43351302,  1.198949  ],
         [-1.0333213 , -1.2331047 , -1.0896473 ,  1.295364  ],
         [-0.65188074, -0.2813885 ,  0.59543145, -1.2329993 ],
         [ 1.8418806 ,  0.7884036 ,  0.5349041 , -0.22978288]]]],
      dtype=float32)]

So I suspect that the issue is caused by the Cast operator.
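For what it is worth, the two outputs above look consistent with two different conventions for converting negative floats to uint32: wrap-around (modulo 2^32) on the CPU and saturation to 0 on CUDA. This is only my reading of the numbers, not a claim about how the kernels are actually implemented. A small NumPy sketch (not ONNX Runtime code) that reproduces both patterns from the MVN values above:

import numpy as np

# Values taken from the second row of the first channel of the MVN output above.
mvn_vals = np.array([-1.1933007, -1.094313, -1.0639392, -1.0248872], dtype=np.float32)

# Wrap-around semantics: truncate toward zero, then take the value modulo 2**32.
# -1.19 -> -1 -> 4294967295, which matches the CPUExecutionProvider output.
wrapped = (np.trunc(mvn_vals).astype(np.int64) % (1 << 32)).astype(np.uint32)
print(wrapped)     # [4294967295 4294967295 4294967295 4294967295]

# Saturating semantics: clamp negative values to 0 before converting,
# which matches the CUDAExecutionProvider output.
saturated = np.clip(np.trunc(mvn_vals), 0, np.iinfo(np.uint32).max).astype(np.uint32)
print(saturated)   # [0 0 0 0]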

To reproduce

Environment

OS: Ubuntu 20.04
onnxruntime: 1.23.0.dev20250626001
CUDA: cuda-12.2.2::cuda-toolkit
CUDNN: 9.1.1.17
NVIDIA GPU: GeForce RTX 3080
NVIDIA Driver Version: 535.183.01
Python Version: 3.12.9

Steps to reproduce

This bug can be reproduced with the following code and the model in the attachment.

import pickle

import numpy as np
import onnx
import onnxruntime

def test():
    onnx_model = onnx.load("222.onnx")
    print(onnx_model.opset_import[0].version)

    # The recorded input tensors for the model.
    with open("inputs.pkl", "rb") as fp:
        inputs = pickle.load(fp)

    # Run the model on the CPU execution provider.
    ort_session = onnxruntime.InferenceSession(
        onnx_model.SerializeToString(), providers=["CPUExecutionProvider"]
    )
    cpu_ort_output = ort_session.run(None, inputs)

    print("CPU_ONNXRuntime:\n", cpu_ort_output)

    # --------------------------------------------

    # Run the same model with the same inputs on the CUDA execution provider.
    ort_session = onnxruntime.InferenceSession(
        onnx_model.SerializeToString(), providers=["CUDAExecutionProvider"]
    )
    cuda_ort_output = ort_session.run(None, inputs)

    print("CUDA_ONNXRuntime:\n", cuda_ort_output)

    # The two providers are expected to agree within a loose tolerance.
    np.testing.assert_allclose(cuda_ort_output[0], cpu_ort_output[0], rtol=0.1, atol=0.1)

if __name__ == "__main__":
    test()

testcase.zip
In testcase.zip, '333.onnx' is reduced from '222.onnx' by removing the Cast operator.
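
In case the attachment is not accessible, the divergence should also be reproducible with a standalone model that contains nothing but a float-to-uint32 Cast. The sketch below is hypothetical: the shape, opset version, and input values are my own choices rather than the exact contents of '222.onnx'.

import numpy as np
import onnx
import onnxruntime
from onnx import TensorProto, helper

# Build a one-node graph: y = Cast(x, to=uint32).
node = helper.make_node("Cast", inputs=["x"], outputs=["y"], to=TensorProto.UINT32)
graph = helper.make_graph(
    [node],
    "cast_only",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [4])],
    [helper.make_tensor_value_info("y", TensorProto.UINT32, [4])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 18)])
onnx.checker.check_model(model)

# A mix of negative and positive values, similar to the MVN output above.
x = np.array([-1.1933007, -0.0890839, 0.9592376, 1.7396590], dtype=np.float32)
for provider in ["CPUExecutionProvider", "CUDAExecutionProvider"]:
    sess = onnxruntime.InferenceSession(model.SerializeToString(), providers=[provider])
    print(provider, sess.run(None, {"x": x})[0])

If the divergence is indeed in the Cast kernels, this should print 4294967295 for the negative input on CPU and 0 on CUDA, mirroring the outputs above.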

Urgency

No response

Platform

Linux

OS Version

Ubuntu 20.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.23.0.dev20250626001

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU, CUDA

Execution Provider Library Version

cuda-12.2.2::cuda-toolkit CUDNN: 9.1.1.17
