
[Bug?] Casting int8-->float #15492

Open
pauldog opened this issue Apr 13, 2023 · 1 comment
Labels
ep:DML issues related to the DirectML execution provider platform:windows issues related to the Windows platform

Comments


pauldog commented Apr 13, 2023

Describe the issue

I am getting an unexpected result from the following script:

import torch
import torch.onnx as onnx

class ByteConverter(torch.nn.Module):
    def __init__(self):
        super(ByteConverter, self).__init__()

    def forward(self, input_array, scale, offset):
        # Reinterpret the uint8 input as int8 (values >= 128 wrap to negative)
        input_tensor = input_array.to(torch.int8)

        # Apply scale and offset to input tensor
        input_tensor = (input_tensor.to(torch.float32) - offset) * scale

        # Convert input tensor to float16
        output_tensor = input_tensor.to(torch.float16)

        return output_tensor

# Instantiate the converter (scale and offset are passed at call time)
model = ByteConverter()


# Define the input tensor
input_array = torch.tensor([1, 50, 100, 150, 200], dtype=torch.uint8)

# Define the scale and offset tensors (float, to match the export signature below)
scale = torch.tensor([1.0])
offset = torch.tensor([0.0])

# Run the model with the input tensor, scale, and offset
output_tensor = model(input_array, scale, offset)

# Print the output tensor
print(output_tensor)


# Export the model to ONNX
input_array = torch.zeros((10), dtype=torch.uint8)  # Example input array
scale = torch.zeros((1), dtype=torch.float32)
offset = torch.zeros((1), dtype=torch.float32)
dynamic_axes = {"input_array": {0: "size"}}  # Specify dynamic axes for input_array
torch.onnx.export(
    model, 
    (input_array, scale, offset), 
    "model.onnx", 
    opset_version=14,
    input_names=["input_array", "scale", "offset"], 
    output_names=["output_tensor"], 
    dynamic_axes=dynamic_axes)

When I run this in ONNX Runtime with the byte tensor new byte[] { 3, 5, 100, 150, 200 }, scale = 1 and offset = 0,

I expect a float16 array of [ 3., 5., 100., -106., -56. ], since bytes >= 128 should wrap to negative int8 values.

Instead I get the result [ 3., 5., 100., 150., 200. ].

The expected behaviour is that a byte such as 150 is reinterpreted as the int8 value -106, which then casts to the float -106.0.
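For reference, the two's-complement reinterpretation expected here can be checked directly in PyTorch, independent of the ONNX export (a minimal sketch):

```python
import torch

# Bytes >= 128 should wrap to negative values when reinterpreted as int8:
# 150 -> 150 - 256 = -106, 200 -> 200 - 256 = -56.
bytes_in = torch.tensor([3, 5, 100, 150, 200], dtype=torch.uint8)
as_int8 = bytes_in.to(torch.int8)
print(as_int8.to(torch.float16))  # expected: [3., 5., 100., -106., -56.]
```

This is the behaviour of the Cast op in eager PyTorch; the issue is that the exported ONNX model does not reproduce it.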

To reproduce

Run the Python script above to build the ONNX model, then feed in a byte tensor with scale = 1 and offset = 0.

Same results in both CPU and DML mode.

The graph seems fine:

[screenshot of the exported ONNX graph]

Urgency

No response

Platform

Windows

OS Version

Windows 10

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.15

ONNX Runtime API

C#

Architecture

X64

Execution Provider

DirectML

Execution Provider Library Version

DirectML

@github-actions github-actions bot added ep:DML issues related to the DirectML execution provider platform:windows issues related to the Windows platform labels Apr 13, 2023

pauldog commented Apr 13, 2023

As a workaround I am doing the conversion manually:

        # Widen to int16 first, then apply the two's-complement wraparound by hand
        input_tensor = input_array.to(torch.int16)
        input_tensor = input_tensor - (input_tensor >= 128) * 256

instead of .to(torch.int8).
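The subtract-256 arithmetic in this workaround should agree with a direct int8 cast for every possible byte value; a quick sanity check in stock PyTorch (no ONNX involved):

```python
import torch

# All 256 possible byte values
x = torch.arange(0, 256, dtype=torch.uint8)

# Direct cast (the op that misbehaves in the exported model)
direct = x.to(torch.int8).to(torch.float32)

# Manual two's-complement wraparound via int16, as in the workaround
t = x.to(torch.int16)
manual = (t - (t >= 128) * 256).to(torch.float32)

print(torch.equal(direct, manual))  # expected: True
```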
