Describe the issue
I am getting a weird result using:
```python
import torch


class ByteConverter(torch.nn.Module):
    def __init__(self):
        super(ByteConverter, self).__init__()

    def forward(self, input_array, scale, offset):
        # Reinterpret the uint8 input as int8 (values >= 128 should wrap to negative)
        input_tensor = input_array.to(torch.int8)
        # Apply scale and offset
        input_tensor = (input_tensor.to(torch.float32) - offset) * scale
        # Convert to float16
        output_tensor = input_tensor.to(torch.float16)
        return output_tensor


# Instantiate the model
model = ByteConverter()

# Define the input tensor
input_array = torch.tensor([1, 50, 100, 150, 200], dtype=torch.uint8)

# Define the scale and offset tensors
scale = torch.tensor([1])
offset = torch.tensor([0])

# Run the model with the input tensor, scale, and offset, and print the result
output_tensor = model(input_array, scale, offset)
print(output_tensor)

# Export the model to ONNX
input_array = torch.zeros((10,), dtype=torch.uint8)  # Example input array
scale = torch.zeros((1,), dtype=torch.float32)
offset = torch.zeros((1,), dtype=torch.float32)
dynamic_axes = {"input_array": {0: "size"}}  # Dynamic first axis for input_array
torch.onnx.export(
    model,
    (input_array, scale, offset),
    "model.onnx",
    opset_version=14,
    input_names=["input_array", "scale", "offset"],
    output_names=["output_tensor"],
    dynamic_axes=dynamic_axes,
)
```
When I run this in ONNX Runtime, feeding it the byte tensor `new byte[] { 3, 5, 100, 150, 200 }`, I expect a float16 array of `[1., 50., 100., -106., -56.]`. Instead I get `[3, 5, 100, 150, 200]`. The expected result is that an int8 of -100 gets cast to a float of -100.0.
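The wraparound arithmetic I expect from the uint8 → int8 cast can be sketched in plain Python (`as_int8` is just an illustrative helper, not part of the model):

```python
def as_int8(u):
    """Reinterpret a uint8 value (0..255) as a signed int8 (-128..127)."""
    return u - 256 if u >= 128 else u

# 150 -> -106 and 200 -> -56 after reinterpretation
print([float(as_int8(v)) for v in [1, 50, 100, 150, 200]])
# [1.0, 50.0, 100.0, -106.0, -56.0]
```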
To reproduce
Run the Python file above to build the ONNX model, then feed in a byte tensor, scale=1, and offset=0. The results are the same in both CPU and DML mode. The graph itself seems fine.
Urgency
No response
Platform: Windows
OS Version: Windows 10
ONNX Runtime Installation: Built from Source
ONNX Runtime Version or Commit ID: 1.15
ONNX Runtime API: C#
Architecture: X64
Execution Provider: DirectML
I am having to do a manual conversion using:

```python
input_tensor = input_array.to(torch.int16)
input_tensor = input_tensor - (input_tensor >= 128) * 256
```

instead of `.to(torch.int8)`.
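The widen-then-subtract workaround can be verified outside the model (numpy used here purely for illustration; the arithmetic is the same as the torch ops above):

```python
import numpy as np

# Widen to int16 first so 150 stays 150, then subtract 256 from values >= 128.
# This reproduces the uint8 -> int8 wraparound without a Cast(uint8 -> int8) node.
raw = np.array([1, 50, 100, 150, 200], dtype=np.uint8)
widened = raw.astype(np.int16)
signed = widened - (widened >= 128).astype(np.int16) * 256
print(signed.tolist())
# [1, 50, 100, -106, -56]
```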