
[Feature Request] Add torch.Tensor support for InferenceSession input_feed #20481

Open
thiagocrepaldi opened this issue Apr 26, 2024 · 1 comment
Labels
feature request request for unsupported feature or enhancement

Comments

@thiagocrepaldi (Contributor)

Describe the feature request

Having ONNX Runtime accept torch.Tensor inputs (in addition to the current numpy arrays) would be useful for scenarios in which numpy does not support the data type used in the original torch model, such as torch.bfloat16.

Describe scenario use case

Today, we are forced to transform the ONNX graph to convert bfloat16 into float16 due to numpy's lack of support for bfloat16.

Supporting torch.Tensor directly would also keep ORT closer to PyTorch's original model, without numpy as a middleman.
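As a concrete illustration of the gap described above, stock numpy rejects bfloat16 outright, which is why the graph currently has to be rewritten to float16 (a minimal sketch; the torch cast shown in the comment is the workaround, not a proposed API):

```python
import numpy as np

# numpy exposes no bfloat16 dtype, so torch.Tensor.numpy() raises
# TypeError for bfloat16 tensors; requesting the dtype by name shows this.
try:
    np.dtype("bfloat16")
    numpy_has_bfloat16 = True
except TypeError:
    numpy_has_bfloat16 = False

# Today's workaround is to cast in torch before crossing into numpy, e.g.
#   input_feed = {"x": t.to(torch.float16).numpy()}
# which loses bfloat16's wider exponent range.
print(numpy_has_bfloat16)  # → False
```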

@thiagocrepaldi thiagocrepaldi added the feature request request for unsupported feature or enhancement label Apr 26, 2024
@wschin (Contributor) commented Apr 26, 2024

Workarounds for running ORT with PyTorch tensors: #20281 and https://github.com/pytorch/pytorch/blob/4f29103749c5011529f1abb10b1508a682588909/torch/onnx/_internal/onnxruntime.py#L414 (if onnxruntime-training is installed). It's a helpful thing to implement along the way, but right now there is no ETA.
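One way to feed torch tensors to ORT today without a numpy copy is IOBinding, which binds raw device pointers. A minimal sketch, assuming onnxruntime and torch are installed (`run_with_torch_tensor` is an illustrative helper, not ORT API; note that `element_type` is still expressed as a numpy dtype, so bfloat16 remains out of reach even here):

```python
def run_with_torch_tensor(session, input_name, tensor, output_name):
    """Feed a torch.Tensor to an onnxruntime.InferenceSession via
    IOBinding, skipping the numpy middleman (illustrative sketch)."""
    import numpy as np  # element_type below is described via numpy dtypes

    binding = session.io_binding()
    # bind_input takes a raw device pointer, so the tensor must be
    # contiguous and must stay alive until run_with_iobinding returns.
    tensor = tensor.contiguous()
    binding.bind_input(
        name=input_name,
        device_type=tensor.device.type,       # "cpu" or "cuda"
        device_id=tensor.device.index or 0,
        element_type=np.float16,              # no numpy bfloat16 to name here
        shape=tuple(tensor.shape),
        buffer_ptr=tensor.data_ptr(),
    )
    binding.bind_output(output_name)          # output allocated on CPU by default
    session.run_with_iobinding(binding)
    return binding.copy_outputs_to_cpu()
```

The imports are deferred inside the helper so the sketch can be read (and defined) without onnxruntime present; in real use they would sit at module level.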
