
Issue Search Results · repo:microsoft/onnxruntime language:C++


7k results (82 ms)


Describe the issue I'm trying to use Numpy.NET and ONNX Runtime C# to run inference on a model. It runs on .NET with result shape (1, 3590, 768), but in Python the result shape is (1, 7180, 768). To reproduce: here is the code ...
api:CSharp
.NET
  • ElinLiu0
  • Opened 2 hours ago
  • #23883

I am trying to convert my model to ONNX Runtime, and the model itself has been int8-quantized. At runtime, the following error occurred: NOT_IMPLEMENTED : Could not find an implementation for Add(14) ...
quantization
  • jungyin
  • 2
  • Opened 15 hours ago
  • #23879

https://github.com/onnx/onnx/issues/6735
  • IamJiangBo
  • 1
  • Opened 16 hours ago
  • #23878

Describe the issue I have the following simple ONNX model with a single Abs node operating on a BF16 tensor. ir_version: 10 producer_name: producer_version: graph { node { input: X output: ...
  • yuanyao-nv
  • 1
  • Opened 20 hours ago
  • #23875
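For context, a complete ONNX text-format model of the kind described above (one Abs node on a bfloat16 tensor) might look like the following. This is an illustrative reconstruction, not the reporter's exact model; the graph name, shape, and opset are assumptions, and parsing it requires the onnx package.

```python
# Illustrative reconstruction (not the reporter's exact model): a minimal
# ONNX text-format graph with a single Abs node on a bfloat16 tensor.
# The graph name, tensor shape, and opset version are assumed for the sketch.
MODEL_TEXT = """
<ir_version: 10, opset_import: ["" : 21]>
agraph (bfloat16[1] X) => (bfloat16[1] Y) {
    Y = Abs(X)
}
"""

# Turning the text into a ModelProto would use the onnx package:
# import onnx
# model = onnx.parser.parse_model(MODEL_TEXT)
print(MODEL_TEXT.strip())
```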

Describe the feature request Are there plans to add multi-GPU support? Describe scenario use case Allow rendering jobs to run across multi-GPU systems.
feature request
  • makoshark2001
  • Opened 20 hours ago
  • #23874

Describe the issue When an application calls SessionOptionsAppendExecutionProvider_OpenVINO with only the device_type session option and no other options, the OpenVINO EP automatically tries to open a ...
ep:OpenVINO
  • ashrit-ms
  • Opened 23 hours ago
  • #23871

Describe the issue onnxruntime-qnn: 1.20.2. I am using Python to profile the models. I have sample code below: provider_option = { backend_path : QnnHtp.dll , htp_performance_mode : sustained_high_performance ...
ep:QNN
  • DavidLuong98
  • 3
  • Opened yesterday
  • #23869
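For context, QNN EP options like those quoted in the report are typically passed to an InferenceSession via `provider_options`. A minimal sketch, with option names taken from the snippet; "model.onnx" is a hypothetical path, and actually creating the session requires the onnxruntime-qnn package plus a device with the QnnHtp.dll backend.

```python
# Sketch of the provider options quoted in the issue (option names come
# from the snippet itself).
provider_option = {
    "backend_path": "QnnHtp.dll",
    "htp_performance_mode": "sustained_high_performance",
}

# Creating the session requires onnxruntime-qnn and a QNN-capable device;
# "model.onnx" is a hypothetical path:
# import onnxruntime as ort
# session = ort.InferenceSession(
#     "model.onnx",
#     providers=["QNNExecutionProvider"],
#     provider_options=[provider_option],
# )
print(provider_option)
```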

Describe the issue Preprocess raises an exception on models with a negative axis. To reproduce: import onnx model = onnx.parser.parse_model( ir_version: 8, opset_import: [ : 20 ] onnx_mock_model ...
quantization
  • Johansmm
  • Opened yesterday
  • #23868

Describe the issue On ARM SVE256, I ran inference on an SRGAN model with ONNX Runtime, but found that the inference process consumed a lot of memory. Specifically, a 1.4 MB ONNX model inferenced with fp16 consumes ...
performance
  • Serenagirl
  • 2
  • Opened yesterday
  • #23867

Describe the issue When building with OpenVINO, the build fails on an undeclared variable. I guess the fix is to change device_type to default_device in the code below: https://github.com/microsoft/onnxruntime/blob/99c51a326e0ff54a56e7b194204d459932084408/onnxruntime/core/providers/openvino/openvino_provider_factory.cc#L90-L100 ...
build
ep:OpenVINO
  • dwffls
  • Opened yesterday
  • #23866