Issues: microsoft/onnxruntime
How to release GPU memory after session.run
ep:CUDA (issues related to the CUDA execution provider)
#20517 opened Apr 30, 2024 by ZTurboX

ONNX model throws an exception in 1.17.3 but works in 1.16.x
platform:windows (issues related to the Windows platform)
#20514 opened Apr 29, 2024 by nmetulev

CMake install and release zip have inconsistent folder structures
platform:windows
#20510 opened Apr 29, 2024 by yuslepukhin

NVIDIA Jetson aarch64 official PyPI binaries for onnxruntime-gpu
platform:jetson (issues related to the NVIDIA Jetson platform)
#20503 opened Apr 29, 2024 by lakshanthad

[Performance] Session creation time is excessively slow on DirectML devices and Universal Windows projects
ep:DML (issues related to the DirectML execution provider), platform:windows
#20502 opened Apr 29, 2024 by MatheusAD

Using multiple ORT sessions in one process does not improve throughput
ep:CUDA
#20494 opened Apr 28, 2024 by ccccjunkang

[Feature Request] Add torch.Tensor support for InferenceSession input_feed
feature request (request for unsupported feature or enhancement)
#20481 opened Apr 26, 2024 by thiagocrepaldi

Quantized model shows different results between x86_64 and aarch64 (CPU)
quantization (issues related to quantization)
#20479 opened Apr 26, 2024 by oewi

Output differs between the ONNX model and the original model
ep:CUDA
#20478 opened Apr 26, 2024 by MichaelH717

[Build] CMake duplicate target "memory" between abseil and xnnpack
build (build issues; typically submitted using template), release:1.17.3
#20469 opened Apr 25, 2024 by sirdeniel

[Build] Shared lib testing for all built EPs
build, ep:DML, ep:TensorRT (issues related to TensorRT execution provider)
#20468 opened Apr 25, 2024 by gedoensmax

onnxruntime + OpenVINO needs double the memory compared with OpenVINO alone
ep:OpenVINO (issues related to OpenVINO execution provider), platform:windows
#20467 opened Apr 25, 2024 by busishengui

[Documentation Request] The web "Get started" document and the "js" folder contradict each other
api:Javascript (issues related to the JavaScript API), documentation (improvements or additions to documentation; typically submitted using template), platform:web (issues related to ONNX Runtime web; typically submitted using template)
#20465 opened Apr 25, 2024 by tibortakacs

RUNTIME_EXCEPTION, 80070057 The parameter is incorrect in v1.17.3
ep:DML, platform:windows
#20464 opened Apr 25, 2024 by Rikyf3

[Web] Uncaught (in promise) Error: no available backend found. ERR: [wasm] TypeError: Cannot read properties of undefined (reading 'buffer'), [cpu] Error: previous call to 'initializeWebAssembly()' failed., [xnnpack] Error: previous call to 'initializeWebAssembly()' failed.
platform:web
#20463 opened Apr 25, 2024 by dongxingwangna

[Build] [ONNXRuntimeError] : 1 : FAIL : Load model from model.pre_post_process.onnx failed: Node (post_process_4) Op (Split) [ShapeInferenceError] Mismatch between the sum of 'split' (84) and the split dimension of the input (6)
build
#20462 opened Apr 25, 2024 by dadanugm

[Build] Cross-compiling onnxruntime for arm32: onnxruntime_ENABLE_CPUINFO not working
build
#20461 opened Apr 25, 2024 by lsjws2008

Dockerfile does not work
build, ep:TensorRT
#20458 opened Apr 25, 2024 by PredyDaddy

[Performance] VRAM usage difference between TRT-EP and native TRT
ep:CUDA, ep:TensorRT, performance (issues related to performance regressions)
#20457 opened Apr 25, 2024 by omerferhatt

Phi-3 cannot handle Japanese. How can I solve this issue?
platform:windows
#20448 opened Apr 24, 2024 by Hideki105

[Web] invalid data location: undefined
platform:web
#20431 opened Apr 23, 2024 by iishiishii

NaN outputs in ONNX Runtime when weights are initialized with large constants
release:1.17.3
#20429 opened Apr 23, 2024 by daniyalaliev

oneDNN execution provider: GetMemoryAndReshape fails with "not a valid reshape, inconsistent dim product"
ep:oneDNN (questions/issues related to DNNL EP)
#20426 opened Apr 23, 2024 by varunkatiyar819

[bug] 'XLMRobertaTokenizerFast' object has no attribute 'max_model_input_sizes'
#20419 opened Apr 23, 2024 by skyline75489

onnxruntime 1.17.3 is missing from the CUDA 12 artifacts feed
ep:CUDA, release:1.17.3
#20409 opened Apr 22, 2024 by NarutoUA