Ability to interact with ONNX custom models
Hi,
I was looking to write a simple custom server where I could deploy my ONNX-optimized LLMs (optimized via Olive or OpenVINO). The docs state that I can interact with open-source models, but all I can see is API providers. Am I missing something? Would it make sense to add a developer setting that lets me drop an ONNX model directly into the settings configuration and use the OpenVINO runtime for it?