This repository has been archived by the owner on May 31, 2024. It is now read-only.

Benchmark other onnx runtimes #129

Open
diegobernardes opened this issue Sep 17, 2019 · 2 comments

Comments

@diegobernardes
Collaborator

Is your feature request related to a problem? Please describe.
It would be nice to know the runtime performance of onnx-go compared with the other available runtimes. Here is a list of some runtimes: https://onnx.ai/supported-tools

Describe the solution you'd like
It doesn't need to be anything fancy; maybe a wiki page with some models compared across the other implementations.

For some use cases, mainly edge, performance is really a key factor; this could be a real advantage for onnx-go because of Go's speed and concurrency. It would also highlight the parts where we need to dedicate more effort to make things faster.

@owulveryck
Owner

I fully agree.
The key point is to compare against Microsoft's onnxruntime.
The goal of onnxruntime is similar: to run pre-trained neural networks without pain.

Other comparisons are also interesting. For info, there is a trivial bench in examples/model_zoo_executor that runs whatever ONNX model is referenced by the $MODELDIR environment variable.
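A minimal sketch of the $MODELDIR convention mentioned above, assuming a load-and-run step; loadAndRun here is a hypothetical placeholder (it only checks that the path exists) for the executor's actual decode-and-infer path:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// loadAndRun is a hypothetical stand-in for loading and executing
// the model; here it only verifies the path exists so the example
// is self-contained.
func loadAndRun(path string) error {
	_, err := os.Stat(path)
	return err
}

func main() {
	// The bench in examples/model_zoo_executor picks up the model
	// location from the MODELDIR environment variable.
	dir := os.Getenv("MODELDIR")
	if dir == "" {
		fmt.Println("MODELDIR is not set")
		return
	}
	start := time.Now()
	if err := loadAndRun(dir); err != nil {
		fmt.Println("run failed:", err)
		return
	}
	fmt.Printf("one run took %v\n", time.Since(start))
}
```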

@owulveryck
Owner

owulveryck commented Sep 18, 2019

BTW, I had a discussion with @xadupre a while ago.
He mentioned one interesting project: https://github.com/sdpython/scikit-learn_benchmarks, which uses asv (I love the name of the project...)

I have not had a closer look at the project yet, but if the effort is low, it could be a good idea to use asv for the benchmark.

What do you think?
