Fine tuned Keras VGGNet16 shows no performance advantages. #638
Comments
Whenever measuring the performance of AI models, please note this:
So the rule of thumb is => CPU : Concurrency :: GPU : Batching. All these optimizations will obviously not make a model faster on CPU, because the utilisation will never exploit multiple cores.
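To illustrate the "CPU : Concurrency" side of the rule of thumb above, here is a minimal sketch of serving several requests concurrently with a thread pool. The `infer` function here is a hypothetical placeholder for a real `sess.run(...)` call; ONNX Runtime releases the GIL during inference, so threads like these can actually occupy multiple cores.

```python
from concurrent.futures import ThreadPoolExecutor

def infer(x):
    # Placeholder for something like sess.run(None, {"input": x}).
    # A real ONNX Runtime call releases the GIL while it computes,
    # so concurrent calls can run on different CPU cores.
    return x * 2

inputs = list(range(8))

# Dispatch the requests concurrently instead of one by one.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(infer, inputs))
```

This raises throughput (requests served per second) rather than single-request latency, which is why a lone timed inference on CPU shows no speed-up.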
Hey @Narasimha1997, I do not understand why ONNX does not make models faster. Hugging Face uses ONNX to run large pretrained networks on CPU. So, can't I replicate the same using keras-onnx? Or do I have to use ONNX models converted from PyTorch models?
When you use onnxruntime to evaluate performance (say, run 100 times), please skip the first few runs (for example, 10) when averaging. For the first run especially, onnxruntime needs to do some extra work, so it costs much more time than usual.
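A minimal sketch of the warm-up-skipping measurement described above. The `lambda` workload at the bottom is a stand-in: in a real benchmark you would pass something like `lambda: sess.run(None, feed)` for an existing onnxruntime `InferenceSession`; all other names here are illustrative.

```python
import statistics
import time

def benchmark(run, total_runs=100, warmup_runs=10):
    """Time run() total_runs times, discarding the first warmup_runs
    iterations, since the first ONNX Runtime calls include one-off
    setup work and are not representative of steady-state latency."""
    timings = []
    for i in range(total_runs):
        start = time.perf_counter()
        run()
        elapsed = time.perf_counter() - start
        if i >= warmup_runs:  # skip the warm-up iterations
            timings.append(elapsed)
    return statistics.mean(timings), statistics.stdev(timings)

# Dummy workload standing in for sess.run(...):
mean_s, stdev_s = benchmark(lambda: sum(range(10_000)),
                            total_runs=50, warmup_runs=5)
```

Reporting the mean and standard deviation of only the post-warm-up runs avoids the first-call overhead dominating the comparison.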
Hey @jiafatom, the results were smashing for a LeNet-type architecture (up to 177 times faster) using your method. But VGGNet shows NO improvement. Updated the notebook.
For this perf issue, I feel that the converter already does its job well, and this is an onnxruntime issue. You may need to reach out to the onnxruntime repo and post the question there.
This is a comparison of the inference time of the raw VGG16 Keras model and the same model on ONNX Runtime. Why don't I see any performance advantages?
There is only an extremely small improvement.
Replicate the results by running this notebook on a Colab CPU runtime.