Issue on how to get started #85
Hello. Please ensure the framework version you are using is one that INC supports. From the error log, "Please install Intel® Optimizations for TensorFlow or MKL enabled TensorFlow from source code within version >=1.14.0 and <=2.8.0", you did not install an official TensorFlow build with oneDNN enabled or Intel TensorFlow. As for the second issue, could you please paste the whole log after you installed intel-tensorflow 2.8.0?
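(A minimal sanity-check sketch, not part of INC; it only restates the version constraint from the error message and assumes the `packaging` module is available.)

```python
# Sketch: confirm the installed TensorFlow falls inside the range named in the
# INC error message (>=1.14.0 and <=2.8.0) before attempting quantization.
import tensorflow as tf
from packaging import version

tf_ver = version.parse(tf.__version__)
lo, hi = version.parse("1.14.0"), version.parse("2.8.0")
if not (lo <= tf_ver <= hi):
    raise RuntimeError(
        f"TensorFlow {tf.__version__} is outside the supported range; "
        "try: pip install intel-tensorflow==2.8.0"
    )
print(f"TensorFlow {tf.__version__} is within the supported range")
```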
Guys, I am using your instructions from the main page https://github.com/intel/neural-compressor and they do not work as written. The first issue disappeared with pip install intel-tensorflow==2.8.0. For the second issue, the full log is (by the way, you have the notebook here, so you can reproduce it):

2022-05-20 14:31:52 [INFO] Generating grammar tables from /usr/lib/python3.7/lib2to3/Grammar.txt
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
Your instructions at https://github.com/intel/neural-compressor are not reproducible as they are.
If I add 2 more cells:

An ONNX Example
!pip install onnx==1.9.0 onnxruntime==1.10.0 onnxruntime-extensions

Prepare fp32 model
!wget https://github.com/onnx/models/blob/main/vision/classification/resnet/model/resnet50-v1-12.onnx
from neural_compressor.experimental import Quantization, common

I got:

2022-05-20 15:01:37 [WARNING] Force convert framework model to neural_compressor model.
AssertionError Traceback (most recent call last)
2 frames
AssertionError: Framework is not detected correctly from model format.
default_netcompressor.zip
|
As for your ONNX model error, it fails when loading this ONNX model with ONNX Runtime:

onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /home/ftian/resnet50-v1-12.onnx failed:Protobuf parsing failed.

It should be a compatibility issue between ONNX Runtime and this ONNX model; it is not an INC issue.
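(The InvalidProtobuf error usually means the file on disk is not a real ONNX protobuf; a sketch, with the file name assumed from the thread, to verify this before blaming INC or ONNX Runtime.)

```python
# Sketch: verify the downloaded file is a parseable ONNX model rather than,
# for example, an HTML page saved from a GitHub "blob" URL.
import onnx

path = "resnet50-v1-12.onnx"
with open(path, "rb") as f:
    head = f.read(64)
if head.lstrip().startswith(b"<"):
    raise RuntimeError("File looks like an HTML page, not a model; "
                       "re-download it from the raw URL.")

model = onnx.load(path)            # raises if protobuf parsing fails
onnx.checker.check_model(model)    # structural validation
print("Model parses and passes the ONNX checker")
```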
"It should be a compatibility issue between ONNX Runtime and this ONNX model; it is not an INC issue." — then why do you point to these instructions on your main page if they are not accurately reproducible?
I am totally confused, could you please let me know which instruction/example is not reproducible?
You are confused because you did not look at the notebook I shared twice, which contains the instructions from https://github.com/intel/neural-compressor
what I need is
|
Danilo, I saw your notebook and mentioned the root cause in the thread. This is a known issue and was raised before in #35. If you are saying the instructions of some examples are wrong or cannot be reproduced, please let me know the exact place. The main page of INC is only meant to tell users how to install INC; that is why I am confused by "the main page is not reproducible". For detailed examples, please refer to the examples/ directory; those are all reproducible in a Linux bash environment. As for Jupyter notebooks, only the TensorFlow model has such an issue. I would suggest following the instructions in the examples/ directory for model downloading and quantization; you can run the same instructions in a Jupyter notebook without issue.
@danilopau Hi, please try wget https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v1-12.onnx to download the ONNX model file. Sorry for the link error; we will update it soon.
@mengniwang95 thanks
|
Hi @danilopau, for ONNX models we support 3 framework backends: onnxrt_qlinearops, onnxrt_qintegerops and onnxrt_qdqops; they use different int8 ops.

```python
from neural_compressor import conf
from neural_compressor.experimental import Quantization  # same import used earlier in this thread

conf.model.framework = 'onnxrt_qlinearops'
quantizer = Quantization(conf)
```
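(Putting the pieces from this thread together, a sketch of the full experimental-API flow; the dummy calibration dataset, batch size, and file names are illustrative assumptions, not code from the issue.)

```python
# Sketch: quantize the fp32 ResNet-50 ONNX model with INC's experimental API.
import numpy as np
from neural_compressor import conf
from neural_compressor.experimental import Quantization, common

conf.model.framework = 'onnxrt_qlinearops'

class DummyDataset:
    """Random calibration data; each sample is (3, 224, 224) float32,
    so with batch size 1 the model sees (1, 3, 224, 224) inputs."""
    def __init__(self, n=10):
        self.samples = [np.random.rand(3, 224, 224).astype(np.float32)
                        for _ in range(n)]
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, idx):
        return self.samples[idx], 0   # (input, dummy label)

quantizer = Quantization(conf)
quantizer.model = common.Model('resnet50-v1-12.onnx')
quantizer.calib_dataloader = common.DataLoader(DummyDataset(), batch_size=1)
q_model = quantizer.fit()
q_model.save('resnet50-v1-12-int8.onnx')
```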
@mengniwang95 python --version >> Python 3.7.13. Unfortunately I got another error.

env: TF_ENABLE_ONEDNN_OPTS=1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
I am a little confused, your log shows you are quantizing a TF model, not an ONNX model. You can also set:

```python
conf.tuning.exit_policy.performance_only = True
```
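(A short sketch of what that flag does, using only the programmatic conf fields that appear in this thread; the explanatory comments are my reading, not maintainer wording.)

```python
# Sketch: with performance_only=True the tuner returns the first quantized
# model it produces and skips the accuracy-driven tuning loop, which avoids the
# "Not found any quantized model which meet accuracy goal" exit when no real
# evaluation dataloader/metric is configured.
from neural_compressor import conf

conf.model.framework = 'onnxrt_qlinearops'
conf.tuning.exit_policy.performance_only = True
```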
@mengniwang95

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
And following your advice I ran from neural_compressor import conf and got:

2022-05-21 10:24:21 [INFO] NumExpr defaulting to 2 threads.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
With that sequence under Python 3.7.13 I was able to generate the optimized, renamed and augmented ONNX model:

pip install neural-compressor

An ONNX Example
!pip install onnx==1.9.0 onnxruntime==1.10.0 onnxruntime-extensions

Prepare fp32 model
!wget https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v1-12.onnx

conf.model.framework = 'onnxrt_qlinearops'
@danilopau Hi, this error is caused by your input shape; according to the model it should be (1, 3, 224, 224).
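(A sketch, assuming onnxruntime is installed as in the earlier pip cell, for reading the expected input shape from the model and running one correctly shaped sample.)

```python
# Sketch: read the input shape from the model and feed a (1, 3, 224, 224) sample.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("resnet50-v1-12.onnx")
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)                 # e.g. ['N', 3, 224, 224]

sample = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = sess.run(None, {inp.name: sample})
print(outputs[0].shape)                    # class scores, typically (1, 1000)
```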
@mengniwang95

An ONNX Example
!pip install onnx==1.9.0 onnxruntime==1.10.0 onnxruntime-extensions

Prepare fp32 model
!wget https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v1-12.onnx

conf.model.framework = 'onnxrt_qlinearops'
The conf.yaml is:

model:
evaluation:
tuning:
Did it raise any error info?

```python
model = quantizer.fit()
model.save(output_path)
```
@mengniwang95 thanks again for your great support.

from neural_compressor import conf
conf.model.framework = 'onnxrt_qlinearops'

---- log is ----

2022-05-22 14:10:40 [INFO] NumExpr defaulting to 2 threads.
@danilopau Hi, sorry for my late reply. I tried in my local environment with your code + yaml, and the saved model has int8 nodes like below:
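(A sketch one could use to inspect those int8 nodes locally; the output file name is an assumption.)

```python
# Sketch: count op types in the quantized model; a successful int8 conversion
# typically shows QLinearConv, QuantizeLinear and DequantizeLinear nodes.
from collections import Counter
import onnx

m = onnx.load("resnet50-v1-12-int8.onnx")
print(Counter(node.op_type for node in m.graph.node))
```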
@mengniwang95 |
Actually the process is the same as what you set up in Colab. Just install the needed Python packages.
Hello, on the main page there are instructions to apply. I put those in a notebook, however they don't work.
With pip install tensorflow, as you wrote, I got:
ValueError: Please install Intel® Optimizations for TensorFlow or MKL enabled TensorFlow from source code within version >=1.14.0 and <=2.8.0.
2022-05-20 14:25:56 [ERROR] Specified timeout or max trials is reached! Not found any quantized model which meet accuracy goal. Exit.
Here https://www.intel.com/content/www/us/en/developer/articles/guide/optimization-for-tensorflow-installation-guide.html
it says to run
pip install intel-tensorflow==2.8.0
but then another error appears:
2022-05-20 14:31:58 [ERROR] Specified timeout or max trials is reached! Not found any quantized model which meet accuracy goal. Exit.
Could you please be precise so that the results are reproducible?

default_netcompressor.zip
Thanks