InternalError: Missing 0-th output from node model/layer_1/Conv2D_eightbit_requantize (defined at <ipython-input-6-2bddd853d111>:2) #17
@peiwenhuang27 Any issue with the "InternalError: Missing 0-th output from node model/layer_1/Conv2D_eightbit_requantize" message occurs because core/common_runtime/mkl_layout_pass.cc does not rewrite the graph correctly. TF_ENABLE_MKL_NATIVE_FORMAT must always be set. The most likely cause on your side is that those variables don't take effect in the C++ code.
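Since the variables are read by the C++ runtime when the TensorFlow library initializes, they have to be set before the first `import tensorflow`. A minimal sketch of the ordering (the specific values shown are the ones discussed later in this thread):

```python
import os

# oneDNN/MKL-related environment variables are read once, at library
# initialization, so they must be set *before* TensorFlow is imported.
os.environ["TF_ENABLE_MKL_NATIVE_FORMAT"] = "0"  # needed for int8 models on intel-tensorflow 2.5.0
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"        # enable oneDNN optimizations

# Only now is it safe to import TensorFlow:
# import tensorflow as tf
```

Setting these after the import (or only inside a notebook cell that runs after TensorFlow is already loaded) has no effect on the already-initialized runtime.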
Since I'm not sure whether the issue is caused by Colab, I tried running it on my local machine, but the same error still occurs.
From the command-line log (This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA), it looks like the environment variables have been set successfully.
If possible, could you please share your pb file and evaluation script with us?
For certain reasons, I cannot upload my files directly here. I have emailed them to you; please check your inbox. Thank you so much, I truly appreciate it!
By the way, I found the following in the release notes for Intel-Tensorflow 2.5; it looks like this is the relevant change:
Need to set the TF_ENABLE_MKL_NATIVE_FORMAT=0 for int8 model execution with intel-tensorflow 2.5.0
I see! Thanks, it works now with Intel-Tensorflow. I ran into the problem mainly because I wanted the model to be able to run in official TensorFlow without Intel-Tensorflow (for simplicity in a later inference session that will run in ML.NET).
Starting with official TensorFlow 2.6, the Intel optimizations have been upstreamed into official TensorFlow. In the future, they will become the default path in the CPU version.
I have the same error with TensorFlow 2.7 and TF Serving 2.7.0-gpu.
I have fixed it.
The log is: tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
This fixed my issue.
Case 1
Framework: Tensorflow 2.5.0, Intel-Tensorflow 2.5.0
Environment: Google Colab
I have a successfully quantized model that is to be run for inference without using the LPOT API, so I wrote the following inference code:
When running the line predictions = sess.run(output, {input_tensor_name: x}), this error happens with or without Intel-Tensorflow==2.5.0 installed, nor is it resolved when os.environ['TF_ENABLE_ONEDNN_OPTS'] = '1' is set explicitly. On the other hand, when I run the same code in VS Code with Python 3.6.8 64-bit (base: Conda), it returns the same error message as in Case 2.
Case 2
Framework: Tensorflow 2.4.0, Intel-Tensorflow 2.4.0
Environment: Google Colab
This case works well and prints out the MSE loss of the predictions, but when I uninstall Intel-Tensorflow 2.4.0 and run it with official TensorFlow, the error occurs while running the same line as in Case 1 (predictions = sess.run(output, {input_tensor_name: x})). The error persists even with os.environ['TF_ENABLE_ONEDNN_OPTS'] = '1' set explicitly.
I believe both cases are caused by the same type of error, i.e. No OpKernel was registered to support Op ...
I was given to understand that with official TensorFlow v2.5 installed and the environment variable TF_ENABLE_ONEDNN_OPTS=1 set (reference), the quantized model is supposed to run with oneDNN support, but that doesn't seem to be the case in either v2.4 or v2.5.
Not sure if this is the right place to post this issue, but I have nowhere else to report the problem, as Intel-Tensorflow doesn't allow issue reporting and TensorFlow developers usually ignore issues that depend on other packages. Any hint is greatly appreciated, thank you.