Conversion problem with a trained model based on "Faster R-CNN Inception ResNet V2 1024x1024" #7991

Closed
dorovl opened this issue Oct 13, 2021 · 10 comments

@dorovl

dorovl commented Oct 13, 2021

System information (version)
Detailed description

I trained a model based on "Faster R-CNN Inception ResNet V2 1024x1024". The model was exported OK and works fine with the TensorFlow Object Detection API (detect_fn), but trying to convert it gives the errors below. Please note that models like "SSD MobileNet V2", "SSD MobileNet V2 FPNLite", "SSD ResNet50 V1 FPN" and "EfficientDet D4" converted fine.
You can also find my trained model here if you want to reproduce the issue: https://1drv.ms/u/s!AgF38cSpviIRga4-m4j-9BkYw9R_oA?e=0VEAB9

Steps to reproduce
Method 1 (without the --input_shape '(1333, 1333)')
%cd /content/TensorFlow/workspace/training/exported-models/my_{MODEL}
!mkdir -p saved_model_10k
!source /opt/intel/openvino_2021/bin/setupvars.sh && \
    python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --saved_model_dir saved_model \
    --transformations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v2.4.json \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --input_checkpoint /content/TensorFlow/workspace/training/exported-models/my_{MODEL}/checkpoint \
    --reverse_input_channels \
    --output_dir saved_model_10k \
    --data_type FP16
    
/content/TensorFlow/workspace/training/exported-models/my_faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8
[setupvars.sh] OpenVINO environment initialized
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	None
	- Path for generated IR: 	/content/TensorFlow/workspace/training/exported-models/my_faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8/saved_model_10k
	- IR output name: 	saved_model
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	None
	- Reverse input channels: 	True
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	/content/TensorFlow/workspace/training/exported-models/my_faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8/pipeline.config
	- Use the config file: 	None
	- Inference Engine found in: 	/opt/intel/openvino_2021/python/python3.7/openvino
Inference Engine version: 	2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 	2021.4.1-3926-14e67d86634-releases/2021/4
2021-10-13 19:38:02.919592: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
2021-10-13 19:38:05.604898: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-13 19:38:05.605789: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-10-13 19:38:05.611839: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:05.612338: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2021-10-13 19:38:05.612378: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-10-13 19:38:05.615310: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-10-13 19:38:05.615382: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-10-13 19:38:05.616553: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-10-13 19:38:05.616953: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-10-13 19:38:05.618979: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-10-13 19:38:05.619771: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-10-13 19:38:05.620003: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-10-13 19:38:05.620123: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:05.620603: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:05.620998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-10-13 19:38:05.621312: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-10-13 19:38:05.621524: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-13 19:38:05.621648: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:05.622091: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2021-10-13 19:38:05.622129: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-10-13 19:38:05.622163: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-10-13 19:38:05.622188: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-10-13 19:38:05.622218: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-10-13 19:38:05.622240: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-10-13 19:38:05.622262: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-10-13 19:38:05.622284: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-10-13 19:38:05.622307: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-10-13 19:38:05.622377: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:05.622825: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:05.623232: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-10-13 19:38:05.623282: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-10-13 19:38:06.280003: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-10-13 19:38:06.280069: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      0 
2021-10-13 19:38:06.280079: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0:   N 
2021-10-13 19:38:06.280266: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:06.280726: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:06.281150: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:06.281525: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2021-10-13 19:38:06.281567: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8332 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)
2021-10-13 19:38:28.327706: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:28.328193: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2021-10-13 19:38:28.328351: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-10-13 19:38:28.328613: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-13 19:38:28.328759: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:28.329204: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2021-10-13 19:38:28.329258: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-10-13 19:38:28.329298: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-10-13 19:38:28.329318: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-10-13 19:38:28.329338: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-10-13 19:38:28.329357: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-10-13 19:38:28.329377: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-10-13 19:38:28.329396: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-10-13 19:38:28.329414: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-10-13 19:38:28.329481: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:28.329909: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:28.330300: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-10-13 19:38:28.330342: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-10-13 19:38:28.330356: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      0 
2021-10-13 19:38:28.330369: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0:   N 
2021-10-13 19:38:28.330459: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:28.330876: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:38:28.331269: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8332 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)
2021-10-13 19:38:28.331564: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2199995000 Hz
2021-10-13 19:38:28.559197: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
  function_optimizer: Graph size after: 5886 nodes (4762), 8130 edges (6999), time = 102.673ms.
  function_optimizer: Graph size after: 5886 nodes (0), 8130 edges (0), time = 47.716ms.
Optimization results for grappler item: __inference_map_while_cond_21179_56692
  function_optimizer: function_optimizer did nothing. time = 0.002ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_Preprocessor_ResizeToRange_cond_false_21221_48903
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_Preprocessor_ResizeToRange_cond_true_21220_55035
  function_optimizer: function_optimizer did nothing. time = 0.002ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_body_21180_55148
  function_optimizer: Graph size after: 115 nodes (0), 124 edges (0), time = 0.989ms.
  function_optimizer: Graph size after: 115 nodes (0), 124 edges (0), time = 0.988ms.

[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (1333, 1333).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPIProposalReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIProposalReplacement'>): The matched sub-graph contains network input node "input_tensor". 
 For more information please refer to Model Optimizer FAQ, question #75. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=75#question-75)
Method 2 (with --input_shape '(1333, 1333)'), which gives a different error
%cd /content/TensorFlow/workspace/training/exported-models/my_{MODEL}
!mkdir -p saved_model_10k
!source /opt/intel/openvino_2021/bin/setupvars.sh && \
    python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --saved_model_dir saved_model \
    --transformations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v2.4.json \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --input_checkpoint /content/TensorFlow/workspace/training/exported-models/my_{MODEL}/checkpoint \
    --reverse_input_channels \
    --output_dir saved_model_10k \
    --data_type FP16 \
    --input_shape '(1333, 1333)'
    
/content/TensorFlow/workspace/training/exported-models/my_faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8
[setupvars.sh] OpenVINO environment initialized
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	None
	- Path for generated IR: 	/content/TensorFlow/workspace/training/exported-models/my_faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8/saved_model_10k
	- IR output name: 	saved_model
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	(1333, 1333)
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	None
	- Reverse input channels: 	True
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	/content/TensorFlow/workspace/training/exported-models/my_faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8/pipeline.config
	- Use the config file: 	None
	- Inference Engine found in: 	/opt/intel/openvino_2021/python/python3.7/openvino
Inference Engine version: 	2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 	2021.4.1-3926-14e67d86634-releases/2021/4
2021-10-13 19:35:38.049142: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
2021-10-13 19:35:40.752074: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-13 19:35:40.752920: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-10-13 19:35:40.758897: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:35:40.759380: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2021-10-13 19:35:40.759416: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-10-13 19:35:40.762423: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-10-13 19:35:40.762510: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-10-13 19:35:40.763603: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-10-13 19:35:40.763962: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-10-13 19:35:40.765973: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-10-13 19:35:40.766773: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-10-13 19:35:40.767033: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-10-13 19:35:40.767148: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:35:40.767607: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:35:40.768020: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-10-13 19:35:40.768292: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-10-13 19:35:40.768469: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-13 19:35:40.768579: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:35:40.769004: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2021-10-13 19:35:40.769050: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-10-13 19:35:40.769090: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-10-13 19:35:40.769115: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-10-13 19:35:40.769139: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-10-13 19:35:40.769165: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-10-13 19:35:40.769187: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-10-13 19:35:40.769216: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-10-13 19:35:40.769238: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-10-13 19:35:40.769307: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:35:40.769753: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:35:40.770166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-10-13 19:35:40.770220: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-10-13 19:35:41.442544: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-10-13 19:35:41.442596: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      0 
2021-10-13 19:35:41.442605: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0:   N 
2021-10-13 19:35:41.442817: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:35:41.443400: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:35:41.443868: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:35:41.444289: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2021-10-13 19:35:41.444337: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8332 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)
2021-10-13 19:36:03.590144: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:36:03.590629: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2021-10-13 19:36:03.590778: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-10-13 19:36:03.591065: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-10-13 19:36:03.591229: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:36:03.591676: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2021-10-13 19:36:03.591730: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2021-10-13 19:36:03.591772: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2021-10-13 19:36:03.591798: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2021-10-13 19:36:03.591825: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-10-13 19:36:03.591851: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-10-13 19:36:03.591875: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-10-13 19:36:03.591900: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2021-10-13 19:36:03.591926: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-10-13 19:36:03.592009: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:36:03.592480: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:36:03.592866: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0
2021-10-13 19:36:03.592913: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-10-13 19:36:03.592929: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      0 
2021-10-13 19:36:03.592946: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0:   N 
2021-10-13 19:36:03.593071: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:36:03.593511: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-10-13 19:36:03.593914: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8332 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)
2021-10-13 19:36:03.594228: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2199995000 Hz
2021-10-13 19:36:03.829736: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
  function_optimizer: Graph size after: 5886 nodes (4762), 8130 edges (6999), time = 109.646ms.
  function_optimizer: Graph size after: 5886 nodes (0), 8130 edges (0), time = 48.319ms.
Optimization results for grappler item: __inference_map_while_cond_21179_56692
  function_optimizer: function_optimizer did nothing. time = 0.002ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_Preprocessor_ResizeToRange_cond_false_21221_48903
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_Preprocessor_ResizeToRange_cond_true_21220_55035
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_body_21180_55148
  function_optimizer: Graph size after: 115 nodes (0), 124 edges (0), time = 1.037ms.
  function_optimizer: Graph size after: 115 nodes (0), 124 edges (0), time = 1.016ms.

[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPIPreprocessor2Replacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIPreprocessor2Replacement'>)": index 2 is out of bounds for axis 0 with size 2
[ ERROR ]  Traceback (most recent call last):
  File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 276, in apply_transform
    replacer.find_and_replace_pattern(graph)
  File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/front/tf/replacement.py", line 36, in find_and_replace_pattern
    self.transform_graph(graph, desc._replacement_desc['custom_attributes'])
  File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/ObjectDetectionAPI.py", line 702, in transform_graph
    update_parameter_shape(graph, None)
  File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/ObjectDetectionAPI.py", line 494, in update_parameter_shape
    height, width = calculate_placeholder_spatial_shape(graph, match, pipeline_config)
  File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/ObjectDetectionAPI.py", line 409, in calculate_placeholder_spatial_shape
    user_defined_width = user_defined_shape[2]
IndexError: index 2 is out of bounds for axis 0 with size 2

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo/main.py", line 394, in main
    ret_code = driver(argv)
  File "/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo/main.py", line 356, in driver
    ret_res = emit_ir(prepare_ir(argv), argv)
  File "/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo/main.py", line 252, in prepare_ir
    graph = unified_pipeline(argv)
  File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/pipeline/unified.py", line 17, in unified_pipeline
    class_registration.ClassType.BACK_REPLACER
  File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 328, in apply_replacements
    apply_replacements_list(graph, replacers_order)
  File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 318, in apply_replacements_list
    num_transforms=len(replacers_order))
  File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/utils/logger.py", line 111, in wrapper
    function(*args, **kwargs)
  File "/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 306, in apply_transform
    )) from err
Exception: Exception occurred during running replacer "ObjectDetectionAPIPreprocessor2Replacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIPreprocessor2Replacement'>)": index 2 is out of bounds for axis 0 with size 2

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------
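Side note on the IndexError: --input_shape '(1333, 1333)' has only two values, while the Model Optimizer code in the traceback reads the width from index 2, i.e. it expects a full NHWC shape. Passing a 4-D shape such as the one below avoids that particular crash (this is only a guess at the intended layout, and it does not address the ObjectDetectionAPIProposalReplacement failure from Method 1):

    --input_shape '[1,1333,1333,3]'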
@dorovl dorovl added bug Something isn't working support_request labels Oct 13, 2021
@Iffa-Intel Iffa-Intel removed the bug Something isn't working label Oct 15, 2021
@Iffa-Intel

I'm getting the same behaviour. We'll look further into this.


@Iffa-Intel Iffa-Intel added category: MO Model Optimizer PSE labels Oct 16, 2021
@jgespino jgespino self-assigned this Oct 19, 2021
@jgespino
Contributor

Hi @dorovl

Could you try using the attached configuration file (remove the .txt extension) with your model and the following command? I was able to convert it to IR but didn't test the result, as I am not sure what the model was trained on. Please give it a try and let me know if it works for you.

python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo_tf.py \
--saved_model_dir saved_model \
--transformations_config faster_rcnn_support_api_updated.json \
--tensorflow_object_detection_api_pipeline_config pipeline.config \
--reverse_input_channels

Regards,
Jesus

faster_rcnn_support_api_updated.json.txt

@dorovl
Author

dorovl commented Oct 22, 2021

Hi Jesus @jgespino,

Using the updated transformations config you provided, it works perfectly! Thank you!

%cd /content/TensorFlow/workspace/training/exported-models/my_{MODEL}
!mkdir -p saved_model_10k
!source /opt/intel/openvino_2021/bin/setupvars.sh && \
    python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --saved_model_dir saved_model \
    --transformations_config /content/faster_rcnn_support_api_updated.json \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --input_checkpoint /content/TensorFlow/workspace/training/exported-models/my_{MODEL}/checkpoint \
    --reverse_input_channels \
    --output_dir saved_model_10k \
    --data_type FP16
    
/content/TensorFlow/workspace/training/exported-models/my_faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8
[setupvars.sh] OpenVINO environment initialized
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	None
	- Path for generated IR: 	/content/TensorFlow/workspace/training/exported-models/my_faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8/saved_model_10k
	- IR output name: 	saved_model
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	None
	- Reverse input channels: 	True
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	/content/TensorFlow/workspace/training/exported-models/my_faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8/pipeline.config
	- Use the config file: 	None
	- Inference Engine found in: 	/opt/intel/openvino_2021/python/python3.7/openvino
Inference Engine version: 	2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 	2021.4.1-3926-14e67d86634-releases/2021/4
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
[ WARNING ]  
Detected not satisfied dependencies:
	tensorflow: installed: 2.6.0, required: ~= 2.4.1

Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf2.sh
Note that install_prerequisites scripts may install additional components.
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (1333, 1333).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes have been replaced with a single layer of type "DetectionOutput". Refer to the operation set specification documentation for more information about the operation.
[ WARNING ]  Network has 2 inputs overall, but only 1 of them are suitable for input channels reversing.
Suitable for input channel reversing inputs are 4-dimensional with 3 channels
All inputs: {'input_tensor': [1, 3, 1333, 1333], 'image_info': [1, 3]}
Suitable inputs {'input_tensor': [1, 3, 1333, 1333]}
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /content/TensorFlow/workspace/training/exported-models/my_faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8/saved_model_10k/saved_model.xml
[ SUCCESS ] BIN file: /content/TensorFlow/workspace/training/exported-models/my_faster_rcnn_inception_resnet_v2_1024x1024_coco17_tpu-8/saved_model_10k/saved_model.bin
[ SUCCESS ] Total execution time: 131.42 seconds. 
[ SUCCESS ] Memory consumed: 5154 MB. 
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2021_bu_IOTG_OpenVINO-2021-4-LTS&content=upg_all&medium=organic or on the GitHub*
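As a quick sanity check of the generated IR, something like the Inference Engine snippet below can be used. This is only a sketch: the input and output layout comes from the conversion log above, while the preprocessing and the image_info values are assumptions rather than anything taken from the original training pipeline.

import cv2
import numpy as np
from openvino.inference_engine import IECore

# Load the IR produced by the Model Optimizer run above.
ie = IECore()
net = ie.read_network(model="saved_model_10k/saved_model.xml",
                      weights="saved_model_10k/saved_model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# The IR expects 'input_tensor' [1, 3, 1333, 1333] and 'image_info' [1, 3] (see the log above).
image = cv2.imread("test.jpg")  # BGR is fine here because --reverse_input_channels was used
resized = cv2.resize(image, (1333, 1333))
input_tensor = resized.transpose(2, 0, 1)[np.newaxis, ...].astype(np.float32)
image_info = np.array([[1333, 1333, 1]], dtype=np.float32)  # assumed [height, width, scale]

results = exec_net.infer(inputs={"input_tensor": input_tensor, "image_info": image_info})
# The single output should be a DetectionOutput-style blob; print the shapes to confirm the model runs.
print({name: out.shape for name, out in results.items()})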

Regards,
Vlad

@dorovl dorovl closed this as completed Oct 22, 2021
@dorovl
Author

dorovl commented Oct 22, 2021

It might be useful to ship these configuration parameters as part of the standard config files.

@dacquaviva

I am having the same issue (same error: Exception occurred during running replacer "ObjectDetectionAPIProposalReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIProposalReplacement'>): The matched sub-graph contains network input node "input_tensor".), but with ssd_mobilenet_v2_keras rather than Faster R-CNN Inception ResNet V2 1024x1024. I looked into the faster_rcnn_support_api_updated.json uploaded by @jgespino and compared it with the original faster_rcnn_support_api_v2.4.json, and the only difference I noticed was the missing line "coordinates_swap_method": "swap_weights". I therefore tried the same approach with ssd_support_api_v2.4.json (adding the missing line), and as a result I was able to convert the model to OpenVINO. My question now is: why is this line not part of the original ssd_support_api_v2.4.json, and is this the right approach to solve the issue?
Thanks to whoever can clarify it.

@jgespino
Contributor

jgespino commented Nov 2, 2021

Hi @dacquaviva

I will check with the development team on this. Which OpenVINO version does the faster_rcnn_support_api_v2.4.json you are comparing come from?

I compared against the config file included in the latest OpenVINO 2021.4.1 release, and both files include "coordinates_swap_method": "swap_weights".

The difference I see is in the custom attributes and start points (a rough sketch of the edit follows after this list): adding the following
"operation_to_add": "Proposal",
"StatefulPartitionedCall/Cast",

and removing
"StatefulPartitionedCall/Cast_2",

Regards,
Jesus

@jgespino jgespino reopened this Nov 2, 2021
@dacquaviva

dacquaviva commented Nov 2, 2021

Hi @jgespino

Thanks for answering.

I was trying to convert the TensorFlow Object Detection API model ssd_mobilenet_v2_320x320_coco17_tpu-8, but I am not able to.

I am using the openvino/ubuntu20_dev Docker image and I get the following error:

/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
    --saved_model_dir ./saved_model \
    --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v2.4.json \
    --tensorflow_object_detection_api_pipeline_config ./pipeline.config \
    --input_shape [1,320,320,3]

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: /src/.
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,320,320,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: /src/./pipeline.config
- Use the config file: None
- Inference Engine found in: /opt/intel/openvino/python/python3.8/openvino
Inference Engine version: 2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 2021.4.1-3926-14e67d86634-releases/2021/4
2021-11-02 12:17:36.535640: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/intel/openvino/opencv/lib:/opt/intel/openvino/deployment_tools/ngraph/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib::/opt/intel/openvino/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/omp/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64:/opt/intel/openvino/opencv/lib:/opt/intel/openvino/deployment_tools/ngraph/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib::/opt/intel/openvino/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/omp/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64
2021-11-02 12:17:36.535711: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
2021-11-02 12:17:39.133727: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-11-02 12:17:39.137180: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/intel/openvino/opencv/lib:/opt/intel/openvino/deployment_tools/ngraph/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib::/opt/intel/openvino/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/omp/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64:/opt/intel/openvino/opencv/lib:/opt/intel/openvino/deployment_tools/ngraph/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib::/opt/intel/openvino/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/omp/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64
2021-11-02 12:17:39.137389: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-11-02 12:17:39.137498: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (549a7cd861ef): /proc/driver/nvidia/version does not exist
2021-11-02 12:17:39.138108: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-11-02 12:17:39.142885: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-11-02 12:17:49.496198: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-11-02 12:17:49.497025: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-11-02 12:17:49.497649: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-11-02 12:17:49.500593: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2712005000 Hz
2021-11-02 12:17:49.706963: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:928] Optimization results for grappler item: graph_to_optimize
function_optimizer: Graph size after: 2624 nodes (2290), 2856 edges (2515), time = 70.622ms.
function_optimizer: Graph size after: 2624 nodes (0), 2856 edges (0), time = 32.236ms.
Optimization results for grappler item: __inference_map_while_body_7527_15976
function_optimizer: function_optimizer did nothing. time = 0.002ms.
function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_map_while_cond_7526_16191
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: __inference_Postprocessor_BatchMultiClassNonMaxSuppression_map_while_body_10147_17645
function_optimizer: function_optimizer did nothing. time = 0.002ms.
function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: __inference_Postprocessor_BatchMultiClassNonMaxSuppression_map_while_cond_10146_5704
function_optimizer: function_optimizer did nothing. time = 0.001ms.
function_optimizer: function_optimizer did nothing. time = 0ms.

[ ERROR ] Exception occurred during running replacer "ObjectDetectionAPIPreprocessor2Replacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIPreprocessor2Replacement'>): The matched sub-graph contains network input node "input_tensor".
For more information please refer to Model Optimizer FAQ, question #75. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=75#question-75)

I then edited the config file, adding "coordinates_swap_method": "swap_weights" at line 20, and the model does convert; however, when I run inference with it the predictions are wrong, so I suppose this is not the right approach. There is already an open issue similar to mine here.

I believe it's something related to the TensorFlow model, probably a version issue (the model being too recent).

@jgespino
Contributor

jgespino commented Nov 2, 2021

Hi @dacquaviva

Please add your model and error to the other issue you mentioned.

Regarding the faster_rcnn_support_api_updated.json, it is not added to the OpenVINO package because it was modified to match a custom-trained model. We only include configuration files for the models posted on the TensorFlow Object Detection repository.

Regards,
Jesus

@jgespino jgespino closed this as completed Nov 2, 2021
@jgespino
Contributor

Hi @dacquaviva

It seems the other issue may have been deleted. If you still need help, please open a new issue with your model and error message.

Regards,
Jesus

@dacquaviva

Hi @jgespino

Yes, I'm still stuck there, thanks for notifying me. I created a new issue with my error message and steps to reproduce it.

Regards
Daniele
