
ERROR : While Quantization of OpenVINO model #1631

Open
ashish-2005 opened this issue Mar 12, 2023 · 8 comments

@ashish-2005

Hi,
I am getting a KeyError during quantization of an OpenVINO model.
The OpenVINO model is the IR of a TensorFlow model (pre-trained, from TensorFlow Hub).

ERROR

[/usr/local/lib/python3.9/dist-packages/networkx/classes/reportviews.py](https://localhost:8080/#) in __getitem__(self, n)
    191                 f"try list(G.nodes)[{n.start}:{n.stop}:{n.step}]"
    192             )
--> 193         return self._nodes[n]
    194 
    195     # Set methods

KeyError: 0
  1. Transformation Function
def transform_fn(data_item):
    """
    Create an image tensor for the quantization process

    Parameters:
        data_item : string tensor holding an image file path

    Returns:
        img (tf.Tensor) : image tensor with shape (1, 512, 512, 3)
    """
    data_item = pathlib.Path(data_item.numpy().decode('UTF8'))
    img = preprocess_image(data_item) 
    return img
  2. Quantization code
# create the calibration dataset from COCO val2017 images
dataloader = tf.data.Dataset.list_files('coco/images/val2017/*.jpg', shuffle=False)  # iterable dataset of file paths
quantization_dataset = nncf.Dataset(dataloader, transform_fn)

# loading original openvino-model for quantization
OpenVino_model = core.read_model(model='IR/efficientdet.xml',weights='IR/efficientdet.bin')

quantized_model = nncf.quantize(OpenVino_model, quantization_dataset)  

I have tried reinstalling NNCF with TensorFlow support (pip install nncf[tensorflow2]) and still no luck.
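For readers following along: the byte-to-path decode step inside transform_fn can be checked in isolation with plain stdlib code (the sample filename below is hypothetical; tf.data.Dataset.list_files yields file paths as byte tensors, which the function decodes to str before wrapping in a Path):

```python
import pathlib

def decode_item(raw: bytes) -> pathlib.Path:
    # Mirrors data_item.numpy().decode('UTF8') from transform_fn:
    # decode the raw bytes to str, then wrap in a pathlib.Path
    return pathlib.Path(raw.decode('UTF8'))

p = decode_item(b'coco/images/val2017/000000000139.jpg')
print(p.suffix)  # .jpg
```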

@vshampor
Contributor

Greetings, @ashish-2005!

Please provide a complete stack trace of your error and perhaps more complete reproduction code (including the way to obtain the model .xml/.bin files) so that we could help you out here.

@ashish-2005
Author

Hi @vshampor,
Thanks for your reply.
The code and the full error are below.

  1. Saving the model
MODEL_DIR_PATH = pathlib.Path("SavedModel")
MODEL_DIR_PATH.mkdir(exist_ok=True)

tf.saved_model.save(model,str(MODEL_DIR_PATH))
  2. Obtaining the IR (.xml/.bin)
from openvino.tools import mo
from openvino.runtime import serialize

model_ir = mo.convert_model(saved_model_dir=str(MODEL_DIR_PATH))
serialize(model_ir,'IR/efficientdet.xml')
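A quick sanity check on the step above (a pure-Python sketch; the paths match the thread): when serialize() is given only an .xml path, it writes the weights file alongside it with the same stem, so the .bin path the later read_model call expects can be derived from the .xml path:

```python
import pathlib

def ir_pair(xml_path: str):
    # The weights file shares the stem of the .xml, with a .bin suffix
    xml = pathlib.Path(xml_path)
    return xml, xml.with_suffix('.bin')

xml, weights = ir_pair('IR/efficientdet.xml')
print(weights.name)  # efficientdet.bin
```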
  3. Transformation Function
def transform_fn(data_item):
    """
    Create an image tensor for the quantization process

    Parameters:
        data_item : string tensor holding an image file path

    Returns:
        img (tf.Tensor) : image tensor with shape (1, 512, 512, 3)
    """
    data_item = pathlib.Path(data_item.numpy().decode('UTF8'))
    img = preprocess_image(data_item) 
    return img
  4. Quantization code
# create the calibration dataset from COCO val2017 images
dataloader = tf.data.Dataset.list_files('coco/images/val2017/*.jpg', shuffle=False)  # iterable dataset of file paths
quantization_dataset = nncf.Dataset(dataloader, transform_fn)

# loading original openvino-model for quantization
OpenVino_model = core.read_model(model='IR/efficientdet.xml',weights='IR/efficientdet.bin')

quantized_model = nncf.quantize(OpenVino_model, quantization_dataset)  

Full error

INFO:openvino.tools.pot.pipeline.pipeline:Inference Engine version:                2022.3.0-9052-9752fafe8eb-releases/2022/3
INFO:openvino.tools.pot.pipeline.pipeline:Model Optimizer version:                 2022.3.0-9052-9752fafe8eb-releases/2022/3
INFO:openvino.tools.pot.pipeline.pipeline:Post-Training Optimization Tool version: 2022.3.0-9052-9752fafe8eb-releases/2022/3
INFO:openvino.tools.pot.statistics.collector:Start computing statistics for algorithms : DefaultQuantization
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-101-f798bbe7bfc6> in <module>
      1 # take long to execute , took me around 15 minutes
----> 2 quantized_model = nncf.quantize(OpenVino_model, quantization_dataset)
      3 # specify `preset` in nncf.quantize() for better result , Default optimizer will be used otherwise

21 frames
/usr/local/lib/python3.9/dist-packages/nncf/quantization/quantize.py in quantize(model, calibration_dataset, preset, target_device, subset_size, fast_bias_correction, model_type, ignored_scope)
     63     if backend == BackendType.OPENVINO:
     64         from nncf.openvino.quantization.quantize import quantize_impl
---> 65         return quantize_impl(model, calibration_dataset, preset, target_device, subset_size,
     66                              fast_bias_correction, model_type, ignored_scope)
     67 

/usr/local/lib/python3.9/dist-packages/nncf/telemetry/decorator.py in wrapped(*args, **kwargs)
     68                                                  event_value=event.int_data)
     69 
---> 70                 retval = fn(*args, **kwargs)
     71 
     72                 if category is not None and category != previous_category:

/usr/local/lib/python3.9/dist-packages/nncf/openvino/quantization/quantize.py in quantize_impl(model, calibration_dataset, preset, target_device, subset_size, fast_bias_correction, model_type, ignored_scope)
    138     engine = OVEngine(engine_config, calibration_dataset, calibration_dataset)
    139     pipeline = pot.create_pipeline(algorithms, engine)
--> 140     compressed_model = pipeline.run(pot_model)
    141     pot.compress_model_weights(compressed_model)
    142     quantized_model = _convert_compressed_model_to_openvino_model(compressed_model)

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/pipeline/pipeline.py in run(self, model)
     50                 current_algo_seq = []
     51 
---> 52         result = self.collect_statistics_and_run(model, current_algo_seq)
     53         logger.update_progress(self._algorithms_steps)
     54         return result

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/pipeline/pipeline.py in collect_statistics_and_run(self, model, algo_seq)
     56     def collect_statistics_and_run(self, model, algo_seq):
     57         # Collect statistics for activations
---> 58         collect_statistics(self._engine, model, algo_seq)
     59 
     60         for algo in algo_seq:

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/statistics/collector.py in collect_statistics(engine, model, algo_seq)
    138 
    139     for algo in algo_seq:
--> 140         algo.register_statistics(model, stats_collector)
    141 
    142     stats_collector.compute_statistics(model)

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/algorithms/quantization/default/algorithm.py in register_statistics(self, model, stats_collector)
    129             minmax_bc_collector = StatisticsCollector(self._engine)
    130         for algo in self.algorithms[1:]:
--> 131             algo.register_statistics(model, minmax_bc_collector)

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/algorithms/quantization/minmax/algorithm.py in register_statistics(self, model, stats_collector)
     62     def register_statistics(self, model, stats_collector):
     63         model = deepcopy(model)
---> 64         fqut.insert_fake_quantize_nodes(self._config, model)
     65         activation_statistics_layout = self.__get_activations_statistics_layout(model)
     66         layers_mapping = fqut.create_renamed_layers_mapping(model, activation_statistics_layout)

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/algorithms/quantization/fake_quantize.py in insert_fake_quantize_nodes(config, model, qscheme)
    186                 ignored_params['scope'].append(key)
    187 
--> 188     GraphTransformer(hardware_config).insert_fake_quantize(model, ignored_params)
    189 
    190 

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/graph/transformer.py in insert_fake_quantize(self, model, ignored_params)
     85             self.fq_insertion.ignored_params = ignored_params_[model_dict['name']] if model.is_cascade \
     86                 else ignored_params_
---> 87             for_graph_and_each_sub_graph_recursively(model_dict['model'], self._insert_fake_quantize)
     88             add_fullname_for_nodes(model_dict['model'])
     89         return model

/usr/local/lib/python3.9/dist-packages/openvino/tools/mo/middle/pattern_match.py in for_graph_and_each_sub_graph_recursively(graph, func)
     44 def for_graph_and_each_sub_graph_recursively(graph: Graph, func: callable):
     45     """ Run a given function `func` for a given graph `graph` and each sub-graph recursively. """
---> 46     func(graph)
     47     for_each_sub_graph_recursively(graph, func)
     48 

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/graph/transformer.py in _insert_fake_quantize(self, graph)
     63         graph.clean_up()
     64 
---> 65         self.fq_propagation.find_and_replace_pattern(graph)
     66         graph.clean_up()
     67 

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/graph/passes.py in find_and_replace_pattern(self, graph)
    317 
    318             # Check that input type is allowed from jumping over
--> 319             m_op = find_operation_matches(self.quantize_agnostic_operations, input_node)
    320             is_scaleshift = output_type == 'Multiply' and nu.get_node_output(output_node, 0)[0].type == 'Add'
    321             if len(m_op) > 1:

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/graph/utils.py in find_operation_matches(src_ops, dst_ops)
    106     for src_op in src_ops:
    107         for dst_op in dst_ops:
--> 108             if operations_matched(src_op, dst_op):
    109                 result.append((src_op, dst_op))
    110     return result

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/graph/utils.py in operations_matched(src_op, dst_op)
     93 
     94     ie_src_op = convert_mo_to_ie_operation(src_op) if ie_in_src_op else src_op
---> 95     ie_dst_op = convert_mo_to_ie_operation(dst_op) if ie_in_dst_op else dst_op
     96     return match_attrs(ie_src_op, ie_dst_op)
     97 

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/graph/utils.py in convert_mo_to_ie_operation(op)
     71         return ie_mo_naming
     72 
---> 73     return process_list(op, op['IE'])
     74 
     75 

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/graph/utils.py in process_list(op, value)
     62                         ie_mo_naming_loc[item[0]] = op[item[1]]
     63                 else:
---> 64                     ie_mo_naming_loc.update(process_list(op, item))
     65             elif isinstance(item, str) and item in op and op[item] is not None:
     66                 ie_mo_naming_loc[item] = op[item]

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/graph/utils.py in process_list(op, value)
     53                 ie_mo_naming_loc = ie_mo_naming['attributes']
     54             if isinstance(item, list):
---> 55                 ie_mo_naming_loc.update(process_list(op, item))
     56             elif isinstance(item, tuple):
     57                 if len(item) == 2 and isinstance(item[0], str) and \

/usr/local/lib/python3.9/dist-packages/openvino/tools/pot/graph/utils.py in process_list(op, value)
     57                 if len(item) == 2 and isinstance(item[0], str) and \
     58                         (callable(item[1]) or isinstance(item[1], str)):
---> 59                     if callable(item[1]) and item[1](op) is not None:
     60                         ie_mo_naming_loc[item[0]] = item[1](op)
     61                     elif item[1] in op and op[item[1]] is not None:

/usr/local/lib/python3.9/dist-packages/openvino/tools/mo/ops/If.py in <lambda>(node)
    296             'IE': [(
    297                 'layer',
--> 298                 [('id', lambda node: self.re_numerate_internal_id_and_get_if_id(node)), 'name', 'type', 'version'],
    299                 [
    300                     '@ports',

/usr/local/lib/python3.9/dist-packages/openvino/tools/mo/ops/If.py in re_numerate_internal_id_and_get_if_id(if_node)
    280         then_graph_nodes = if_node.then_graph.nodes()
    281         for idx in range(len(if_node.then_graph.get_op_nodes())):
--> 282             then_graph_nodes[idx]['internal_layer_id'] = idx
    283         else_graph_nodes = if_node.else_graph.nodes()
    284         for idx in range(len(if_node.else_graph.get_op_nodes())):

/usr/local/lib/python3.9/dist-packages/networkx/classes/reportviews.py in __getitem__(self, n)
    191                 f"try list(G.nodes)[{n.start}:{n.stop}:{n.step}]"
    192             )
--> 193         return self._nodes[n]
    194 
    195     # Set methods

KeyError: 0

Regards
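A side note for readers following the reproduction above: nncf.Dataset wraps the iterable and applies transform_fn lazily, one item at a time, while calibration statistics are collected. A minimal pure-Python stand-in illustrates that contract (MiniDataset is a hypothetical mock, not the real NNCF class):

```python
class MiniDataset:
    """Hypothetical stand-in for nncf.Dataset: wraps a data source and
    an optional per-item transform, applied lazily during iteration."""

    def __init__(self, data_source, transform_func=None):
        self._data_source = data_source
        self._transform = transform_func if transform_func else (lambda x: x)

    def __iter__(self):
        # The calibration loop pulls items one at a time and feeds
        # each transformed result to the model being quantized
        for item in self._data_source:
            yield self._transform(item)

paths = [b'img_000.jpg', b'img_001.jpg']
calib = MiniDataset(paths, lambda raw: raw.decode('UTF8'))
print(list(calib))  # ['img_000.jpg', 'img_001.jpg']
```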

@vshampor
Contributor

@KodiaqQ take a look, please

@KodiaqQ
Collaborator

KodiaqQ commented Mar 16, 2023

Hello, @ashish-2005!
Thank you for your contribution!
May I ask you to provide the .xml file of the affected model, or a repo with the affected TF model, or the code of the model architecture?
We need it for debugging and to localize the problem.

@ashish-2005
Author

ashish-2005 commented Mar 16, 2023

Hi @KodiaqQ,
Thanks for your reply.

I have attached the model .xml file (efficientdet.zip).
As for the model architecture, I am not sure of it, since I am using a pre-trained model from TensorFlow Hub called EfficientDet-D0, which I first saved in the TF SavedModel format and then converted to IR with the OpenVINO Python API.

Regards

@alexsu52
Contributor

Hello @ashish-2005,

Thanks for the provided model. We are working on a fix for the original issue.

As a workaround, I can offer the experimental quantization implementation based on the OpenVINO nGraph Python API; it will replace the current implementation in the future. The following code demonstrates how to call it:

from nncf.experimental.openvino_native.quantization.quantize import quantize_impl

ov_model = ov.Core().read_model('path_to_fp32_model')
ov_model.reshape({0: [1, 512, 512, 3]}) # model reshape is required

dataloader = tf.data.Dataset.list_files('path_to_dataset', shuffle=False)

def transform_fn(data_item):
    data_item = pathlib.Path(data_item.numpy().decode('UTF8'))
    img = preprocess_image(data_item) 
    return img

quantization_dataset = nncf.Dataset(dataloader, transform_fn)

quantized_model = quantize_impl(ov_model, quantization_dataset, 
                                preset=nncf.QuantizationPreset.PERFORMANCE,
                                target_device=nncf.TargetDevice.ANY,
                                subset_size=300,
                                fast_bias_correction=True)

Please use NNCF built from source with fixes #1665 and #1633 applied, and the latest OpenVINO from the master branch.

alexsu52 added a commit that referenced this issue Mar 28, 2023
### Changes

The linear squeeze activation pattern was added

### Reason for changes

Align with the OpenVINO runtime

### Related tickets

#1631

### Tests

N/A
@MaximProshin
Collaborator

Dear @ashish-2005,

All the fixes mentioned above have been merged into the recent NNCF 2.5.0 release. Have you tried it? Is your problem still reproducible?

@ashish-2005
Author

Hi @MaximProshin,

Thanks for the reply. I haven't tried it yet, but I will and let you know soon.

Thanks,
ashish
