Hi, I have tried two versions of neural-compressor, 1.10.1 and 1.13.1, to quantize the same palm detection model from the OpenCV Model Zoo.
I use the same quantization code for both versions:
import onnx
from neural_compressor.experimental import Quantization, common

# Load the FP32 ONNX model and quantize it with the YAML config below.
model = onnx.load(self.model_path)
quantizer = Quantization(self.config_path)
quantizer.calib_dataloader = common.DataLoader(self.custom_dataset)
quantizer.model = common.Model(model)
q_model = quantizer()
q_model.save(output_name)
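For context, `self.custom_dataset` is not shown above; `common.DataLoader` in the 1.x API wraps any object exposing `__getitem__` and `__len__` that yields `(input, label)` pairs. Below is a minimal stdlib-only sketch of such a dataset; the class name and random-filled samples are my own illustration, mirroring the [1, 256, 256, 3] dummy shape in the config:

```python
import random

class PalmCalibDataset:
    """Hypothetical calibration dataset yielding (input, label) pairs.

    Real code would return preprocessed palm-detection images; here each
    sample is random floats in [-1, 1] with shape [1, 256, 256, 3],
    matching the dummy dataset in the YAML config.
    """

    def __init__(self, num_samples=2, seed=0):
        rng = random.Random(seed)
        self._samples = [
            # Nested lists stand in for an ndarray of shape (1, 256, 256, 3).
            [[[[rng.uniform(-1.0, 1.0) for _ in range(3)]
               for _ in range(256)]
              for _ in range(256)]]
            for _ in range(num_samples)
        ]

    def __len__(self):
        return len(self._samples)

    def __getitem__(self, idx):
        data = self._samples[idx]
        label = 0  # dummy label; the config sets label: True
        return data, label
```

With a dataset like this, `quantizer.calib_dataloader = common.DataLoader(PalmCalibDataset())` would mirror the line above.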
And the same config file:
version: 1.0

model:                                      # mandatory. used to specify model specific information.
  name: mp_palmdet
  framework: onnxrt_qlinearops              # mandatory. supported values are tensorflow, pytorch, pytorch_ipex, onnxrt_integer, onnxrt_qlinear or mxnet; allow new framework backend extension.

quantization:                               # optional. tuning constraints on model-wise for advanced users to reduce tuning space.
  approach: post_training_static_quant      # optional. default value is post_training_static_quant.
  calibration:
    dataloader:
      batch_size: 1
      dataset:
        dummy:
          shape: [1, 256, 256, 3]
          low: -1.0
          high: 1.0
          dtype: float32
          label: True

tuning:
  accuracy_criterion:
    relative: 0.02                          # optional. default is relative; other option is absolute. this example allows 2% relative accuracy loss.
  exit_policy:
    timeout: 0                              # optional. tuning timeout (seconds). default value is 0, which means early stop. combine with the max_trials field to decide when to exit.
  random_seed: 9527                         # optional. random seed for deterministic tuning.
Strangely, the model quantized with the old version runs fine, but the model quantized with the new version cannot be run with the demo.py script from opencv_zoo/models/palm_detection_mediapipe.
I don't know why this happens. Was the relevant interface changed between these versions?
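One way to narrow this down is to diff the operator types in the two quantized models, since a release may change the emitted quantization format (e.g. QDQ-style QuantizeLinear/DequantizeLinear pairs instead of fused QLinear* ops), which the runtime used by demo.py might not support. With the `onnx` package you would collect `[n.op_type for n in onnx.load(path).graph.node]` for each model; the diff itself needs only the stdlib. The op lists below are illustrative, not from the actual models:

```python
from collections import Counter

def diff_op_types(old_ops, new_ops):
    """Return op types whose node counts differ between two models.

    old_ops / new_ops: lists of node op_type strings, e.g. obtained via
    [n.op_type for n in onnx.load(path).graph.node]  (requires onnx).
    """
    old_c, new_c = Counter(old_ops), Counter(new_ops)
    return {
        op: (old_c.get(op, 0), new_c.get(op, 0))
        for op in sorted(set(old_c) | set(new_c))
        if old_c.get(op, 0) != new_c.get(op, 0)
    }

# Illustrative example (made-up op lists):
old = ["QLinearConv", "QLinearConv", "QuantizeLinear", "DequantizeLinear"]
new = ["Conv", "Conv", "QuantizeLinear", "QuantizeLinear",
       "DequantizeLinear", "DequantizeLinear"]
print(diff_op_types(old, new))
```

Any op type that appears only in the new model's column is a candidate for what the older demo.py runtime cannot execute.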
Thank you for your help!