
onnxruntime quantization weights not tied #21277

Open · inisis opened this issue Jul 8, 2024 · 6 comments

Labels: quantization (issues related to quantization)

@inisis (Contributor) commented Jul 8, 2024

Describe the issue

I have a model with tied (shared) weights. After quantization, one branch is replaced with the quantized weights, but the other branch still references the original float weights.

[Screenshots of the model graph showing one branch with quantized weights while the other branch still holds the float weights.]

To reproduce

from onnxruntime.quantization import quantize_dynamic, QuantType

model_fp32 = './decoder_model_merged_slim.onnx'
model_quant = './decoder_model_merged_slim_quantized.onnx'

# Dynamically quantize the weights to INT8 and write the result to model_quant.
quantize_dynamic(
    model_input=model_fp32,
    model_output=model_quant,
    weight_type=QuantType.QInt8,
    extra_options={'EnableSubgraph': True},
    per_channel=False,
    reduce_range=False,
)
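
For diagnosis, here is a minimal sketch (assuming the onnx Python package; the helper name is hypothetical) that lists initializers referenced by more than one node, which can show whether a tied weight is still shared and whether every branch was updated after quantization:

import onnx
from collections import defaultdict

def shared_initializers(path):
    # Map every initializer name to the nodes that consume it.
    model = onnx.load(path)
    init_names = {init.name for init in model.graph.initializer}
    users = defaultdict(list)
    for node in model.graph.node:
        for inp in node.input:
            if inp in init_names:
                users[inp].append(node.name or node.op_type)
    # Keep only initializers consumed by more than one node (tied weights).
    return {name: nodes for name, nodes in users.items() if len(nodes) > 1}

# Paths taken from the reproduction above.
print(shared_initializers('./decoder_model_merged_slim.onnx'))
print(shared_initializers('./decoder_model_merged_slim_quantized.onnx'))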

Urgency

No response

Platform

Linux

OS Version

Ubuntu 20.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.18.1

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

The github-actions bot added the quantization (issues related to quantization) label on Jul 8, 2024.
@yufenglee (Member) commented:
You can try running the quantization preprocess step and then calling the quantization script; that should resolve the issue.

@inisis (Contributor, Author) commented Jul 9, 2024

python -m onnxruntime.quantization.preprocess --input decoder_model_merged_slim.onnx --output decoder_model_merged_slim_op.onnx

Running the preprocess step raised an error:

Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/quantization/preprocess.py", line 127, in <module>
    quant_pre_process(
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/quantization/shape_inference.py", line 81, in quant_pre_process
    model = SymbolicShapeInference.infer_shapes(
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/tools/symbolic_shape_infer.py", line 2908, in infer_shapes
    all_shapes_inferred = symbolic_shape_inference._infer_impl()
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/tools/symbolic_shape_infer.py", line 2672, in _infer_impl
    self.dispatcher_[node.op_type](node)
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/tools/symbolic_shape_infer.py", line 1187, in _infer_If
    self._fuse_tensor_type(node, i_out, vi.type, subgraph.output[i_out].type)
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/tools/symbolic_shape_infer.py", line 804, in _fuse_tensor_type
    dst_type.sequence_type.elem_type.tensor_type if is_sequence(dst_type) else dst_type.tensor_type
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/tools/symbolic_shape_infer.py", line 32, in is_sequence
    assert cls_type in ["tensor_type", "sequence_type"]
AssertionError

@inisis (Contributor, Author) commented Jul 11, 2024

Could you please take a look at this, @tianleiwu?

@tianleiwu (Contributor) commented:
@yufenglee, please look at the quantization tool issue.

@yufenglee (Member) commented:
The symbolic shape inference step (symbolic_shape_infer) is what fails. You can disable shape inference with the --skip_symbolic_shape option.

@inisis (Contributor, Author) commented Jul 12, 2024

python -m onnxruntime.quantization.preprocess --input decoder_model_merged_slim.onnx --output decoder_model_merged_slim_op.onnx --skip_symbolic_shape True

After using this option, the model size increases from 113 MB to 265 MB, which is not expected.
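
To see where the extra size comes from (for example, whether shared weight tensors were duplicated during preprocessing), a minimal sketch that sums the stored initializer bytes of each model, assuming the onnx Python package and the file names from the commands above:

import onnx
from onnx import numpy_helper

def initializer_megabytes(path):
    # Sum the raw byte size of every initializer stored in the graph.
    model = onnx.load(path)
    return sum(numpy_helper.to_array(init).nbytes for init in model.graph.initializer) / 1e6

# File names taken from the preprocess command above.
for p in ('./decoder_model_merged_slim.onnx', './decoder_model_merged_slim_op.onnx'):
    print(p, round(initializer_megabytes(p), 1), 'MB')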
