
Add clarification for bias quantization in QlinearConv Op spec #2464

Merged: 4 commits into onnx:master on Dec 5, 2019

Conversation

askhade (Contributor) commented on Nov 18, 2019

Add clarification for bias quantization.

This change should not need a version update because there is only one way to quantize the bias for the QLinearConv op to produce a correct result. This PR only adds clear documentation on how the bias should be quantized.
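For context (not quoted from the PR), the rule being documented in the QLinearConv spec is that the optional bias input is an int32 tensor quantized with scale = x_scale * w_scale and zero_point = 0, so it can be added directly to the op's int32 accumulator. Below is a minimal NumPy sketch of that quantization; the helper name and the scale/bias values are hypothetical and for illustration only.

```python
import numpy as np

def quantize_qlinearconv_bias(bias_fp32, x_scale, w_scale):
    # Hypothetical helper, not an ONNX API.
    # w_scale may be a scalar or a per-output-channel array; either way the
    # effective bias scale is x_scale * w_scale and the zero point is 0.
    bias_scale = np.asarray(x_scale, dtype=np.float32) * np.asarray(w_scale, dtype=np.float32)
    return np.round(np.asarray(bias_fp32, dtype=np.float32) / bias_scale).astype(np.int32)

# Made-up scales and bias values, for illustration only.
bias_q = quantize_qlinearconv_bias([0.25, -0.5], x_scale=0.02, w_scale=[0.05, 0.1])
print(bias_q)  # -> [ 250 -250]
```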

askhade requested a review from a team as a code owner on November 18, 2019 21:04
askhade (Contributor, Author) commented on Nov 18, 2019

@linkerzhang, @fdwr: Added the clarification to the spec as discussed.

linkerzhang (Member) left a comment


Thank you very much for the clarification!

linkerzhang merged commit ad1f556 into onnx:master on Dec 5, 2019
fdokic added a commit to deep500/onnext that referenced this pull request on Dec 18, 2019:
* Update Argmin/Argmax (onnx#2461)

* update argmin/argmax

* gen doc and tests

* fix typo

* update doc

* Updated with correct URL to LICENSE (onnx#2468)

Updated license to https://github.com/onnx/onnx/blob/master/LICENSE

* remove workshop update since it is done (onnx#2460)

* doc: fix some typos at ONNXIFI (onnx#2473)

* Add remove operator and function requirements to the add new op doc. (onnx#2486)

* Add clarification for bias quantization in QlinearConv Op spec (onnx#2464)

* fix the optimize pass of fuse_consecutive_transposes (onnx#2471)

* fix the optimize pass of fuse_consecutive_transposes

* update the bound checker

* make sure the graph and input/output tensors are consistent before and after optimization

* Minor correction type (onnx#2411)

* correct typeerror

* value data type correction

* little value

* Edited PythonAPIOverview.md (onnx#2491)

* Edited PythonAPIOverview.md

The example given in the "Creating an ONNX Model Using Helper
Functions" example.

* Edited PythonAPIOverview.md

The example given in the "Creating an ONNX Model Using Helper
Functions" example was not working as expected.

Running the given code would throw a ValidationError() regarding
the node specification (i.e.: "Context: Bad node spec").

This change uses the current node specification, solving the issue.

* python_out does not recognize dllexport_decl. (onnx#2482)

* add partial_update_inputs_outputs_dims()

* Update update_model_dims.py
winnietsang added this to the 1.7 milestone on Feb 12, 2020
jcwchen pushed a commit to jcwchen/onnx that referenced this pull request Sep 23, 2020