This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Conversation


@bfineran bfineran commented Apr 5, 2022

Ensures that if a module is quantized with quantize_conv_activations=False, the new ONNX export conversion to ConvInteger will be run. Maintains backwards compatibility with existing recipes and exports.
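The dispatch described above can be sketched as follows. This is a minimal, hypothetical illustration (the helper name and structure are assumptions, not the repository's actual export code): when conv activations are not quantized, the weight-only quantized conv is exported via the ConvInteger path rather than the fully quantized QLinearConv path.

```python
def select_conv_conversion(quantize_conv_activations: bool) -> str:
    """Hypothetical helper illustrating the export-mode choice.

    QLinearConv requires quantized activations as well as weights;
    ConvInteger performs an integer convolution whose output is
    dequantized separately, matching the weight-only case.
    """
    return "QLinearConv" if quantize_conv_activations else "ConvInteger"


print(select_conv_conversion(False))  # ConvInteger
print(select_conv_conversion(True))   # QLinearConv
```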

@bfineran bfineran requested a review from a team April 5, 2022 03:12
@bfineran bfineran self-assigned this Apr 5, 2022
@bfineran bfineran requested review from InquestGeronimo, KSGulin and natuan and removed request for a team April 5, 2022 03:12

@rahul-tuli rahul-tuli left a comment


@anmarques anmarques merged commit 4b3f664 into quant-refactor-conversion Apr 7, 2022
@anmarques anmarques deleted the quant-refactor-export branch April 7, 2022 14:15
bfineran added a commit that referenced this pull request Apr 8, 2022
* ConvInteger quantization conversion for quant refactor

* [quantization-refactor] mark/propagate conv export mode (#672)

* batch norm fold with existing bias param bug fix
anmarques added a commit that referenced this pull request Apr 8, 2022
* Removed output quantization from conv layers

* Added _Add_ReLU module that enables QATWrapper for quantization.

* Removed quantization of output for linear and conv layers by default. Removed fusing of BN and ReLU by default.

* Minor fixes. Style and quality fixes.

* Added support for freezing BN stats.

* Added mode argument to wrapping of train function in BNWrapper.

* Set BN fusing back as default.

* Fixed custom freeze_bn_stats.

* Temporary files for evaluating changes to graphs.

* Added support for the tensorrt flag. Moved the computation of the quantization range to get_qat_config_config, where it has full information about the data type.

* Added support for TensorRT quantization.

* Included a check to account for when weight_qconfig_kwargs is None.

* Modified argument names for backwards compatibility.

* Updated documentation to reflect changes.

* Fixed default weights data type.

* Removed unused method.

* Removed testing files.

* Changed call to get_qat_qconfig to not specify symmetry and data type arguments for the default case.

* Changed default number of activation and weight bits from None to 8.

* Revert "Changed default number of activation and weight bits from None to 8."

This reverts commit 95e966ed929fa3512331a73667d5ba2ac3d594b1.

* Revert "Changed call to get_qat_qconfig to not specify symmetry and data type arguments for default case."

This reverts commit a675813.

* Lumped qconfig properties into a dataclass.

* Resetting conv and linear activation flags to True.

* Renamed class BNWrapper as _BNWrapper.

* Added logging messages for when tensorrt forces overriding of configs.

* Style and quality fixes.

* ConvInteger quantization conversion for quant refactor (#644)

* [quantization-refactor] mark/propagate conv export mode (#672)

* batch norm fold with existing bias param bug fix

* Quantization Refactor Tests (#685)

* rebase import fix

* update manager serialization test cases for new quantization params

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>
Co-authored-by: spacemanidol <dcampos3@illinois.edu>
Co-authored-by: Benjamin <ben@neuralmagic.com>
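The "lumped qconfig properties into a dataclass" item in the commit list above could look roughly like this. Field names and defaults here are illustrative assumptions, not the repository's actual API:

```python
from dataclasses import dataclass


@dataclass
class QConfigProperties:
    """Hypothetical sketch of quantization-config properties grouped
    into a single dataclass instead of loose keyword arguments."""

    symmetric_activations: bool = False
    symmetric_weights: bool = True
    activation_bits: int = 8
    weight_bits: int = 8
    tensorrt: bool = False  # TensorRT mode may override the fields above


# Grouping the settings keeps call sites to one argument instead of many.
props = QConfigProperties(tensorrt=True)
print(props.weight_bits)  # 8
```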
dbogunowicz pushed a commit that referenced this pull request Apr 11, 2022