[quant] Compute scale and zero point automatically in testing::_quantize #46232
Conversation
💊 CI failures summary — as of commit f027bc5 (more details on the Dr. CI page):

Extra GitHub checks: 1 failed
- codecov.io: 1 failed
Codecov Report

```diff
@@           Coverage Diff            @@
##  gh/z-a-f/74/base  #46232   +/- ##
=====================================
- Coverage    68.34%  68.33%   -0.01%
=====================================
  Files          410     410
  Lines        53804   53813       +9
=====================================
+ Hits         36770   36771       +1
- Misses       17034   17042       +8
```

Continue to review the full report at Codecov.
```diff
 qx = np.round(x / scale + zero_point).astype(np.int64)
 qx = np.clip(qx, qmin, qmax)
 qx = qx.astype(dtype)
-return qx
+return qx, scale, zero_point
```
Are we actually using the returned scale and zero_point anywhere in this PR?
Only in one case did I need it -- this is optional, but because the current function returns only the int_repr
of x, it loses the information about the scale and zero_point, especially when they are inferred internally in this function.
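To illustrate the point above (a hypothetical round-trip, not code from this PR): the returned scale and zero_point are exactly what a caller needs to map the int_repr back to floats, so dropping them makes the quantization lossy in a way the caller cannot undo.

```python
import numpy as np

# Hypothetical round-trip showing why the returned scale/zero_point matter:
# without them, the int_repr alone cannot be dequantized back to floats.
scale, zero_point = 0.05, 10
x = np.array([0.0, 0.5, 1.0], dtype=np.float32)

# Affine quantization (same formula as in the diff above).
qx = np.clip(np.round(x / scale + zero_point), 0, 255).astype(np.uint8)

# Dequantize using the carried-along parameters.
x_hat = (qx.astype(np.float32) - zero_point) * scale
```

Here `x_hat` recovers `x` up to rounding error, but only because `scale` and `zero_point` were kept alongside `qx`.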
```diff
@@ -21,16 +21,27 @@ def _conv_output_shape(input_size, kernel_size, padding, stride, dilation,
         * (dilation - 1)) / stride) + 2 * output_padding + 1


 # Quantization references
-def _quantize(x, scale, zero_point, qmin=None, qmax=None, dtype=np.uint8):
+def _quantize(x, scale=None, zero_point=None, qmin=None, qmax=None,
+              dtype=np.uint8):
```
Might be a good idea to update the docstring for this function to reflect the new behavior.
Do you mean the comment here? This is just an internal function for testing -- do we have a docstring?
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as
Stack from ghstack:
Differential Revision: D24271061