[quant] Add dequantize.tensors (Closed)

Changes from all commits (29 commits)
d1147d7  [quant] Add dequantize.tensors (jerryzh168, Mar 6, 2020)
74d5bf4  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 7, 2020)
a9084ff  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 7, 2020)
9921674  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 9, 2020)
7587dd6  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 10, 2020)
5ea41d6  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 11, 2020)
78c3abc  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 11, 2020)
f3da80e  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 12, 2020)
45f7d4f  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 16, 2020)
d092b5f  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 17, 2020)
5151729  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 17, 2020)
78d6231  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 17, 2020)
1eb83d7  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 17, 2020)
ec3541c  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 18, 2020)
9b3eb77  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 18, 2020)
c381bb5  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 18, 2020)
9a93036  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 19, 2020)
c1eaaed  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 19, 2020)
8a6d2a6  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 19, 2020)
971cd89  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 19, 2020)
7285268  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 19, 2020)
793d02b  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 19, 2020)
11ae851  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 19, 2020)
3098c87  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 20, 2020)
053898f  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 20, 2020)
8f768ff  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 20, 2020)
e73f1d0  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 20, 2020)
f4696de  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 20, 2020)
d60b690  Update on "[quant] Add dequantize.tensors" (jerryzh168, Mar 21, 2020)
7 changes: 6 additions & 1 deletion aten/src/ATen/native/native_functions.yaml
@@ -3629,12 +3629,17 @@
dispatch:
CPU: quantize_per_channel_cpu

- func: dequantize(Tensor self) -> Tensor
- func: dequantize.self(Tensor self) -> Tensor
use_c10_dispatcher: full
variants: function, method
dispatch:
QuantizedCPU: dequantize_quant

- func: dequantize.tensors(Tensor[] tensors) -> Tensor[]
Reviewer (Contributor):
Why is ListConstruct special? What about other structures like Tuple or Dict of tensors?

Author (jerryzh168):
We'll need to handle those as well if they come up in real use cases, but so far I haven't seen that. The reason we have lists is mostly that we have quantized ops like cat that take a list as input. If we get quantized ops that take a Dict or Tuple as input, we'll probably need to add support for those types as well.

variants: function
dispatch:
QuantizedCPU: dequantize_tensors_quant

- func: q_scale(Tensor self) -> float
use_c10_dispatcher: full
variants: function, method
8 changes: 8 additions & 0 deletions aten/src/ATen/native/quantized/QTensor.cpp
@@ -32,6 +32,14 @@ Tensor dequantize_quant(const Tensor& self) {
return get_qtensorimpl(self)->quantizer()->dequantize(self);
}

std::vector<Tensor> dequantize_tensors_quant(TensorList tensors) {
Reviewer (Contributor):
Don't return a std::vector, return a TensorList instead (TensorList == torch::List). std::vector requires a copy behind the scenes.

Author (jerryzh168):
I think that is not the type generated by native_functions.yaml. I looked up another function that returns Tensor[] and it has:

795:std::vector<Tensor> meshgrid(TensorList tensors); // {"schema": "aten::meshgrid(Tensor[] tensors) -> Tensor[]", "compound": "true"}

std::vector<Tensor> dequantized_tensors;
for (size_t i = 0; i < tensors.size(); ++i) {
Reviewer:
Can we use a parallel for here?

Author (jerryzh168):
I would delay this optimization until there is a need; most likely this function will never be run, since it will be fused into the quantized::cat pattern.

dequantized_tensors.push_back(tensors[i].dequantize());
}
return dequantized_tensors;
}
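The loop above simply maps per-tensor `dequantize()` over the input list. As a rough illustration of what each per-tensor dequantization computes for a per-tensor-affine quantized tensor, here is a pure-Python sketch; the helper names and the use of plain Python lists are illustrative, not the actual ATen implementation:

```python
def dequantize_one(qvals, scale, zero_point):
    # Per-tensor affine dequantization: x = scale * (q - zero_point).
    return [scale * (q - zero_point) for q in qvals]

def dequantize_tensors(tensors, scale, zero_point):
    # Mirrors the C++ loop: dequantize each tensor in the list, in order.
    return [dequantize_one(t, scale, zero_point) for t in tensors]
```

This keeps the list version a thin wrapper over the per-tensor op, which is also why the author is comfortable deferring parallelization.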

double q_scale_quant(const Tensor& self) {
auto quantizer = get_qtensorimpl(self)->quantizer();
TORCH_CHECK(quantizer->qscheme() == kPerTensorAffine);
1 change: 1 addition & 0 deletions docs/source/torch.rst
@@ -55,6 +55,7 @@ Creation Ops
.. autofunction:: full_like
.. autofunction:: quantize_per_tensor
.. autofunction:: quantize_per_channel
.. autofunction:: dequantize

Indexing, Slicing, Joining, Mutating Ops
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -112,6 +112,7 @@
('aten::_linear_prepack', datetime.date(2020, 4, 1)),
('aten::_conv2d_packed', datetime.date(2020, 4, 1)),
('aten::_conv2d_prepack', datetime.date(2020, 4, 1)),
('aten::dequantize', datetime.date(2020, 4, 1)),
('aten::confirmed_by_owner', datetime.date(2020, 3, 17)),
('aten::owner', datetime.date(2020, 3, 27)),
('aten::owner_name', datetime.date(2020, 3, 27)),
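For context, each entry in this list pairs an operator name with an expiry date: schema changes to that operator are exempt from the backward-compatibility check until the date passes. A minimal sketch of how such an allow-list might be consulted (the helper `change_allowed` is hypothetical, not the actual test code):

```python
import datetime

# Shape matches the entries in the diff above.
ALLOW_LIST = [
    ('aten::dequantize', datetime.date(2020, 4, 1)),
]

def change_allowed(op_name, today, allow_list=ALLOW_LIST):
    # A schema change is tolerated if the op is on the allow-list
    # and its expiry date has not yet passed.
    return any(name == op_name and today < expiry
               for name, expiry in allow_list)
```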
1 change: 1 addition & 0 deletions tools/pyi/gen_pyi.py
@@ -92,6 +92,7 @@
'div_',
'div_out',
'floor_divide', 'floor_divide_', 'floor_divide_out',
'dequantize',
]


9 changes: 9 additions & 0 deletions torch/__init__.pyi.in
@@ -158,3 +158,12 @@ def compiled_with_cxx11_abi() -> _bool: ...
# (similar to `unique`, `lu`, etc.); as such, it is not
# possible to type correctly
def nonzero(input: Tensor, *, out: Optional[Tensor]=None, as_tuple: Optional[_bool]=None): ...

# we can't auto generate hints for torch.dequantize because it will generate
# `dequantize(*tensors) -> Union[Tuple[Tensor, ...], List[Tensor]]: ...`
# which overlaps with
# `dequantize(self: Tensor) -> Tensor: ...`
@overload
def dequantize(self: Tensor) -> Tensor: ...
@overload
def dequantize(tensors: Union[Tuple[Tensor, ...], List[Tensor]]) -> Union[Tuple[Tensor, ...], List[Tensor]]: ...
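The two stubs above exist because a single signature cannot express "Tensor in, Tensor out; sequence in, sequence out". At runtime the dispatch is by argument type, roughly like this pure-Python sketch (`dequantize_one` is a hypothetical stand-in for the per-tensor op, with floats standing in for tensors):

```python
from typing import List, Sequence, Union

def dequantize_one(t: float) -> float:
    # Hypothetical stand-in for per-tensor dequantization.
    return float(t)

def dequantize(arg: Union[float, Sequence[float]]) -> Union[float, List[float]]:
    # Sequence input -> list of results; single input -> single result.
    if isinstance(arg, (list, tuple)):
        return [dequantize_one(t) for t in arg]
    return dequantize_one(arg)
```

Writing this as one auto-generated hint would collapse both cases into an overlapping union, which is why the hints are hand-written with `@overload` instead.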
17 changes: 17 additions & 0 deletions torch/_torch_docs.py
@@ -1513,6 +1513,23 @@ def merge_dicts(*dicts):
-1.8209, -2.9780, -3.4022])
""".format(**reduceops_common_args))

add_docstr(torch.dequantize,
r"""
.. function:: dequantize(tensor) -> Tensor

Given a quantized Tensor, dequantize it and return an fp32 Tensor.

Args:
tensor (Tensor): A quantized Tensor

.. function:: dequantize(tensors) -> sequence of Tensors

Given a list of quantized Tensors, dequantize them and return a list of fp32 Tensors.

Args:
tensors (sequence of Tensors): A list of quantized Tensors
""")

add_docstr(torch.diag,
r"""
diag(input, diagonal=0, out=None) -> Tensor