[BE] Do not use unicode quotes (#99446)
They are mostly used in commented code examples, but even Python 3.12 does not recognize `“foobar”` as a valid string literal.

I.e. just `s/[“”]/"/`
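
For reference, a hedged Python equivalent of that substitution; the `fix_quotes` helper and the file handling below are illustrative, not how this commit was produced:

```python
import pathlib
import re
import sys

def fix_quotes(path: str) -> None:
    # Equivalent of s/[“”]/"/ applied to one file (illustrative helper only).
    p = pathlib.Path(path)
    p.write_text(re.sub(r"[“”]", '"', p.read_text(encoding="utf-8")), encoding="utf-8")

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        fix_quotes(filename)
```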

Pull Request resolved: #99446
Approved by: https://github.com/huydhn, https://github.com/ezyang
malfet authored and pytorchmergebot committed Apr 18, 2023
1 parent 2b49a73 commit 8a89eec
Showing 3 changed files with 8 additions and 8 deletions.
6 changes: 3 additions & 3 deletions torch/_dynamo/variables/builder.py
@@ -1136,18 +1136,18 @@ def wrap_to_fake_tensor_and_record(
 curr_sizes = None
 if name not in tx.output.frame_state:
     # If there is no entry for this source, add the tensor to frame state with its current static size.
-    # E.g., {} -> {“x”: [2, 4]}
+    # E.g., {} -> {"x": [2, 4]}
     curr_sizes = list(e.size())
 else:
     curr_sizes = tx.output.frame_state[name]
     if curr_sizes is not None:
         if e.ndim != len(curr_sizes):
             # If there is already an entry, and the dim mismatches, replace the frame state entry with None.
-            # E.g. {“x”: [2, 3, 4]} -> {“x”: None}
+            # E.g. {"x": [2, 3, 4]} -> {"x": None}
             curr_sizes = None
         else:
             # If there is already an entry, and the dim matches, for every size in the frame state which
-            # disagrees with the current static size, replace it with None. E.g., {“x”: [2, 3]} -> {“x”: [2, None]}
+            # disagrees with the current static size, replace it with None. E.g., {"x": [2, 3]} -> {"x": [2, None]}
             for i, dim in enumerate(curr_sizes):
                 if e.size()[i] != dim:
                     curr_sizes[i] = None
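
For readers new to this code, here is a minimal, self-contained sketch of the frame-state update rule the comments above describe; the `update_frame_state` helper and the plain dict are illustrative stand-ins for Dynamo's `tx.output.frame_state`, not the actual API:

```python
# Illustrative sketch only: a plain-dict stand-in for Dynamo's frame state.
def update_frame_state(frame_state, name, size):
    if name not in frame_state:
        # First sighting of this source: record its current static size.
        # {} -> {"x": [2, 4]}
        frame_state[name] = list(size)
        return
    curr_sizes = frame_state[name]
    if curr_sizes is None:
        return
    if len(size) != len(curr_sizes):
        # Rank changed: mark the whole entry as dynamic.
        # {"x": [2, 3, 4]} -> {"x": None}
        frame_state[name] = None
    else:
        # Same rank: mark only the dims that disagree as dynamic.
        # {"x": [2, 3]} -> {"x": [2, None]}
        frame_state[name] = [
            curr if curr == new else None for curr, new in zip(curr_sizes, size)
        ]

frame_state = {}
update_frame_state(frame_state, "x", (2, 4))  # {"x": [2, 4]}
update_frame_state(frame_state, "x", (2, 5))  # {"x": [2, None]}
```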
2 changes: 1 addition & 1 deletion torch/_functorch/autograd_function.py
@@ -500,7 +500,7 @@ def get_tangents_in_dims(input_dims, tangents):
 # def backward_no_context(gy):
 #     return gy.expand([B, 4])
 #
-# gx = vmap(backward_no_context, dims)(gy: “Tensor[B]”)
+# gx = vmap(backward_no_context, dims)(gy: "Tensor[B]")
 #
 # This gives us the wrong result (gx has shape [B, B, 4], but it should
 # have shape [4]). Performing vmap over setup_context means the shape
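
As a hedged, runnable illustration of the shape problem this comment describes (the batch size `B = 3` and the scalar per-example `gy` entries are assumptions made for the sketch, not values from the file):

```python
import torch
from torch.func import vmap

B = 3
gy = torch.randn(B)  # one grad-output scalar per example in the batch

def backward_no_context(gy_i):
    # The batch size is baked into the expand, as in the comment above.
    return gy_i.expand([B, 4])

gx = vmap(backward_no_context)(gy)
print(gx.shape)  # torch.Size([3, 3, 4]): an extra batch dim instead of the per-sample [4]
```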
8 changes: 4 additions & 4 deletions torch/ao/quantization/fx/README.md
@@ -202,10 +202,10 @@ The overall logic to insert QDQStub1 and QDQStub2 inplace is the following:
 # node_name_to_target_dtype_info =
 # {
 #   # this is placeholder node in FX Graph
-#   “input” : {“input_activation”: torch.float32, “output_activation”: torch.float32},
-#   “qat_linear_relu”: {“input_activation”: torch.quint8, “output_activation”: torch.quint8, “weight”: ...}
+#   "input" : {"input_activation": torch.float32, "output_activation": torch.float32},
+#   "qat_linear_relu": {"input_activation": torch.quint8, "output_activation": torch.quint8, "weight": ...}
 #   # this is the return node in FX Graph
-#   “output”: {“input_activation”: torch.float32, “output_activation”: torch.float32}
+#   "output": {"input_activation": torch.float32, "output_activation": torch.float32}
 # }
 ```
 Note: this map is generated before we insert qdqstub to graph1, and will not change in the process.
@@ -259,7 +259,7 @@ Let’s say the output of `qat_linear_relu` Node is configured as float32, both
 }
 ```
 
-What we’ll do here is when we are trying to insert output QDQStub for `qat_linear_relu`, we look at the target output dtype for this node (node_name_to_target_dtype_info[“qat_linear_relu”][“output_activation”], and find that it is float, which is not a quantized dtype, so
+What we’ll do here is when we are trying to insert output QDQStub for `qat_linear_relu`, we look at the target output dtype for this node (node_name_to_target_dtype_info["qat_linear_relu"]["output_activation"], and find that it is float, which is not a quantized dtype, so
 will do nothing here.
 Note that this does not prevent other operators following `qat_linear_relu` to insert a QDQStub at the output of `qat_linear_relu`, since we are dealing with an `edge` of the graph here, and an `edge` is connected to two nodes, which means
 the output of `qat_linear_relu` will also be the input of a node following `qat_linear_relu`.
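
To make the dtype check concrete, here is a minimal sketch of the lookup described above; the map literal and the `QUANTIZED_DTYPES` set are illustrative assumptions, not the actual structures used by the FX quantization code:

```python
import torch

# Illustrative stand-in for the map shown earlier in this README section.
node_name_to_target_dtype_info = {
    "qat_linear_relu": {
        "input_activation": torch.quint8,
        "output_activation": torch.float32,  # output configured as float here
    },
}

QUANTIZED_DTYPES = {torch.quint8, torch.qint8, torch.qint32}  # assumed for this sketch

output_dtype = node_name_to_target_dtype_info["qat_linear_relu"]["output_activation"]
if output_dtype not in QUANTIZED_DTYPES:
    # The target output dtype is float, so no output QDQStub is inserted here.
    print("skip inserting output QDQStub for qat_linear_relu")
```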