
Commit 532b435

stylistic changes
1 parent 6b84afb commit 532b435

File tree

1 file changed: +6 -7 lines


docs/source/perf/dynamo.md

Lines changed: 6 additions & 7 deletions
@@ -4,7 +4,7 @@
 Python-level JIT compiler designed to make unmodified PyTorch programs
 faster. It provides a clean API for compiler backends to hook in and its
 biggest feature is to dynamically modify Python bytecode right before it
-is executed. In the pytorch/xla 2.0 release, PyTorch/XLA provided an
+is executed. In the 2.0 release, PyTorch/XLA provided an
 experimental backend for the TorchDynamo for both inference and
 training.

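As context for the hunk above (not part of this commit), here is a minimal sketch of the "backend hook" the doc refers to: TorchDynamo captures the Python code into an FX graph and passes it to a user-supplied backend function. The names `my_backend` and `fn` are illustrative assumptions, not content from the file being edited.

``` python
import torch

# Hypothetical minimal backend: Dynamo hands this function the captured
# torch.fx.GraphModule plus example inputs; it must return a callable.
def my_backend(gm, example_inputs):
    print(gm.graph)      # inspect the captured graph
    return gm.forward    # run the captured graph unmodified

@torch.compile(backend=my_backend)
def fn(x):
    return torch.sin(x) + torch.cos(x)

fn(torch.randn(4))
```
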
@@ -52,7 +52,7 @@ def eval_model(loader):
 ```
 
 With the `torch.compile` you will see that PyTorch/XLA only traces the
-resent18 model once during the init time and executes the compiled
+ResNet-18 model once during the init time and executes the compiled
 binary every time `dynamo_resnet18` is invoked, instead of tracing the
 model every time. Here is a inference speed analysis to compare Dynamo
 and Lazy using torch bench on Cloud TPU v4-8
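For reference, a sketch of the inference pattern the hunk above describes. It assumes the `openxla` Dynamo backend string (backend names have varied across PyTorch/XLA releases), and the `eval_model` / `dynamo_resnet18` names simply follow the identifiers referenced in the diff; the full file content is not shown here.

``` python
import torch
import torchvision
import torch_xla.core.xla_model as xm

def eval_model(loader):
    device = xm.xla_device()
    xla_resnet18 = torchvision.models.resnet18().to(device)
    xla_resnet18.eval()
    # Compile once at init time; later calls reuse the compiled XLA
    # executable instead of re-tracing the model on every invocation.
    dynamo_resnet18 = torch.compile(xla_resnet18, backend='openxla')
    with torch.no_grad():
        for data, _ in loader:
            output = dynamo_resnet18(data.to(device))
```
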
@@ -112,7 +112,7 @@ and Lazy using torch bench on Cloud TPU v4-8
 
 PyTorch/XLA also supports Dynamo for training, but it is experimental
 and we are working with the PyTorch Compiler team to iterate on the
-implementation. Here is an example of training a resnet18 with
+implementation. Here is an example of training a ResNet-18 with
 `torch.compile`
 
 ``` python
@@ -218,7 +218,7 @@ training.
 
 ## Take away
 
-TorchDynamo provides a really promising way for the compiler backend to
+TorchDynamo provides a promising way for the compiler backend to
 hide the complexity from the user and easily retrieve the modeling code
 in a graph format. Compared with PyTorch/XLA's traditional Lazy Tensor
 way of extracting the graph, TorchDynamo can skip the graph tracing for
@@ -228,6 +228,5 @@ Most models supported by PyTorch/XLA, have seen significant speedup when
 running inference with the new dynamo-xla bridge. Our community is
 working hard to expand the set of supported models. Regarding the
 training feature gaps mentioned above, the PyTorch/XLA community is
-super excited to improve the training gap in our upcoming development
-work. The team continues to heavily invest in TorchDynamo and work with
-the upstream to mature the training story.
+excited to improve the training gap in our upcoming development
+work. The team continues to invest in TorchDynamo.

0 commit comments
