Python-level JIT compiler designed to make unmodified PyTorch programs
faster. It provides a clean API for compiler backends to hook in, and its
biggest feature is to dynamically modify Python bytecode right before it
- is executed. In the pytorch/xla 2.0 release, PyTorch/XLA provided an
+ is executed. In the 2.0 release, PyTorch/XLA provided an
experimental backend for TorchDynamo for both inference and
training.
@@ -52,7 +52,7 @@ def eval_model(loader):
```
With ` torch.compile `, you will see that PyTorch/XLA only traces the
- resent18 model once during the init time and executes the compiled
+ ResNet-18 model once at init time and executes the compiled
binary every time ` dynamo_resnet18 ` is invoked, instead of tracing the
model every time. Here is an inference speed analysis comparing Dynamo
and Lazy using TorchBench on a Cloud TPU v4-8.
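
As a point of reference, here is a minimal sketch of what a compiled inference path like ` dynamo_resnet18 ` can look like. The ` openxla ` backend string, the model construction, and the input shape are assumptions for illustration rather than text from the post; earlier releases exposed the Dynamo bridge under different experimental backend names.

``` python
import torch
import torchvision
import torch_xla.core.xla_model as xm

# Move the model to the XLA device and compile it once with the Dynamo
# backend; the backend name here is an assumption and may differ by release.
device = xm.xla_device()
xla_resnet18 = torchvision.models.resnet18().to(device)
xla_resnet18.eval()
dynamo_resnet18 = torch.compile(xla_resnet18, backend="openxla")

# The first call triggers tracing and compilation; later calls with the same
# input shape reuse the compiled executable.
sample = torch.randn(1, 3, 224, 224, device=device)
with torch.no_grad():
    output = dynamo_resnet18(sample)
```

The first invocation pays the tracing and compilation cost; subsequent invocations run the cached executable, which is where the speedup over Lazy tracing comes from.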
@@ -112,7 +112,7 @@ and Lazy using torch bench on Cloud TPU v4-8
PyTorch/XLA also supports Dynamo for training, but it is experimental
and we are working with the PyTorch Compiler team to iterate on the
- implementation. Here is an example of training a resnet18 with
+ implementation. Here is an example of training a ResNet-18 with
` torch.compile `
``` python
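# The concrete training example is not shown in this hunk; the lines below are
# an illustrative sketch only. The "openxla" backend string and the optimizer
# setup are assumptions, not taken from the original post.
import torch
import torchvision
import torch_xla.core.xla_model as xm

def train_step(model, optimizer, data, target):
    # Forward, loss, backward, and the optimizer update in one function so
    # Dynamo can capture the whole training step.
    loss_fn = torch.nn.CrossEntropyLoss()
    optimizer.zero_grad()
    loss = loss_fn(model(data), target)
    loss.backward()
    optimizer.step()
    return loss

device = xm.xla_device()
xla_resnet18 = torchvision.models.resnet18().to(device)
xla_resnet18.train()
optimizer = torch.optim.SGD(xla_resnet18.parameters(), lr=0.01)

dynamo_train_step = torch.compile(train_step, backend="openxla")

data = torch.randn(16, 3, 224, 224, device=device)
target = torch.randint(0, 1000, (16,), device=device)
loss = dynamo_train_step(xla_resnet18, optimizer, data, target)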
@@ -218,7 +218,7 @@ training.
## Takeaway
- TorchDynamo provides a really promising way for the compiler backend to
+ TorchDynamo provides a promising way for the compiler backend to
hide the complexity from the user and easily retrieve the modeling code
in a graph format. Compared with PyTorch/XLA's traditional Lazy Tensor
way of extracting the graph, TorchDynamo can skip the graph tracing for
@@ -228,6 +228,5 @@ Most models supported by PyTorch/XLA, have seen significant speedup when
running inference with the new dynamo-xla bridge. Our community is
working hard to expand the set of supported models. Regarding the
training feature gaps mentioned above, the PyTorch/XLA community is
- super excited to improve the training gap in our upcoming development
- work. The team continues to heavily invest in TorchDynamo and work with
- the upstream to mature the training story.
+ excited to close the training gap in our upcoming development
+ work. The team continues to invest in TorchDynamo.