Commit c3fc611 ("fix typo"), 1 parent: 4e45d57

1 file changed: docs/faq/onnx.mdx (2 additions, 2 deletions)
````diff
@@ -18,7 +18,7 @@ The framework prioritizes dynamic `batch` or static `batch` with `batchsize==1`.
 - When the batch dimension is specified as dynamic, older versions of TensorRT handle it less efficiently and introduce redundant operators. For example, for ``x.view(x.size(0), -1)``, Gather and other operators are introduced into the ONNX graph to compute the first dimension of x. This can be rewritten as ``x = x.view(-1, int(x.size(1)*x.size(2)*x.size(3)))`` or ``x = torch.flatten(x, 1)``, although this is not strictly necessary.
 - For some models (TensorRT 8.5.1, LSTM, and Transformer), making both the batch dimension and a non-batch dimension dynamic may consume more resources:
 - For LayerNorm layers and Transformer-like networks with a dynamic batch size, opset>=17 and TensorRT>=8.6.1 are recommended.
-:::
+
 
 ```bash
 # When both batch and non-batch dimensions are dynamic, it takes 9ms (inference input size is optShapes=input:1x1000x80,mask:1x1x1000):
````
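The view-vs-flatten point in the context lines above can be illustrated with a minimal sketch in plain PyTorch (the `Head` module name is hypothetical; the exporter behavior described in the comments is what the doc attributes to `x.view(x.size(0), -1)`):

```python
import torch

class Head(torch.nn.Module):
    """Hypothetical head that flattens all non-batch dimensions."""
    def forward(self, x):
        # x.size(0) is a runtime value under a dynamic batch, so the ONNX
        # exporter emits Shape/Gather ops to recover the first dimension.
        a = x.view(x.size(0), -1)
        # Same result, but exported as a single Flatten node.
        b = torch.flatten(x, 1)
        # Third option: fold the non-batch dims into a Python constant.
        c = x.view(-1, int(x.size(1) * x.size(2) * x.size(3)))
        assert a.shape == b.shape == c.shape
        return b

x = torch.randn(2, 3, 4, 5)
y = Head()(x)
assert y.shape == (2, 60)
```

All three variants are numerically identical; they differ only in how the exported ONNX graph computes the output shape.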
````diff
@@ -104,7 +104,7 @@ torchpipe.utils.models.onnx_export(m, onnx_path, torch.randn(1, 3, 224, 224), op
 </details>
 </details>
 
-### 转换失败说明
+### Reasons for Conversion Failure
 
 
 When converting from torch to ONNX, it is common to encounter conversion failures. Here are some methods that can be used:
````
