Closed

65 commits
1a599fa
set layer name
wu6u3tw Sep 19, 2023
c875c39
FX converter documentation (#2039)
apbose Sep 21, 2023
19aabdd
aten::split converter (#2232)
apbose Sep 21, 2023
0a939df
DLFW changes (#2281)
apbose Sep 21, 2023
ff4d940
feat: Add ATen lowering pass system (#2280)
gs-olive Sep 22, 2023
65feab1
fix: Support non -1 end idx and <0 start idx in aten::flatten convert…
mfeliz-cruise Sep 22, 2023
e6e8099
docs: [Automated] Regenerating documenation for 65feab1
Sep 22, 2023
3c4c2fe
support for torch.ops.aten.erf.default op
bowang007 Aug 2, 2023
670d2be
feat: support Dynamo converter for torch.ops.aten.erf.default op
bowang007 Sep 22, 2023
ecdc040
fix: Update Torchvision version to address dependency resolution issu…
gs-olive Sep 25, 2023
7daa112
fix: Remove input aliasing of builtin ops (#2276)
gs-olive Sep 26, 2023
b2aa255
docs: [Automated] Regenerating documenation for 7daa112
Sep 26, 2023
1033dff
fix: Allow low rank inputs in Python Runtime (#2282)
gs-olive Sep 27, 2023
76de80d
docs: [Automated] Regenerating documenation for 1033dff
Sep 27, 2023
338e542
fix: Address multi-GPU issue in engine deserialize (#2325)
gs-olive Sep 27, 2023
117161a
docs: [Automated] Regenerating documenation for 338e542
Sep 27, 2023
251405d
feat: support deconv (1d, 2d, and Nd) dynamo converter (#2337)
zewenli98 Sep 27, 2023
a2a983b
docs: [Automated] Regenerating documenation for 251405d
Sep 27, 2023
bece720
Update usage of PyTorch's custom op API (#2193)
zou3519 Sep 28, 2023
78f2721
docs: [Automated] Regenerating documenation for bece720
Sep 28, 2023
765933a
feat: support bmm converter in dynamo (#2248)
bowang007 Sep 28, 2023
0d402fb
docs: [Automated] Regenerating documenation for 765933a
Sep 28, 2023
891c2ef
feat: support 1D, 2D, and 3D avg and max pooling dynamo converters (#…
zewenli98 Sep 29, 2023
253bbd1
docs: [Automated] Regenerating documenation for 891c2ef
Sep 29, 2023
46cfa35
fix: Add support for negative dimensions in reduce (#2347)
gs-olive Sep 29, 2023
5de208f
docs: [Automated] Regenerating documenation for 46cfa35
Sep 29, 2023
42e514b
feat: Add tensor type enforcement for converters (#2324)
gs-olive Sep 29, 2023
ab1d7d4
docs: [Automated] Regenerating documenation for 42e514b
Sep 29, 2023
558ae7c
fix: Issue in TS dimension-squeeze utility (#2336)
gs-olive Sep 29, 2023
ef07bea
docs: [Automated] Regenerating documenation for 558ae7c
Sep 29, 2023
8ebf24d
perf: Add lowering passes to improve TRT runtime on SD (#2351)
gs-olive Sep 29, 2023
8c25baf
docs: [Automated] Regenerating documenation for 8ebf24d
Sep 29, 2023
6571252
feat: Implement Dynamic shapes + fallback support for export path (#2…
peri044 Oct 2, 2023
a7f9055
docs: [Automated] Regenerating documenation for 6571252
Oct 2, 2023
4f72425
feat: Add maxpool lowering passes and experimental folder in Dynamo (…
gs-olive Oct 3, 2023
5bb8cb0
docs: [Automated] Regenerating documenation for 4f72425
Oct 4, 2023
e432bf2
Aten::Index converter (#2277)
apbose Oct 4, 2023
7e5d05f
docs: [Automated] Regenerating documenation for e432bf2
Oct 4, 2023
7b21322
feat: Implement support for exporting Torch-TensorRT compiled graphs …
peri044 Oct 4, 2023
4cffd6e
docs: [Automated] Regenerating documenation for 7b21322
Oct 4, 2023
6e0e2d4
update naming
wu6u3tw Oct 4, 2023
22cf701
chore: Switch converter tests to generate standalone ops using fx.sym…
peri044 Oct 5, 2023
16c670a
docs: [Automated] Regenerating documenation for 22cf701
Oct 5, 2023
c61d97e
fix/feat: Add and repair multiple converters for SD + other models (#…
gs-olive Oct 6, 2023
6d59a14
docs: [Automated] Regenerating documenation for c61d97e
Oct 6, 2023
d375d10
feat: support flatten and reshape via shuffle_layer (#2354)
zewenli98 Oct 6, 2023
65e8ec7
docs: [Automated] Regenerating documenation for d375d10
Oct 6, 2023
80bbd8b
feat: support prod, max, min, and mean via reduce layer (#2355)
zewenli98 Oct 6, 2023
18dcdd0
minor fix: Update `get_ir` prefixes (#2369)
gs-olive Oct 6, 2023
83176fe
Dynamo converter cat (#2343)
apbose Oct 6, 2023
4ceb796
update fix
wu6u3tw Oct 7, 2023
3d1be7e
format
wu6u3tw Oct 9, 2023
2153cb9
set layer name
wu6u3tw Sep 19, 2023
3a0513a
update naming
wu6u3tw Oct 4, 2023
c499d94
update fix
wu6u3tw Oct 7, 2023
910e014
format
wu6u3tw Oct 9, 2023
a37ca1e
Merge branch 'dev-set_name' of github.com:wu6u3tw/TensorRT into dev-s…
wu6u3tw Oct 9, 2023
f395f83
set layer name
wu6u3tw Sep 19, 2023
4b6b5f4
update naming
wu6u3tw Oct 4, 2023
e085f20
update fix
wu6u3tw Oct 7, 2023
917e6ab
format
wu6u3tw Oct 9, 2023
2359c0b
set layer name
wu6u3tw Sep 19, 2023
db547b5
update naming
wu6u3tw Oct 4, 2023
88e1c2c
update fix
wu6u3tw Oct 7, 2023
d4d1c41
fix
wu6u3tw Oct 9, 2023
19 changes: 17 additions & 2 deletions .circleci/config.yml
@@ -802,7 +802,7 @@ commands:
- store_artifacts:
path: /tmp/testlogs

- test-dynamo-models_torch_export:
+ test-dynamo-models_export:
description: "Test the Dynamo models via torch_export path"
steps:
- run:
Expand All @@ -818,6 +818,20 @@ commands:
- store_artifacts:
path: /tmp/testlogs

+ test-dynamo-export_serde:
+ description: "Test the export serialize/deserialize functionality for Dynamo models"
+ steps:
+ - run:
+ name: Run Dynamo models and test export serde with TRT compiled modules
+ command: |
+ cd tests/py/dynamo/models
+ pytest test_export_serde.py --junitxml=/tmp/artifacts/test_results/dynamo/backend/test_results.xml --ir dynamo
+
+ - store_test_results:
+ path: /tmp/artifacts
+ - store_artifacts:
+ path: /tmp/testlogs

test-dynamo-converters:
description: "Test the Dynamo aten converters"
steps:
@@ -1122,7 +1136,8 @@ jobs:
- test-dynamo-backend
- test-dynamo-shared_utilities
- test-dynamo-models_torch_compile
- - test-dynamo-models_torch_export
+ - test-dynamo-models_export
+ - test-dynamo-export_serde

package-x86_64-linux:
parameters:
2 changes: 2 additions & 0 deletions .github/workflows/build-test.yml
@@ -141,6 +141,8 @@ jobs:
cd tests/py/dynamo
${CONDA_RUN} python -m pip install --pre pytest timm transformers parameterized expecttest --use-deprecated=legacy-resolver
${CONDA_RUN} python -m pytest --junitxml=${RUNNER_TEST_RESULTS_DIR}/dynamo_fe_test_results.xml --ir dynamo models/test_models_export.py
+ ${CONDA_RUN} python -m pytest --junitxml=${RUNNER_TEST_RESULTS_DIR}/export_serde_test_results.xml --ir dynamo models/test_export_serde.py
+ ${CONDA_RUN} python -m pytest --junitxml=${RUNNER_TEST_RESULTS_DIR}/dyn_models_export.xml --ir dynamo models/test_dyn_models.py
popd

tests-py-torch-compile-be:
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -40,7 +40,7 @@ repos:
rev: 'v1.4.1'
hooks:
- id: mypy
- exclude: "^py/torch_tensorrt/fx|^examples|^tests|^tools|^docs|noxfile.py|setup.py|versions.py"
+ exclude: "^py/torch_tensorrt/fx|^examples|^tests|^py/torch_tensorrt/dynamo/_experimental|^tools|^docs|noxfile.py|setup.py|versions.py"
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.0.278
7 changes: 6 additions & 1 deletion core/conversion/converters/impl/shuffle.cpp
@@ -20,7 +20,12 @@ static auto shuffle_registrations TORCHTRT_UNUSED =
auto in_shape = util::toVec(in->getDimensions());
std::vector<int64_t> out_shape;
if (ctx->input_is_dynamic) {
- end_dim = (end_dim == -1) ? in_shape.size() - 1 : end_dim;
+ if (start_dim < 0) {
+ start_dim = start_dim + in_shape.size();
+ }
+ if (end_dim < 0) {
+ end_dim = end_dim + in_shape.size();
+ }
int nbDynamicFlattenedDims = 0;
int nbDynamicUnflattenedDims = 0;
for (int i = 0; i < (int)in_shape.size(); i++) {
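The flatten-converter change above replaces the old `end_dim == -1` special case with general negative-index normalization for both bounds. A minimal Python sketch of that normalization (an illustrative model, not the converter itself):

```python
def normalize_flatten_dims(start_dim: int, end_dim: int, rank: int) -> tuple:
    """Map negative flatten bounds to their positive equivalents,
    mirroring the bounds handling added to the dynamic-shape path."""
    if start_dim < 0:
        start_dim += rank
    if end_dim < 0:
        end_dim += rank
    if not (0 <= start_dim <= end_dim < rank):
        raise ValueError("flatten dims out of range")
    return start_dim, end_dim
```

With `rank=4`, `normalize_flatten_dims(-3, -2, 4)` resolves to `(1, 2)`, so both a `<0` start index and a non-`-1` negative end index are handled before the dynamic-dimension counting begins.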
6 changes: 3 additions & 3 deletions core/runtime/execute_engine.cpp
@@ -43,8 +43,8 @@ bool is_switch_required(const RTDevice& curr_device, const RTDevice& engine_devi
return false;
}

- RTDevice select_rt_device(const RTDevice& engine_device) {
- auto new_target_device_opt = get_most_compatible_device(engine_device);
+ RTDevice select_rt_device(const RTDevice& engine_device, const RTDevice& curr_device) {
+ auto new_target_device_opt = get_most_compatible_device(engine_device, curr_device);

// REVIEW: THIS DOES NOT LIST DLA PROBABLY, WHICH WE SHOULD
// TODO: I think this logic could be way simpler at execution time since if the tensors arent on the right
@@ -89,7 +89,7 @@ std::vector<at::Tensor> execute_engine(std::vector<at::Tensor> inputs, c10::intr

if (is_switch_required(curr_device, compiled_engine->device_info)) {
// Scan through available CUDA devices and set the CUDA device context correctly
- RTDevice device = select_rt_device(compiled_engine->device_info);
+ RTDevice device = select_rt_device(compiled_engine->device_info, curr_device);
set_rt_device(device);

// Target device is new device
27 changes: 22 additions & 5 deletions core/runtime/runtime.cpp
@@ -7,9 +7,16 @@ namespace torch_tensorrt {
namespace core {
namespace runtime {

- c10::optional<RTDevice> get_most_compatible_device(const RTDevice& target_device) {
+ c10::optional<RTDevice> get_most_compatible_device(const RTDevice& target_device, const RTDevice& curr_device) {
LOG_DEBUG("Target Device: " << target_device);
auto device_options = find_compatible_devices(target_device);
+ RTDevice current_device;
+ if (current_device.id == -1) {
+ current_device = get_current_device();
+ } else {
+ current_device = curr_device;
+ }

if (device_options.size() == 0) {
return {};
} else if (device_options.size() == 1) {
@@ -21,10 +28,20 @@ c10::optional<RTDevice> get_most_compatible_device(const RTDevice& target_device
dev_list << "[" << std::endl;
for (auto device : device_options) {
dev_list << " " << device << ',' << std::endl;
- if (device.device_name == target_device.device_name && best_match.device_name != target_device.device_name) {
- best_match = device;
- } else if (device.device_name == target_device.device_name && best_match.device_name == target_device.device_name) {
- if (device.id == target_device.id && best_match.id != target_device.id) {
+ if (device.device_name == target_device.device_name) {
+ // First priority is selecting a candidate which agrees with the current device ID
+ // If such a device is found, we can select it and break out of the loop
+ if (device.id == current_device.id && best_match.id != current_device.id) {
+ best_match = device;
+ break;
+ }
+ // Second priority is selecting a candidate which agrees with the target device ID
+ // At deserialization time, the current device and target device may not agree
+ else if (device.id == target_device.id && best_match.id != target_device.id) {
+ best_match = device;
+ }
+ // If no such GPU ID is found, select the first available candidate GPU
+ else if (best_match.device_name != target_device.device_name) {
best_match = device;
}
}
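The new selection logic above ranks candidates whose name matches the target device by three priorities: (1) a candidate matching the current device's ID wins outright, (2) otherwise one matching the serialized target's ID, (3) otherwise the first same-name candidate. A rough Python model of that priority order (class and function names here are illustrative, not the runtime API):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Device:
    id: int
    name: str

def select_best_match(candidates: List[Device], target: Device,
                      current: Device) -> Optional[Device]:
    """Model of the candidate-selection priority used when picking a runtime
    device: current-device ID first, then target-device ID, then any
    candidate sharing the target's device name."""
    best: Optional[Device] = None
    for dev in candidates:
        if dev.name != target.name:
            continue  # only same-name devices are candidates
        if dev.id == current.id:
            return dev  # top priority: agrees with the current device ID
        if dev.id == target.id and (best is None or best.id != target.id):
            best = dev  # second priority: agrees with the serialized target ID
        elif best is None:
            best = dev  # fallback: first available same-name candidate
    return best
```

This mirrors why the deserialization fix matters: the serialized target ID may not exist on the deployment machine, so the current device must take precedence when both match by name.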
4 changes: 3 additions & 1 deletion core/runtime/runtime.h
@@ -26,7 +26,9 @@ typedef enum {
SERIALIZATION_LEN, // NEVER USED FOR DATA, USED TO DETERMINE LENGTH OF SERIALIZED INFO
} SerializedInfoIndex;

- c10::optional<RTDevice> get_most_compatible_device(const RTDevice& target_device);
+ c10::optional<RTDevice> get_most_compatible_device(
+ const RTDevice& target_device,
+ const RTDevice& curr_device = RTDevice());
std::vector<RTDevice> find_compatible_devices(const RTDevice& target_device);

std::vector<at::Tensor> execute_engine(std::vector<at::Tensor> inputs, c10::intrusive_ptr<TRTEngine> compiled_engine);
2 changes: 1 addition & 1 deletion core/util/trt_util.cpp
@@ -216,7 +216,7 @@ nvinfer1::Dims squeezeDims(const nvinfer1::Dims& d, int pos, bool use_zeros, boo
// Replace all instances of -1, indicating dynamic dimension
// with 0, indicating copy the dimension from another tensor
// (Generally used for reshape operations)
- if (use_zeros && d.d[i] == -1) {
+ if (use_zeros && d.d[i] == -1 && i < pos) {
dims.d[j] = 0;
// If zeros already exist in the dimensions (empty tensor),
// Replace all instances of 0, indicating empty dimension
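The `i < pos` guard added above limits the dynamic-dimension substitution (`-1` → `0`, "copy from the other tensor") to dimensions before the squeeze position; dimensions at or after `pos` keep their `-1` marker, since their index shifts once the squeezed axis is removed. A simplified Python model of the corrected behavior (ignoring the other flags of the real utility):

```python
def squeeze_dims(dims, pos, use_zeros=True):
    """Drop the dimension at `pos`; when use_zeros is set, rewrite dynamic
    dims (-1) *before* pos as 0, matching the fixed guard in squeezeDims."""
    out = []
    for i, d in enumerate(dims):
        if i == pos:
            continue  # squeeze this dimension out
        if use_zeros and d == -1 and i < pos:
            out.append(0)  # position is stable, safe to copy from input
        else:
            out.append(d)  # keep dynamic marker (or static size) as-is
    return out
```

For example, `squeeze_dims([-1, 1, -1, 4], 1)` yields `[0, -1, 4]`: the first dynamic dim can be copied by position, while the one after the squeeze point stays `-1`.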
2 changes: 2 additions & 0 deletions cpp/include/torch_tensorrt/torch_tensorrt.h
@@ -60,6 +60,8 @@ class DataType {
enum Value : int8_t {
/// INT64
kLong,
+ /// FP64
+ kDouble,
/// FP32
kFloat,
/// FP16
8 changes: 7 additions & 1 deletion cpp/src/types.cpp
@@ -97,6 +97,8 @@ at::ScalarType toAtenDataType(DataType value) {
return at::kInt;
case DataType::kLong:
return at::kLong;
+ case DataType::kDouble:
+ return at::kDouble;
case DataType::kBool:
return at::kBool;
case DataType::kFloat:
@@ -119,7 +121,8 @@ nvinfer1::TensorFormat toTRTTensorFormat(TensorFormat value) {

DataType::DataType(c10::ScalarType t) {
TORCHTRT_CHECK(
- t == at::kHalf || t == at::kFloat || t == at::kChar || t == at::kLong || t == at::kInt || t == at::kBool,
+ t == at::kHalf || t == at::kFloat || t == at::kChar || t == at::kLong || t == at::kDouble || t == at::kInt ||
+ t == at::kBool,
"Data type is unsupported (" << t << ")");
switch (t) {
case at::kHalf:
@@ -134,6 +137,9 @@ DataType::DataType(c10::ScalarType t) {
case at::kLong:
value = DataType::kLong;
break;
+ case at::kDouble:
+ value = DataType::kDouble;
+ break;
case at::kBool:
value = DataType::kBool;
break;
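With `kDouble` added, the `DataType` enum and the ATen scalar-type mapping cover FP64 in both directions, and the constructor's supported-type check accepts `at::kDouble`. A compact Python model of the mapping being extended (the enum names mirror the C++ API; the string keys and helper are illustrative):

```python
from enum import Enum

class DataType(Enum):
    kLong = "int64"
    kDouble = "float64"   # the FP64 entry added by this change
    kFloat = "float32"
    kHalf = "float16"
    kChar = "int8"
    kInt = "int32"
    kBool = "bool"

_ATEN_TO_ENUM = {member.value: member for member in DataType}

def from_aten(dtype_name: str) -> DataType:
    """Model of DataType::DataType(c10::ScalarType): map a supported
    ATen scalar type to the enum, rejecting anything else."""
    try:
        return _ATEN_TO_ENUM[dtype_name]
    except KeyError:
        raise ValueError("Data type is unsupported (%s)" % dtype_name)
```

Before this change, `float64` would have fallen into the unsupported branch; now it round-trips like the other numeric types.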
38 changes: 19 additions & 19 deletions docker/WORKSPACE.ngc
@@ -9,24 +9,28 @@ http_archive(
sha256 = "778197e26c5fbeb07ac2a2c5ae405b30f6cb7ad1f5510ea6fdac03bded96cc6f",
)

- load("@rules_python//python:pip.bzl", "pip_install")
+ load("@rules_python//python:repositories.bzl", "py_repositories")
+
+ py_repositories()

http_archive(
name = "rules_pkg",
- sha256 = "8f9ee2dc10c1ae514ee599a8b42ed99fa262b757058f65ad3c384289ff70c4b8",
urls = [
- "https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
- "https://github.com/bazelbuild/rules_pkg/releases/download/0.4.0/rules_pkg-0.4.0.tar.gz",
+ "https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.9.1/rules_pkg-0.9.1.tar.gz",
+ "https://github.com/bazelbuild/rules_pkg/releases/download/0.9.1/rules_pkg-0.9.1.tar.gz",
],
+ sha256 = "038f1caa773a7e35b3663865ffb003169c6a71dc995e39bf4815792f385d837d",
)

load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")

rules_pkg_dependencies()

- git_repository(
+ http_archive(
name = "googletest",
- remote = "https://github.com/google/googletest",
- commit = "703bd9caab50b139428cea1aaff9974ebee5742e",
- shallow_since = "1570114335 -0400"
+ sha256 = "755f9a39bc7205f5a0c428e920ddad092c33c8a1b46997def3f1d4a82aded6e1",
+ strip_prefix = "googletest-5ab508a01f9eb089207ee87fd547d290da39d015",
+ urls = ["https://github.com/google/googletest/archive/5ab508a01f9eb089207ee87fd547d290da39d015.zip"],
)

# External dependency for torch_tensorrt if you already have precompiled binaries.
@@ -80,17 +84,13 @@ new_local_repository(
#########################################################################
# Testing Dependencies (optional - comment out on aarch64)
#########################################################################
- pip_install(
- name = "torch_tensorrt_py_deps",
- requirements = "//py:requirements.txt",
- )
+ load("@rules_python//python:pip.bzl", "pip_parse")

- pip_install(
- name = "py_test_deps",
- requirements = "//tests/py:requirements.txt",
+ pip_parse(
+ name = "devtools_deps",
+ requirements_lock = "//:requirements-dev.txt",
)

- pip_install(
- name = "pylinter_deps",
- requirements = "//tools/linter:requirements.txt",
- )
+ load("@devtools_deps//:requirements.bzl", "install_deps")
+
+ install_deps()
13 changes: 11 additions & 2 deletions docs/_cpp_api/classtorch__tensorrt_1_1DataType.html
@@ -10,7 +10,7 @@

<meta name="viewport" content="width=device-width, initial-scale=1.0">

- <title>Class DataType &mdash; Torch-TensorRT v2.2.0.dev0+b50290d documentation</title>
+ <title>Class DataType &mdash; Torch-TensorRT v2.2.0.dev0+d375d10 documentation</title>



@@ -225,7 +225,7 @@


<div class="version">
- v2.2.0.dev0+b50290d
+ v2.2.0.dev0+d375d10
</div>


@@ -269,6 +269,8 @@
<li class="toctree-l1"><a class="reference internal" href="../user_guide/getting_started_with_fx_path.html">Torch-TensorRT (FX Frontend) User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/ptq.html">Post Training Quantization (PTQ)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/runtime.html">Deploying Torch-TensorRT Programs</a></li>
+ <li class="toctree-l1"><a class="reference internal" href="../user_guide/saving_models.html">Saving models compiled with Torch-TensorRT</a></li>
+ <li class="toctree-l1"><a class="reference internal" href="../user_guide/dynamic_shapes.html">Dynamic shapes with Torch-TensorRT</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/use_from_pytorch.html">Using Torch-TensorRT Directly From PyTorch</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/using_dla.html">DLA</a></li>
</ul>
@@ -304,6 +306,7 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../contributors/system_overview.html">System Overview</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributors/writing_converters.html">Writing Converters</a></li>
+ <li class="toctree-l1"><a class="reference internal" href="../contributors/writing_dynamo_aten_lowering_passes.html">Writing Dynamo ATen Lowering Passes</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributors/useful_links.html">Useful Links for Torch-TensorRT Development</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">Indices</span></p>
@@ -414,6 +417,12 @@ <h2>Class Documentation<a class="headerlink" href="#class-documentation" title="
<dd><p>INT64. </p>
</dd></dl>

+ <dl class="cpp enumerator">
+ <dt class="sig sig-object cpp" id="_CPPv4N14torch_tensorrt8DataType5Value7kDoubleE">
+ <span class="target" id="classtorch__tensorrt_1_1DataType_1a6335c0e206340d85a1382a5df17bf684aacf5b40b44995643185a977d2d1ce1bf"></span><span class="k"><span class="pre">enumerator</span></span><span class="w"> </span><span class="sig-name descname"><span class="n"><span class="pre">kDouble</span></span></span><a class="headerlink" href="#_CPPv4N14torch_tensorrt8DataType5Value7kDoubleE" title="Permalink to this definition">¶</a><br /></dt>
+ <dd><p>FP64. </p>
+ </dd></dl>
+
<dl class="cpp enumerator">
<dt class="sig sig-object cpp" id="_CPPv4N14torch_tensorrt8DataType5Value6kFloatE">
<span class="target" id="classtorch__tensorrt_1_1DataType_1a6335c0e206340d85a1382a5df17bf684a45ceda04c1ab50695a4a6aeaeae99817"></span><span class="k"><span class="pre">enumerator</span></span><span class="w"> </span><span class="sig-name descname"><span class="n"><span class="pre">kFloat</span></span></span><a class="headerlink" href="#_CPPv4N14torch_tensorrt8DataType5Value6kFloatE" title="Permalink to this definition">¶</a><br /></dt>
@@ -10,7 +10,7 @@

<meta name="viewport" content="width=device-width, initial-scale=1.0">

- <title>Class Device::DeviceType &mdash; Torch-TensorRT v2.2.0.dev0+b50290d documentation</title>
+ <title>Class Device::DeviceType &mdash; Torch-TensorRT v2.2.0.dev0+d375d10 documentation</title>



@@ -225,7 +225,7 @@


<div class="version">
- v2.2.0.dev0+b50290d
+ v2.2.0.dev0+d375d10
</div>


@@ -269,6 +269,8 @@
<li class="toctree-l1"><a class="reference internal" href="../user_guide/getting_started_with_fx_path.html">Torch-TensorRT (FX Frontend) User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/ptq.html">Post Training Quantization (PTQ)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/runtime.html">Deploying Torch-TensorRT Programs</a></li>
+ <li class="toctree-l1"><a class="reference internal" href="../user_guide/saving_models.html">Saving models compiled with Torch-TensorRT</a></li>
+ <li class="toctree-l1"><a class="reference internal" href="../user_guide/dynamic_shapes.html">Dynamic shapes with Torch-TensorRT</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/use_from_pytorch.html">Using Torch-TensorRT Directly From PyTorch</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/using_dla.html">DLA</a></li>
</ul>
@@ -304,6 +306,7 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../contributors/system_overview.html">System Overview</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributors/writing_converters.html">Writing Converters</a></li>
+ <li class="toctree-l1"><a class="reference internal" href="../contributors/writing_dynamo_aten_lowering_passes.html">Writing Dynamo ATen Lowering Passes</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributors/useful_links.html">Useful Links for Torch-TensorRT Development</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">Indices</span></p>
7 changes: 5 additions & 2 deletions docs/_cpp_api/classtorch__tensorrt_1_1TensorFormat.html
@@ -10,7 +10,7 @@

<meta name="viewport" content="width=device-width, initial-scale=1.0">

- <title>Class TensorFormat &mdash; Torch-TensorRT v2.2.0.dev0+b50290d documentation</title>
+ <title>Class TensorFormat &mdash; Torch-TensorRT v2.2.0.dev0+d375d10 documentation</title>



@@ -225,7 +225,7 @@


<div class="version">
- v2.2.0.dev0+b50290d
+ v2.2.0.dev0+d375d10
</div>


@@ -269,6 +269,8 @@
<li class="toctree-l1"><a class="reference internal" href="../user_guide/getting_started_with_fx_path.html">Torch-TensorRT (FX Frontend) User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/ptq.html">Post Training Quantization (PTQ)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/runtime.html">Deploying Torch-TensorRT Programs</a></li>
+ <li class="toctree-l1"><a class="reference internal" href="../user_guide/saving_models.html">Saving models compiled with Torch-TensorRT</a></li>
+ <li class="toctree-l1"><a class="reference internal" href="../user_guide/dynamic_shapes.html">Dynamic shapes with Torch-TensorRT</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/use_from_pytorch.html">Using Torch-TensorRT Directly From PyTorch</a></li>
<li class="toctree-l1"><a class="reference internal" href="../user_guide/using_dla.html">DLA</a></li>
</ul>
@@ -304,6 +306,7 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../contributors/system_overview.html">System Overview</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributors/writing_converters.html">Writing Converters</a></li>
+ <li class="toctree-l1"><a class="reference internal" href="../contributors/writing_dynamo_aten_lowering_passes.html">Writing Dynamo ATen Lowering Passes</a></li>
<li class="toctree-l1"><a class="reference internal" href="../contributors/useful_links.html">Useful Links for Torch-TensorRT Development</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">Indices</span></p>