feat(jetson): Support for Jetpack 4.6
This commit adds support for Jetpack 4.6. Users should now pass the
`--platforms` flag to the bazel build to target a specific Jetpack version

e.g. `bazel build //:libtrtorch --platforms //toolchains:jetpack_4.6`

By default `setup.py` now expects Jetpack 4.6. To override, add the
`--jetpack-version 4.5` flag

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>
narendasan committed Aug 17, 2021
1 parent 744b417 commit 9760fe3
Showing 13 changed files with 101 additions and 34 deletions.
2 changes: 1 addition & 1 deletion core/conversion/conversionctx/ConversionCtx.h
@@ -24,7 +24,7 @@ struct Device {
};

struct BuilderSettings {
std::set<nvinfer1::DataType> enabled_precisions = {nvinfer1::DataType::kFLOAT};
std::set<nvinfer1::DataType> enabled_precisions = {};
bool sparse_weights = false;
bool disable_tf32 = false;
bool refit = false;
40 changes: 16 additions & 24 deletions docsrc/tutorials/installation.rst
@@ -237,7 +237,7 @@ Install or compile a build of PyTorch/LibTorch for aarch64

NVIDIA hosts builds of the latest release branch for Jetson here:

https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-5-0-now-available/72048
https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-9-0-now-available/72048


Environment Setup
@@ -285,29 +285,10 @@ To build natively on aarch64-linux-gnu platform, configure the ``WORKSPACE`` wit
# strip_prefix = "TensorRT-7.1.3.4"
#)
NOTE: You may also need to configure the CUDA version to 10.2 by setting the path in the cuda ``new_local_repository`` rule
2. Disable Python API testing dependencies:

.. code-block:: shell
#pip3_import(
# name = "trtorch_py_deps",
# requirements = "//py:requirements.txt"
#)
#load("@trtorch_py_deps//:requirements.bzl", "pip_install")
#pip_install()
#pip3_import(
# name = "py_test_deps",
# requirements = "//tests/py:requirements.txt"
#)
#load("@py_test_deps//:requirements.bzl", "pip_install")
#pip_install()
3. Configure the correct paths to directory roots containing local dependencies in the ``new_local_repository`` rules:
2. Configure the correct paths to directory roots containing local dependencies in the ``new_local_repository`` rules:

NOTE: If you installed PyTorch using a pip package, the correct path is the path to the root of the python torch package.
In the case that you installed with ``sudo pip install`` this will be ``/usr/local/lib/python3.6/dist-packages/torch``.
@@ -346,19 +327,30 @@ use that library, set the paths to the same path but when you compile make sure
Compile C++ Library and Compiler CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

NOTE: Due to shifting dependency locations between Jetpack 4.5 and 4.6, there is now a flag to inform bazel of the Jetpack version

.. code-block:: shell
--platforms //toolchains:jetpack_4.x
Compile TRTorch library using bazel command:

.. code-block:: shell
bazel build //:libtrtorch
bazel build //:libtrtorch --platforms //toolchains:jetpack_4.6
Compile Python API
^^^^^^^^^^^^^^^^^^^^

NOTE: Due to shifting dependency locations between Jetpack 4.5 and Jetpack 4.6, there is now a flag for ``setup.py`` which sets the Jetpack version (default: 4.6)

Compile the Python API using the following command from the ``//py`` directory:

.. code-block:: shell
python3 setup.py install --use-cxx11-abi
If you have a build of PyTorch that uses the Pre-CXX11 ABI, drop the ``--use-cxx11-abi`` flag

If you are building for Jetpack 4.5, add the ``--jetpack-version 4.5`` flag
25 changes: 25 additions & 0 deletions py/setup.py
@@ -13,17 +13,35 @@
from shutil import copyfile, rmtree

import subprocess
import platform
import warnings

dir_path = os.path.dirname(os.path.realpath(__file__))

__version__ = '0.4.0a0'

CXX11_ABI = False

JETPACK_VERSION = None

if "--use-cxx11-abi" in sys.argv:
sys.argv.remove("--use-cxx11-abi")
CXX11_ABI = True

if platform.uname().processor == "aarch64":
if "--jetpack-version" in sys.argv:
version_idx = sys.argv.index("--jetpack-version") + 1
version = sys.argv[version_idx]
sys.argv.remove(version)
sys.argv.remove("--jetpack-version")
if version == "4.5":
JETPACK_VERSION = "4.5"
elif version == "4.6":
JETPACK_VERSION = "4.6"
if not JETPACK_VERSION:
warnings.warn("Assuming jetpack version to be 4.6, if not use the --jetpack-version option")
JETPACK_VERSION = "4.6"


def which(program):
import os
@@ -66,6 +84,13 @@ def build_libtrtorch_pre_cxx11_abi(develop=True, use_dist_dir=True, cxx11_abi=Fa
else:
print("using CXX11 ABI build")

if JETPACK_VERSION == "4.5":
cmd.append("--platforms=//toolchains:jetpack_4.5")
print("Jetpack version: 4.5")
elif JETPACK_VERSION == "4.6":
cmd.append("--platforms=//toolchains:jetpack_4.6")
print("Jetpack version: 4.6")

print("building libtrtorch")
status_code = subprocess.run(cmd).returncode

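The flag handling added to ``setup.py`` can be sketched as a standalone function. ``parse_jetpack_version`` is a hypothetical helper name, not part of the commit; it mirrors the argv-scanning logic in the hunk above:

```python
import warnings

SUPPORTED_JETPACK_VERSIONS = ("4.5", "4.6")

def parse_jetpack_version(argv, default="4.6"):
    """Pop `--jetpack-version <ver>` out of argv and return the version.

    Mirrors the setup.py logic: an unrecognized or missing version
    falls back to the default (4.6) with a warning.
    """
    version = None
    if "--jetpack-version" in argv:
        # The value follows the flag; remove both from argv so the
        # remaining arguments can be handed to setuptools untouched.
        value = argv[argv.index("--jetpack-version") + 1]
        argv.remove(value)
        argv.remove("--jetpack-version")
        if value in SUPPORTED_JETPACK_VERSIONS:
            version = value
    if version is None:
        warnings.warn(
            "Assuming Jetpack version %s; use --jetpack-version to override" % default)
        version = default
    return version
```

Note that the flag is consumed in place, so a later ``setup(...)`` call never sees it.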
2 changes: 1 addition & 1 deletion py/trtorch/csrc/tensorrt_classes.h
@@ -155,7 +155,7 @@ struct CompileSpec : torch::CustomClassHolder {

std::vector<Input> inputs;
nvinfer1::IInt8Calibrator* ptq_calibrator = nullptr;
std::set<DataType> enabled_precisions = {DataType::kFloat};
std::set<DataType> enabled_precisions = {};
bool sparse_weights = false;
bool disable_tf32 = false;
bool refit = false;
1 change: 1 addition & 0 deletions tests/core/conversion/evaluators/test_aten_evaluators.cpp
@@ -3,6 +3,7 @@
#include "gtest/gtest.h"
#include "tests/util/util.h"
#include "torch/csrc/jit/ir/irparser.h"
#include "torch/torch.h"

TEST(Evaluators, DivIntEvaluatesCorrectly) {
const auto graph = R"IR(
1 change: 1 addition & 0 deletions tests/cpp/BUILD
@@ -77,6 +77,7 @@ cc_test(
deps = [
":cpp_api_test",
],
timeout="long"
)

cc_test(
2 changes: 2 additions & 0 deletions tests/modules/hub.py
@@ -4,6 +4,8 @@
import torchvision.models as models
import timm

torch.hub._validate_not_a_forked_repo = lambda a, b, c: True

models = {
"alexnet": {
"model": models.alexnet(pretrained=True),
1 change: 1 addition & 0 deletions tests/modules/requirements.txt
@@ -0,0 +1 @@
timm==v0.4.12
4 changes: 2 additions & 2 deletions tests/py/test_api_dla.py
@@ -37,7 +37,7 @@ def test_compile_traced(self):
"dla_core": 0,
"allow_gpu_fallback": True
},
"enabled_precision": {torch.float, torch.half}
"enabled_precisions": {torch.half}
}

trt_mod = trtorch.compile(self.traced_model, compile_spec)
@@ -53,7 +53,7 @@ def test_compile_script(self):
"dla_core": 0,
"allow_gpu_fallback": True
},
"enabled_precision": {torch.float, torch.half}
"enabled_precisions": {torch.half}
}

trt_mod = trtorch.compile(self.scripted_model, compile_spec)
23 changes: 18 additions & 5 deletions third_party/cublas/BUILD
@@ -3,11 +3,21 @@ package(default_visibility = ["//visibility:public"])
# NOTE: This BUILD file is only really targeted at aarch64, the rest of the configuration is just to satisfy bazel, x86 uses the cublas source from the CUDA build file since it will be versioned with CUDA.

config_setting(
name = "aarch64_linux",
name = "jetpack_4.5",
constraint_values = [
"@platforms//cpu:aarch64",
"@platforms//os:linux",
],
"@//toolchains/jetpack:4.5"
]
)

config_setting(
name = "jetpack_4.6",
constraint_values = [
"@platforms//cpu:aarch64",
"@platforms//os:linux",
"@//toolchains/jetpack:4.6"
]
)

config_setting(
@@ -20,7 +30,8 @@ config_setting(
cc_library(
name = "cublas_headers",
hdrs = select({
":aarch64_linux": ["include/cublas.h"] + glob(["usr/include/cublas+.h"]),
":jetpack_4.5": ["include/cublas.h"] + glob(["usr/include/cublas+.h"]),
":jetpack_4.6": ["local/cuda/include/cublas.h"] + glob(["usr/cuda/include/cublas+.h"]),
"//conditions:default": ["local/cuda/include/cublas.h"] + glob(["usr/cuda/include/cublas+.h"]),
}),
includes = ["include/"],
@@ -30,7 +41,8 @@ cc_library(
cc_import(
name = "cublas_lib",
shared_library = select({
":aarch64_linux": "lib/aarch64-linux-gnu/libcublas.so",
":jetpack_4.5": "lib/aarch64-linux-gnu/libcublas.so",
":jetpack_4.6": "local/cuda/targets/aarch64-linux/lib/libcublas.so",
":windows": "lib/x64/cublas.lib",
"//conditions:default": "local/cuda/targets/x86_64-linux/lib/libcublas.so",
}),
@@ -40,7 +52,8 @@ cc_import(
cc_import(
name = "cublas_lt_lib",
shared_library = select({
":aarch64_linux": "lib/aarch64-linux-gnu/libcublasLt.so",
":jetpack_4.5": "lib/aarch64-linux-gnu/libcublasLt.so",
":jetpack_4.6": "local/cuda/targets/aarch64-linux/lib/libcublasLt.so",
"//conditions:default": "local/cuda/targets/x86_64-linux/lib/libcublasLt.so",
}),
visibility = ["//visibility:private"],
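The ``select()`` branches in the cublas ``BUILD`` file above amount to a mapping from the matched ``config_setting`` to a library path, with a fallback when nothing matches. A minimal Python sketch of that resolution (the names here are illustrative, not Bazel APIs):

```python
# Illustrative model of how select() chooses the cublas shared library:
# a matching config_setting wins, otherwise the default branch applies.
CUBLAS_LIB_BY_CONFIG = {
    "jetpack_4.5": "lib/aarch64-linux-gnu/libcublas.so",
    "jetpack_4.6": "local/cuda/targets/aarch64-linux/lib/libcublas.so",
    "windows": "lib/x64/cublas.lib",
}
DEFAULT_CUBLAS_LIB = "local/cuda/targets/x86_64-linux/lib/libcublas.so"

def resolve_cublas_lib(config):
    """Return the path the select() above would pick for `config`."""
    return CUBLAS_LIB_BY_CONFIG.get(config, DEFAULT_CUBLAS_LIB)
```

This is why the commit replaces the single ``aarch64_linux`` setting with two Jetpack-specific settings: Jetpack 4.5 and 4.6 install cublas at different paths, so aarch64 alone no longer determines the branch.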
5 changes: 4 additions & 1 deletion third_party/tensorrt/local/BUILD
@@ -319,5 +319,8 @@ cc_library(
],
linkopts = [
"-lpthread",
]
] + select({
":aarch64_linux": ["-Wl,--no-as-needed -ldl -lrt -Wl,--as-needed"],
"//conditions:default": []
})
)
18 changes: 18 additions & 0 deletions toolchains/BUILD
@@ -7,3 +7,21 @@ platform(
"@platforms//cpu:aarch64",
],
)

platform(
name = "jetpack_4.5",
constraint_values = [
"@platforms//os:linux",
"@platforms//cpu:aarch64",
"@//toolchains/jetpack:4.5"
]
)

platform(
name = "jetpack_4.6",
constraint_values = [
"@platforms//os:linux",
"@platforms//cpu:aarch64",
"@//toolchains/jetpack:4.6"
]
)
11 changes: 11 additions & 0 deletions toolchains/jetpack/BUILD
@@ -0,0 +1,11 @@
package(default_visibility = ["//visibility:public"])

constraint_setting(name = "jetpack")
constraint_value(
name = "4.5",
constraint_setting = ":jetpack"
)
constraint_value(
name = "4.6",
constraint_setting = ":jetpack"
)
