add trick from apex to cross-compile over multiple cuda architectures with docker build (#141)

add instructions for heterogeneous GPU setups in README.md

Signed-off-by: cfujitsang <cfujitsang@nvidia.com>
Caenorst committed Feb 3, 2020
1 parent 3d01f99 commit 1bb8b9b
Showing 3 changed files with 25 additions and 2 deletions.
8 changes: 6 additions & 2 deletions Dockerfile
@@ -1,9 +1,13 @@
-FROM nvcr.io/nvidia/pytorch:19.11-py3
+FROM pytorch/pytorch:1.2-cuda10.0-cudnn7-devel
 
 WORKDIR /kaolin
 COPY . .
 
 ENV KAOLIN_HOME "/kaolin"
+ENV TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0 7.5+PTX"
+
+RUN apt-get update && \
+    apt-get install -y vim
 
 RUN pip install -r requirements.txt && \
-    python setup.py install
+    python setup.py develop
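For context, a minimal sketch of building and running this image; the `kaolin:local` tag and the `--gpus all` run flag are illustrative assumptions, not part of this commit:

```sh
# Build on a machine with no visible GPUs: the ENV TORCH_CUDA_ARCH_LIST above
# tells nvcc which architectures to cross-compile for, so no device query is needed.
$ docker build -t kaolin:local .

# At run time, expose the GPUs (Docker 19.03+; assumes the NVIDIA container toolkit is installed).
$ docker run --gpus all -it kaolin:local
```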
7 changes: 7 additions & 0 deletions README.md
@@ -94,6 +94,13 @@ Note, if modifying or adding Cython files, ensure that Cython is installed and s
 During installation, the *packman* package manager will
 download the nv-usd package to `~/packman-repo/` containing the necessary packages for reading and writing Universal Scene Description (USD) files.
 
+Note, if you are using a heterogeneous GPU setup, set the architectures for which you want to compile the CUDA code with the `TORCH_CUDA_ARCH_LIST` environment variable.
+
+Example:
+```sh
+$ export TORCH_CUDA_ARCH_LIST="7.0 7.5"
+```
+
 ### Verify installation
 
 To verify that `kaolin` has been installed, fire up your python interpreter, and execute the following commands.
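The variable can also be scoped to a single command instead of exported; a small sketch with example architecture values (not from the commit):

```sh
# Compile only for the cards present in a mixed Pascal/Turing machine,
# without leaving TORCH_CUDA_ARCH_LIST set in the shell afterwards.
$ TORCH_CUDA_ARCH_LIST="6.1 7.5" python setup.py install
```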
12 changes: 12 additions & 0 deletions setup.py
@@ -10,6 +10,18 @@

 cwd = os.path.dirname(os.path.abspath(__file__))
 
+if not torch.cuda.is_available():
+    # From: https://github.com/NVIDIA/apex/blob/b66ffc1d952d0b20d6706ada783ae5b23e4ee734/setup.py
+    # Extension builds after https://github.com/pytorch/pytorch/pull/23408 attempt to query torch.cuda.get_device_capability(),
+    # which will fail if you are compiling in an environment without visible GPUs (e.g. during an nvidia-docker build command).
+    print('\nWarning: Torch did not find available GPUs on this system.\n',
+          'If your intention is to cross-compile, this is not an error.\n'
+          'By default, Kaolin will cross-compile for Pascal (compute capabilities 6.0, 6.1, 6.2),\n'
+          'Volta (compute capability 7.0), and Turing (compute capability 7.5).\n'
+          'If you wish to cross-compile for a single specific architecture,\n'
+          'export TORCH_CUDA_ARCH_LIST="compute capability" before running setup.py.\n')
+    if os.environ.get("TORCH_CUDA_ARCH_LIST", None) is None:
+        os.environ["TORCH_CUDA_ARCH_LIST"] = "6.0;6.1;6.2;7.0;7.5"
 
 PACKAGE_NAME = 'kaolin'
 VERSION = '0.1.0'
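To see the new fallback in action, one option is to hide the GPUs from PyTorch before building; the `CUDA_VISIBLE_DEVICES` trick below is an assumption for illustration, not part of the commit:

```sh
# With no devices visible, torch.cuda.is_available() returns False, the warning
# above is printed, and TORCH_CUDA_ARCH_LIST defaults to "6.0;6.1;6.2;7.0;7.5".
$ CUDA_VISIBLE_DEVICES="" python setup.py develop
```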
