This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Commit

[v1.x] Port CI/CD changes (#21123, #21126 and #21128) from v1.9.x (#21129)

* Refactor CD to support newer cuda versions (11.0-11.7) (#21123)

* WIP to add cuda build versions.

* WIP to add cuda build versions.

* Remove sudo install; moved to CD-specific dockerfile.

* Allow passing of linker flags for Distribution build type.

* Update distribution cmake configs, add new configs for newer cuda versions.

* Update cuda versions to build in CD.

* Update base images for GPU, add new Cuda 11.6 container.

* Correctly set LD_LIBRARY_PATH.

* Provide cmake hints in dependency install scripts.

* Refactor Cuda dependency installation to simplify and support newer versions.

* Add new Dockerfile for CD builds.

* Use new CD-specific container for building MXNet static library.

* Add Cuda versions.

* Upgrade to Python 3.8 in CentOS 7 containers.

* Update base images.

* Install requirements only if file exists.

* Clean up dockerfile.

* Do not pin Cython, relax scipy version.

* Install all build dependencies.

* Add documentation.

* Add documentation.

* Set LD_LIBRARY_PATH to include stubs.
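
  A minimal sketch of what this typically involves, assuming the conventional CUDA stub location (the exact path used in the CD containers may differ):

  ```bash
  # Hypothetical: let the linker resolve libcuda.so from the driver stubs
  # when no real GPU driver is present in the build container.
  export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/cuda/lib64/stubs"
  ```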

* Build cmake from source for portability.
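
  A hedged sketch of such a from-source cmake build; the version and tarball URL follow the cmake.org pattern used elsewhere in this commit, but are assumptions:

  ```bash
  curl -LO https://cmake.org/files/v3.20/cmake-3.20.5.tar.gz
  tar xzf cmake-3.20.5.tar.gz && cd cmake-3.20.5
  ./bootstrap --parallel=$(nproc)   # build cmake with the system toolchain
  make -j$(nproc) && make install
  ```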

* Install hdf5 headers during python install, as they are required for the h5py module.

* Install any dependencies via yum for cmake build.

* Update libtiff to version that builds on aarch64.

* Build libtiff and protobuf from source so we can statically link mxnet on aarch64.

* Change centos7_aarch64_cpu container to install software using common scripts for consistency. Remove installation of protobuf and other dependency libraries so we properly link to them statically.

* Install pre-built cmake packages.

* Use common method to install cmake.

* Update pipelines to use supported cuda versions for static build tests.

* Ensure required build tools are installed.

* Install required headers for building all R packages.

* Add/update make configs for newer Cuda versions.

* Install gfortran as build dependency in CD image.

* Use ldd to find actual path of dynamically linked libraries instead of guessing.
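
  Illustrative only (the binary and library names here are placeholders, not the exact ones the scripts query):

  ```bash
  # Print the path the dynamic loader actually resolves, instead of guessing it.
  ldd libmxnet.so | awk '/libgfortran/ {print $3}'
  ```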

* Add additional Cuda versions for CI testing.

* Set minimum OSX version to support via C/CXXFLAGS to match what we build MXNet for.

* Don't specify minimum OS version when building MXNet for OSX.

* It turns out we can set the target OSX version, but libtiff recently introduced zstd support, which doesn't link properly. Disabling support via --disable-zstd works.

* Disable zlib, as it was disabled previously.

* Disable webp support in libtiff (present only in newer version)
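
  Taken together, the libtiff commits above imply a configure step along these lines (a sketch; flag order and any other options are assumptions):

  ```bash
  ./configure --disable-zstd --disable-webp --disable-zlib
  make -j$(nproc) && make install
  ```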

* [v1.9.x] Refactor dockerfiles in CI, migrate some ubuntu docker containers to use docker-compose. Update CI to use Cuda 11.7 (#21126)

* Remove deprecated dockerfiles.

* Update documentation to use different image.

* Install Scala in centos7 CD container and build tools.

* Update static scala build to use CD container, change julia container.

* Remove deprecated Jenkins pipeline files and old disabled build steps.

* Add new base Dockerfile for docker-compose.

* Migrate ubuntu cuda containers to docker-compose.

* Build python from source on ubuntu for portability.
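
  A hedged sketch of a from-source CPython build; the version and prefix are illustrative, not necessarily what the CI scripts pin:

  ```bash
  curl -LO https://www.python.org/ftp/python/3.8.13/Python-3.8.13.tgz
  tar xzf Python-3.8.13.tgz && cd Python-3.8.13
  ./configure --enable-optimizations --prefix=/usr/local
  make -j$(nproc) && make altinstall   # altinstall avoids clobbering the system python3
  ```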

* Remove old dockerfiles, upgrade nightly gpu image to cuda 11.7.

* Remove Cuda versions from runtime function names to simplify.

* Update Jenkins pipelines to use newer Cuda containers.

* Install LLVM before TVM.

* Fix ubuntu TVM install script (was failing but returning true.)
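
  The usual culprit is a script whose final command succeeds even though an earlier step failed. The standard hardening (shown as a sketch, not the actual fix applied here):

  ```bash
  #!/usr/bin/env bash
  # Abort on any error, on unset variables, and on failures inside pipelines,
  # instead of returning only the status of the last command.
  set -euo pipefail
  ```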

* Move cmake install into unified script.

* Move cmake install for ubuntu into centralized script.

* Update cudnn version passed to builds.

* Consolidate installation of packages for efficiency.

* Remove unused containers from docker-compose config.

* Fix pylint.

* Set LD_LIBRARY_PATH on ubuntu_gpu images to find libcuda.so.

* Set CUB_IGNORE_DEPRECATED_CPP_DIALECT to prevent build failures with gcc-4.8 + Cuda 11.7.

* Install sqlite headers/library before building python on ubuntu.

* Revert "Remove unused containers from docker-compose config."

This reverts commit 5de82df.

* Revert "Set CUB_IGNORE_DEPRECATED_CPP_DIALECT to prevent build failures with gcc-4.8 + Cuda 11.7."

This reverts commit e649660.

* Allow building CUB with c++11 to prevent failures on newer cuda versions.

* Set variable only on gpu make builds.

* Use docker-compose to also build ubuntu_cpu image.

* We no longer need to enable python3.8 on aarch64 since we are building from source now.

* Add Cuda 11.1 and 11.3 centos7 images, which are used by the CD testing phase.

* Don't install python-opencv; we install the module via pip instead.

* Change Makefile to set CUB_IGNORE_DEPRECATED_CPP_DIALECT when using Cuda, not only for < 11.0.

* Don't pin down h5py (old versions do not work on aarch64.)

* Conditionally install different versions of h5py depending on architecture.

* Fix value for platform_machine.

* Don't install h5py on aarch64 at all.
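
  The net effect of the h5py commits can be expressed with a PEP 508 environment marker; this entry is illustrative, not the verbatim contents of the CI requirements files:

  ```bash
  # Hypothetical requirements entry: install h5py everywhere except aarch64.
  echo "h5py ; platform_machine != 'aarch64'" > /tmp/requirements-example
  pip3 install -r /tmp/requirements-example
  ```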

* Set USE_LAPACK_PATH to the correct path on Ubuntu 18.04.

* Rearrange dockerfiles to build more efficiently when small changes occur. Split the python install into two steps: building python and installing requirements.

* Since we are not using multi-stage builds, do not specify target to ensure docker cache works as expected.

* When building docker-compose based containers, pull the latest version for caching before building.
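
  A sketch of that pull-then-build flow, assuming the compose file wires the pulled image into the layer cache (service name illustrative):

  ```bash
  docker-compose -f docker/docker-compose.yml pull --quiet ubuntu_cpu
  docker-compose -f docker/docker-compose.yml build ubuntu_cpu
  ```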

* When pulling docker-compose images, pass the quiet option to quell CI logs.

* When pulling docker-compose images, pass the quiet option to quell CI logs.

* Clean up docker cache build code.

* [v1.9.x] Restore Cuda 10.x CD builds (#21128)

* Create Dockerfile for ubuntu CD, add ccache, install cuda repos in base container instead of adding dynamically and requiring more sudo permissions.

* Prevent hanging for user input on package installation.

* Update build configs for cuda 10.0, 10.1 and 10.2 to work with centos7 CD.

* Update links to other versions to include all supported cuda releases.

* Update supported cuda version list.

* Add back support for cuda 10.x, change installation design to require cuda repos to be already set up and accessible in the base containers for simplicity.

* Use correct script name for installing ccache.

* No need to use non-exact matches for variants.

* Standardize name for ccache installation script.

* Update ccache version and clean up install scripts.

* Install libtool in ubuntu CD container.

* Restore Cuda 10.x builds for CD.

* Dynamically determine which dockerfiles are used by docker-compose (instead of having a hard-coded list) so docker cache refresh will finish successfully.

* Remove debug line.

* Define python executable path for tensorrt build.
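
  A hypothetical sketch of that kind of hint for a cmake-based build:

  ```bash
  cmake .. -DPYTHON_EXECUTABLE="$(command -v python3)"
  ```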

* Remove old hacks for changing permissions to /usr/local/bin.

* Install libtool in ubuntu r container.

* Update permissions to allow CI tasks to run.

* Recursively set permissions on deps directory.
josephevans committed Aug 26, 2022
1 parent 702e475 commit e2ed553
Showing 121 changed files with 2,324 additions and 1,887 deletions.
4 changes: 2 additions & 2 deletions CMakeLists.txt
```diff
@@ -147,8 +147,8 @@ if(CMAKE_BUILD_TYPE STREQUAL "Distribution" AND UNIX AND NOT APPLE)
   set(CMAKE_BUILD_WITH_INSTALL_RPATH ON)
   set(CMAKE_INSTALL_RPATH $\{ORIGIN\})
   # Enforce DT_RPATH instead of DT_RUNPATH
-  set(CMAKE_SHARED_LINKER_FLAGS "-Wl,--disable-new-dtags")
-  set(CMAKE_EXE_LINKER_FLAGS "-Wl,--disable-new-dtags")
+  set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -Wl,--disable-new-dtags")
+  set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,--disable-new-dtags")
   set(Protobuf_USE_STATIC_LIBS ON)
 endif()
```
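
With --disable-new-dtags the linker records DT_RPATH instead of DT_RUNPATH, so the $ORIGIN-relative search path also applies to transitive dependencies. A quick check on the resulting library (a sketch; binary name assumed):

```bash
# Expect an RPATH entry, not RUNPATH, on a Distribution build.
readelf -d libmxnet.so | grep -E 'RPATH|RUNPATH'
```
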
3 changes: 2 additions & 1 deletion Makefile
```diff
@@ -579,8 +579,9 @@ ALL_DEP = $(OBJ) $(EXTRA_OBJ) $(PLUGIN_OBJ) $(LIB_DEP)

 ifeq ($(USE_CUDA), 1)
   CUDA_VERSION_MAJOR := $(shell $(NVCC) --version | grep "release" | awk '{print $$6}' | cut -c2- | cut -d '.' -f1)
+  CFLAGS += -DCUB_IGNORE_DEPRECATED_CPP_DIALECT
   ifeq ($(shell test $(CUDA_VERSION_MAJOR) -lt 11; echo $$?), 0)
-    CFLAGS += -I$(ROOTDIR)/3rdparty/nvidia_cub -DCUB_IGNORE_DEPRECATED_CPP_DIALECT
+    CFLAGS += -I$(ROOTDIR)/3rdparty/nvidia_cub
   endif

   ALL_DEP += $(CUOBJ) $(EXTRA_CUOBJ) $(PLUGIN_CUOBJ)
```
2 changes: 1 addition & 1 deletion cd/Jenkinsfile_cd_pipeline
```diff
@@ -36,7 +36,7 @@ pipeline {

     parameters {
         // Release parameters
-        string(defaultValue: "cpu,native,cu100,cu101,cu102,cu110,cu112,aarch64_cpu", description: "Comma separated list of variants", name: "MXNET_VARIANTS")
+        string(defaultValue: "cpu,native,cu100,cu102,cu110,cu111,cu112,cu113,cu114,cu115,cu116,cu117,aarch64_cpu", description: "Comma separated list of variants", name: "MXNET_VARIANTS")
         booleanParam(defaultValue: false, description: 'Whether this is a release build or not', name: "RELEASE_BUILD")
     }
```
2 changes: 1 addition & 1 deletion cd/Jenkinsfile_release_job
```diff
@@ -43,7 +43,7 @@ pipeline {
         // any disruption caused by different COMMIT_ID values changing the job parameter configuration on
         // Jenkins.
         string(defaultValue: "mxnet_lib", description: "Pipeline to build", name: "RELEASE_JOB_TYPE")
-        string(defaultValue: "cpu,native,cu100,cu101,cu102,cu110,cu112,aarch64_cpu", description: "Comma separated list of variants", name: "MXNET_VARIANTS")
+        string(defaultValue: "cpu,native,cu100,cu102,cu110,cu111,cu112,cu113,cu114,cu115,cu116,cu117,aarch64_cpu", description: "Comma separated list of variants", name: "MXNET_VARIANTS")
         booleanParam(defaultValue: false, description: 'Whether this is a release build or not', name: "RELEASE_BUILD")
         string(defaultValue: "nightly_v1.x", description: "String used for naming docker images", name: "VERSION")
```
4 changes: 1 addition & 3 deletions cd/mxnet_lib/Jenkins_pipeline.groovy
```diff
@@ -55,9 +55,7 @@ def build(mxnet_variant) {
   node(NODE_LINUX_CPU) {
     ws("workspace/mxnet_${libtype}/${mxnet_variant}/${env.BUILD_NUMBER}") {
       ci_utils.init_git()
-      // Compiling in Ubuntu14.04 due to glibc issues.
-      // This should be updates once we have clarity on this issue.
-      ci_utils.docker_run('centos7_cpu', "build_static_libmxnet ${mxnet_variant}", false)
+      ci_utils.docker_run('centos7_cd', "build_static_libmxnet ${mxnet_variant}", false)
       ci_utils.pack_lib("mxnet_${mxnet_variant}", libmxnet_pipeline.get_stash(mxnet_variant))
     }
   }
```
32 changes: 25 additions & 7 deletions cd/utils/mxnet_base_image.sh
```diff
@@ -21,20 +21,38 @@

 mxnet_variant=${1:?"Please specify the mxnet variant as the first parameter"}

 case ${mxnet_variant} in
-    cu100*)
+    cu100)
         echo "nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04"
         ;;
-    cu101*)
+    cu101)
         echo "nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04"
         ;;
-    cu102*)
+    cu102)
         echo "nvidia/cuda:10.2-cudnn8-runtime-ubuntu18.04"
         ;;
-    cu110*)
-        echo "nvidia/cuda:11.0-cudnn8-runtime-ubuntu18.04"
+    cu110)
+        echo "nvidia/cuda:11.0.3-cudnn8-runtime-ubuntu18.04"
         ;;
-    cu112*)
-        echo "nvidia/cuda:11.2.1-cudnn8-runtime-ubuntu18.04"
+    cu111)
+        echo "nvidia/cuda:11.1.1-cudnn8-runtime-ubuntu18.04"
+        ;;
+    cu112)
+        echo "nvidia/cuda:11.2.2-cudnn8-runtime-ubuntu18.04"
+        ;;
+    cu113)
+        echo "nvidia/cuda:11.3.1-cudnn8-runtime-ubuntu18.04"
+        ;;
+    cu114)
+        echo "nvidia/cuda:11.4.3-cudnn8-runtime-ubuntu18.04"
+        ;;
+    cu115)
+        echo "nvidia/cuda:11.5.2-cudnn8-runtime-ubuntu18.04"
+        ;;
+    cu116)
+        echo "nvidia/cuda:11.6.2-cudnn8-runtime-ubuntu18.04"
+        ;;
+    cu117)
+        echo "nvidia/cuda:11.7.1-cudnn8-runtime-ubuntu18.04"
         ;;
     cpu)
         echo "ubuntu:18.04"
```
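
Usage sketch for the updated script:

```bash
./cd/utils/mxnet_base_image.sh cu117
# -> nvidia/cuda:11.7.1-cudnn8-runtime-ubuntu18.04
```
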
32 changes: 17 additions & 15 deletions ci/build.py
```diff
@@ -43,11 +43,8 @@

 from util import *

-# Files for docker compose
-DOCKER_COMPOSE_FILES = set(['docker/build.centos7'])
-
 # keywords to identify arm-based dockerfiles
-AARCH_FILE_KEYWORDS = ['aarch64']
+AARCH_FILE_KEYWORDS = ['aarch64', 'armv']

 def get_dockerfiles_path():
     return "docker"
```
```diff
@@ -60,13 +57,20 @@ def get_docker_compose_platforms(path: str = get_dockerfiles_path()):
         platforms.add(platform)
     return platforms

+def get_docker_compose_dockerfiles(path: str = get_dockerfiles_path()):
+    dockerfiles = set()
+    with open(os.path.join(path, "docker-compose.yml"), "r") as f:
+        compose_config = yaml.load(f.read(), yaml.SafeLoader)
+        for platform in compose_config["services"]:
+            dockerfiles.add("docker/" + compose_config['services'][platform]['build']['dockerfile'])
+    return dockerfiles
+
 def get_platforms(path: str = get_dockerfiles_path(), arch=machine()) -> List[str]:
     """Get a list of platforms given our dockerfiles"""
     dockerfiles = glob.glob(os.path.join(path, "Dockerfile.*"))
     dockerfiles = set(filter(lambda x: x[-1] != '~', dockerfiles))
+    dockerfiles = dockerfiles - get_docker_compose_dockerfiles()
     files = set(map(lambda x: re.sub(r"Dockerfile.(.*)", r"\1", x), dockerfiles))
-    files = files - DOCKER_COMPOSE_FILES
     files.update(["build."+x for x in get_docker_compose_platforms()])
     arm_files = set(filter(lambda x: any(y in x for y in AARCH_FILE_KEYWORDS), files))
     if arch == 'x86_64':
```
```diff
@@ -187,11 +191,11 @@ def build_docker(platform: str, registry: str, num_retries: int, no_cache: bool,
         env["DOCKER_CACHE_REGISTRY"] = registry

     @retry(subprocess.CalledProcessError, tries=num_retries)
-    def run_cmd(env=None):
-        logging.info("Running command: '%s'", ' '.join(cmd))
-        check_call(cmd, env=env)
+    def run_cmd(c, e):
+        logging.info("Running command: '%s'", ' '.join(c))
+        check_call(c, env=e)

-    run_cmd(env=env)
+    run_cmd(cmd, env)

     # Get image id by reading the tag. It's guaranteed (except race condition) that the tag exists. Otherwise, the
     # check_call would have failed
```
```diff
@@ -308,23 +312,21 @@ def list_platforms(arch=machine()) -> str:
 def load_docker_cache(platform, tag, docker_registry) -> None:
     """Imports tagged container from the given docker registry"""
     if docker_registry:
+        env = os.environ.copy()
+        env["DOCKER_CACHE_REGISTRY"] = docker_registry
         if is_docker_compose(platform):
             docker_compose_platform = platform.split(".")[1] if any(x in platform for x in ['build.', 'publish.']) else platform
-            env = os.environ.copy()
-            env["DOCKER_CACHE_REGISTRY"] = docker_registry
             if "dkr.ecr" in docker_registry:
                 try:
                     import docker_cache
                     docker_cache._ecr_login(docker_registry)
                 except Exception:
                     logging.exception('Unable to login to ECR...')
-            cmd = ['docker-compose', '-f', 'docker/docker-compose.yml', 'pull', docker_compose_platform]
-            logging.info("Running command: 'DOCKER_CACHE_REGISTRY=%s %s'", docker_registry, ' '.join(cmd))
+            cmd = ['docker-compose', '-f', 'docker/docker-compose.yml', 'pull', '--quiet', docker_compose_platform]
+            logging.info("Running command: '%s'", ' '.join(cmd))
             check_call(cmd, env=env)
             return

-        env = os.environ.copy()
-        env["DOCKER_CACHE_REGISTRY"] = docker_registry
         # noinspection PyBroadException
         try:
             import docker_cache
```
2 changes: 1 addition & 1 deletion ci/dev_menu.py
```diff
@@ -134,7 +134,7 @@ def provision_virtualenv(venv_path=DEFAULT_PYENV):
     ('[Docker] Build the Java API docs - outputs to "docs/scala-package/build/docs/java"',
      "ci/build.py --platform ubuntu_cpu_scala /work/runtime_functions.sh build_java_docs"),
     ('[Docker] Build the Julia API docs - outputs to "julia/docs/site/"',
-     "ci/build.py --platform ubuntu_cpu_julia /work/runtime_functions.sh build_julia_docs"),
+     "ci/build.py --platform ubuntu_cpu /work/runtime_functions.sh build_julia_docs"),
     ('[Docker] Build the R API docs - outputs to "R-package/build/mxnet-r-reference-manual.pdf"',
      "ci/build.py --platform ubuntu_cpu_r /work/runtime_functions.sh build_r_docs"),
     ('[Docker] Build the Scala API docs - outputs to "scala-package/docs/build/docs/scala"',
```
14 changes: 11 additions & 3 deletions ci/docker/Dockerfile.build.centos7
```diff
@@ -31,19 +31,27 @@
 # "--target" option or docker-compose.yml
 ####################################################################################################
 ARG BASE_IMAGE
-FROM $BASE_IMAGE AS base
+FROM $BASE_IMAGE

 WORKDIR /work/deps

 COPY install/centos7_core.sh /work/
 RUN /work/centos7_core.sh

+COPY install/centos7_cmake.sh /work/
+RUN /work/centos7_cmake.sh
+
 COPY install/centos7_ccache.sh /work/
 RUN /work/centos7_ccache.sh
-COPY install/centos7_python.sh /work/
-RUN /work/centos7_python.sh

 COPY install/centos7_scala.sh /work/
 RUN /work/centos7_scala.sh

+COPY install/centos7_python.sh /work/
+RUN /work/centos7_python.sh
+COPY install/requirements /work/
+RUN pip3 install -r /work/requirements
+
 ARG USER_ID=0
 COPY install/centos7_adduser.sh /work/
 RUN /work/centos7_adduser.sh
```
51 changes: 13 additions & 38 deletions ci/docker/Dockerfile.build.centos7_aarch64_cpu
```diff
@@ -19,7 +19,7 @@
 # Dockerfile for CentOS 7 AArch64 CPU build.
 # Via the CentOS 7 Dockerfiles, we ensure MXNet continues to run fine on older systems.

-FROM arm64v8/centos:7
+FROM centos:7

 WORKDIR /work/deps

@@ -39,47 +39,24 @@ RUN yum -y check-update || true && \
     automake \
     autoconf \
     libtool \
-    protobuf-compiler \
-    protobuf-devel \
     # CentOS Software Collections https://www.softwarecollections.org
     devtoolset-10 \
     devtoolset-10-gcc \
     devtoolset-10-gcc-c++ \
     devtoolset-10-gcc-gfortran \
-    rh-python38 \
-    rh-python38-python-numpy \
-    rh-python38-python-scipy \
     # Libraries
-    opencv-devel \
     openssl-devel \
     zeromq-devel \
-    # Build-dependencies for ccache 3.7.9
-    gperf \
-    libb2-devel \
-    libzstd-devel && \
+    hdf5-devel && \
     yum clean all

-# Make Red Hat Developer Toolset 10.0 and Python 3.8 Software Collections available by default
+# Make Red Hat Developer Toolset 10.0 Software Collection available by default
 # during the following build steps in this Dockerfile
-SHELL [ "/usr/bin/scl", "enable", "devtoolset-10", "rh-python38" ]
+SHELL [ "/usr/bin/scl", "enable", "devtoolset-10" ]

-# Install minimum required cmake version
-RUN cd /usr/local/src && \
-    wget -nv https://cmake.org/files/v3.20/cmake-3.20.5-linux-aarch64.sh && \
-    sh cmake-3.20.5-linux-aarch64.sh --prefix=/usr/local --skip-license && \
-    rm cmake-3.20.5-linux-aarch64.sh
+# Fix the en_DK.UTF-8 locale to test locale invariance
+RUN localedef -i en_DK -f UTF-8 en_DK.UTF-8

-# ccache 3.7.9 has fixes for caching nvcc outputs
-RUN cd /usr/local/src && \
-    git clone --recursive https://github.com/ccache/ccache.git && \
-    cd ccache && \
-    git checkout v3.7.9 && \
-    ./autogen.sh && \
-    ./configure --disable-man && \
-    make -j$(nproc) && \
-    make install && \
-    cd /usr/local/src && \
-    rm -rf ccache
+COPY install/centos7_cmake.sh /work/
+RUN /work/centos7_cmake.sh

 # Arm Performance Libraries 21.0
 RUN cd /usr/local/src && \
@@ -89,13 +66,11 @@ RUN cd /usr/local/src && \
     rm -rf arm-performance-libraries_21.0_RHEL-7_gcc-8.2.tar arm-performance-libraries_21.0_RHEL-7_gcc-8.2
 ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/arm/armpl_21.0_gcc-8.2/lib

-# Fix the en_DK.UTF-8 locale to test locale invariance
-RUN localedef -i en_DK -f UTF-8 en_DK.UTF-8
-
-# Python dependencies
-RUN python3 -m pip install --upgrade pip
-COPY install/requirements_aarch64 /work/
-RUN python3 -m pip install -r /work/requirements_aarch64
+# Install Python and dependency packages
+COPY install/centos7_python.sh /work/
+RUN /work/centos7_python.sh
+COPY install/requirements /work/
+RUN pip3 install -r /work/requirements

 ARG USER_ID=0
 COPY install/centos7_adduser.sh /work/
```
61 changes: 61 additions & 0 deletions ci/docker/Dockerfile.build.centos7_cd
@@ -0,0 +1,61 @@ (entire file is new)

```dockerfile
# -*- mode: dockerfile -*-
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Dockerfile to build and run MXNet on CentOS 7 for CPU

FROM centos:7

WORKDIR /work/deps

COPY install/centos7_base.sh /work/
RUN /work/centos7_base.sh

COPY install/centos7_scala.sh /work/
RUN /work/centos7_scala.sh

# Install cmake
COPY install/centos7_cmake.sh /work/
RUN /work/centos7_cmake.sh

COPY install/centos7_ccache.sh /work/
RUN /work/centos7_ccache.sh

# Install tools for static dependency builds
RUN yum install -y sudo patchelf nasm automake libtool file gcc-c++ gcc gcc-gfortran which

# Allow jenkins user to use sudo for installing cuda libraries
RUN echo "jenkins_slave ALL=(root) NOPASSWD: /usr/bin/yum" >> /etc/sudoers.d/10_jenkins_slave

COPY install/centos7_python.sh /work/
RUN /work/centos7_python.sh
COPY install/requirements /work/
RUN pip3 install -r /work/requirements

ARG USER_ID=0
COPY install/centos7_adduser.sh /work/
RUN /work/centos7_adduser.sh

ENV PYTHONPATH=./python/
WORKDIR /work/mxnet

# setup cuda repos
RUN yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel7/$(uname -m)/cuda-rhel7.repo && \
    rpm --import http://developer.download.nvidia.com/compute/machine-learning/repos/rhel7/$(uname -m)/7fa2af80.pub && \
    yum-config-manager --add-repo https://developer.download.nvidia.com/compute/machine-learning/repos/rhel7/$(uname -m)

COPY runtime_functions.sh /work/
```
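
For illustration, the new container would be exercised through the existing CI driver, mirroring the invocations in ci/dev_menu.py and the CD pipeline above (the variant shown is an example):

```bash
ci/build.py --platform centos7_cd /work/runtime_functions.sh build_static_libmxnet cu117
```
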
9 changes: 3 additions & 6 deletions ci/docker/Dockerfile.build.jetson
```diff
@@ -43,12 +43,9 @@ RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
     crossbuild-essential-arm64 \
     && rm -rf /var/lib/apt/lists/*

-# cmake on Ubuntu 18.04 is too old
-RUN python3 -m pip install cmake
-
-# ccache on Ubuntu 18.04 is too old to support Cuda correctly
-COPY install/deb_ubuntu_ccache.sh /work/
-RUN /work/deb_ubuntu_ccache.sh
+# Install cmake
+COPY install/ubuntu_cmake.sh /work/
+RUN /work/ubuntu_cmake.sh

 COPY toolchains/aarch64-linux-gnu-toolchain.cmake /usr
 ENV CMAKE_TOOLCHAIN_FILE=/usr/aarch64-linux-gnu-toolchain.cmake
```
(Diff listing truncated; the remaining changed files are not shown here.)