Merged
101 commits
1800843
first changes after merge
sandeepgupta12 Jun 30, 2025
ef50727
first changes after merge
sandeepgupta12 Jun 30, 2025
fb19a5e
first changes after merge
sandeepgupta12 Jun 30, 2025
1c7544f
using p9 runner
sandeepgupta12 Jun 30, 2025
9087df0
added flag for linker error
sandeepgupta12 Jun 30, 2025
009bb0b
removed memory flag
sandeepgupta12 Jun 30, 2025
b0f7497
fixed docker image step
sandeepgupta12 Jul 1, 2025
131242c
using staging runner
sandeepgupta12 Jul 7, 2025
81a6aa2
using staging runner
sandeepgupta12 Jul 7, 2025
8972274
using aa name
sandeepgupta12 Jul 7, 2025
50b657b
autoconf fix
sandeepgupta12 Jul 8, 2025
022dd2d
using prod runner
sandeepgupta12 Jul 8, 2025
730f55f
using 4x runner
sandeepgupta12 Jul 10, 2025
00ed105
using 2x runner
sandeepgupta12 Jul 15, 2025
fbcf55b
added yml file for ppc docker image
sandeepgupta12 Jul 16, 2025
61ae644
using normal runner
sandeepgupta12 Jul 16, 2025
e06b21c
using normal runner minimal dockerfile
sandeepgupta12 Jul 16, 2025
3d43232
using normal runner minimal dockerfile1
sandeepgupta12 Jul 16, 2025
880c313
using normal runner minimal dockerfile2
sandeepgupta12 Jul 16, 2025
fd9face
using normal runner minimal dockerfile2
sandeepgupta12 Jul 16, 2025
34a91e6
using normal runner minimal dockerfile3
sandeepgupta12 Jul 16, 2025
eaa5cd0
using normal runner minimal dockerfile4
sandeepgupta12 Jul 16, 2025
db5231a
using normal runner minimal dockerfile5
sandeepgupta12 Jul 17, 2025
b057381
using normal runner minimal dockerfile6
sandeepgupta12 Jul 17, 2025
7c601b8
using normal runner minimal dockerfile7
sandeepgupta12 Jul 17, 2025
3c7df6e
using normal runner minimal dockerfile8
sandeepgupta12 Jul 17, 2025
a4c93f8
using normal runner minimal dockerfile9
sandeepgupta12 Jul 17, 2025
b306f4b
no cache docker 1
sandeepgupta12 Jul 17, 2025
24ab7c5
no cache docker 1
sandeepgupta12 Jul 17, 2025
c163b83
stagging 1
sandeepgupta12 Jul 18, 2025
19d7ec0
using 2xlarge
sandeepgupta12 Jul 18, 2025
07f58cc
using 4xlarge
sandeepgupta12 Jul 18, 2025
5358083
using 2xlarge
sandeepgupta12 Jul 18, 2025
10c84ca
using 2xlarge
sandeepgupta12 Jul 18, 2025
4e047b8
using 2xlarge
sandeepgupta12 Jul 18, 2025
97cace1
using 2xlarge
sandeepgupta12 Jul 18, 2025
b4e5d11
using 4x runner
sandeepgupta12 Jul 23, 2025
07efae6
using network flag 4x large
sandeepgupta12 Jul 23, 2025
e1283bb
using 4x large
sandeepgupta12 Jul 23, 2025
5c8f715
sync with master brnach
sandeepgupta12 Jul 28, 2025
d0f8740
add pip install wheel in dockerfile
sandeepgupta12 Jul 28, 2025
8509185
fixed both workflow
sandeepgupta12 Jul 28, 2025
426aa83
using cmake 3.28
sandeepgupta12 Jul 28, 2025
9a0e8a7
added requirement-build.txt
sandeepgupta12 Jul 28, 2025
be40f39
using cmake 3.28
sandeepgupta12 Jul 28, 2025
0908c01
using reauirement file
sandeepgupta12 Jul 29, 2025
fe8f3a9
using reauirement file
sandeepgupta12 Jul 29, 2025
d3a64df
pyyaml solution
sandeepgupta12 Jul 29, 2025
bd4d25e
pyyaml solution 2
sandeepgupta12 Jul 29, 2025
e8750d2
cake error
sandeepgupta12 Jul 30, 2025
8f08ace
cake error 2
sandeepgupta12 Jul 30, 2025
d307933
updated build script and cokerfile
sandeepgupta12 Jul 30, 2025
239ac16
removed cpython 3.14
sandeepgupta12 Jul 30, 2025
cd36be6
Merge branch 'pytorch:main' into temp-gha-runner-v3
sandeepgupta12 Jul 30, 2025
c7e63c4
using s390 solution
sandeepgupta12 Jul 30, 2025
ee0db2a
using s390 solution
sandeepgupta12 Jul 30, 2025
d982297
using s390 solution
sandeepgupta12 Jul 30, 2025
9d129bc
Merge branch 'temp-gha-runner-v3' of https://github.com/sandeepgupta1…
sandeepgupta12 Jul 30, 2025
788e64d
cpu error
sandeepgupta12 Jul 30, 2025
b59b99b
vec error
sandeepgupta12 Jul 30, 2025
6c6c267
Merge branch 'pytorch:main' into temp-gha-runner-v3
sandeepgupta12 Jul 31, 2025
c9cfcfd
ld path
sandeepgupta12 Jul 31, 2025
30f8d2b
ld path and old linker off
sandeepgupta12 Jul 31, 2025
ee92a60
ld path and old linker off
sandeepgupta12 Jul 31, 2025
2df04ca
using old patch for vec error
sandeepgupta12 Aug 1, 2025
952a167
using 4x runner for ppc
sandeepgupta12 Aug 1, 2025
f61dd36
using 4x runner for ppc 2
sandeepgupta12 Aug 1, 2025
0ca5ca7
USE_MKLDNN 0 in build.sh
sandeepgupta12 Aug 1, 2025
7d01bf2
Merge branch 'pytorch:main' into temp-gha-runner-v3
sandeepgupta12 Aug 4, 2025
609e852
using devtool 13
sandeepgupta12 Aug 4, 2025
fc7e568
using devtool 13
sandeepgupta12 Aug 4, 2025
02abdcc
without werror
sandeepgupta12 Aug 4, 2025
2f1586c
using devtool 14
sandeepgupta12 Aug 4, 2025
9460072
Merge branch 'pytorch:main' into temp-gha-runner-v3
sandeepgupta12 Aug 4, 2025
a8f17ad
using devtool 14
sandeepgupta12 Aug 4, 2025
2ffafc8
Merge branch 'temp-gha-runner-v3' of https://github.com/sandeepgupta1…
sandeepgupta12 Aug 4, 2025
7b5b444
skipped autoconf
sandeepgupta12 Aug 4, 2025
93b8783
added test job in nightly build
sandeepgupta12 Aug 5, 2025
22853f7
second job changed tag of docker image
sandeepgupta12 Aug 5, 2025
38f4390
docker mage job
sandeepgupta12 Aug 5, 2025
13ab1d9
docker mage job
sandeepgupta12 Aug 5, 2025
3841a98
fixed test script error
sandeepgupta12 Aug 5, 2025
dacdf3b
fixed test script error
sandeepgupta12 Aug 5, 2025
c52ea3c
added 3.10 job
sandeepgupta12 Aug 6, 2025
dc17075
added 3.10 job
sandeepgupta12 Aug 6, 2025
218bad4
added 3.10 and 3.11 job
sandeepgupta12 Aug 6, 2025
9812c80
added 3.10 and 3.12 job
sandeepgupta12 Aug 6, 2025
a06d8a8
added 3.13 and 3.14 job
sandeepgupta12 Aug 6, 2025
df96e4c
chore: dummy commit to trigger CI
sandeepgupta12 Sep 16, 2025
dd1160c
using normal runner
sandeepgupta12 Sep 16, 2025
c2e9d00
using normal runner for ppc
sandeepgupta12 Sep 16, 2025
677b7f6
ci: trigger build
sandeepgupta12 Nov 5, 2025
e639b82
running parallel jobs
sandeepgupta12 Nov 6, 2025
805d02f
running parallel jobs for all python version for nightly builds
sandeepgupta12 Nov 18, 2025
8282daf
running parallel jobs for all python version for nightly builds
sandeepgupta12 Nov 18, 2025
28831f8
running jobs with 4x large runner
sandeepgupta12 Nov 18, 2025
5e09e41
running all parallel jobs
sandeepgupta12 Nov 19, 2025
6572072
running all parallel jobs on p10 runner
sandeepgupta12 Nov 19, 2025
184664d
running all parallel jobs on p10 runner
sandeepgupta12 Nov 19, 2025
5e98a5c
running jobs on 4xlarge runner
sandeepgupta12 Nov 19, 2025
6736267
Merge branch 'temp-gha-runner-v4' into temp-gha-runner-v3
sandeepgupta12 Dec 1, 2025
130 changes: 130 additions & 0 deletions .ci/docker/manywheel/Dockerfile_ppc64le
@@ -0,0 +1,130 @@
FROM quay.io/pypa/manylinux_2_28_ppc64le as base

# Language variables
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
ENV LANGUAGE=C.UTF-8

# there is a bugfix in gcc >= 14 for precompiled headers and s390x vectorization interaction.
# with earlier gcc versions test/inductor/test_cpu_cpp_wrapper.py will fail.
ARG DEVTOOLSET_VERSION=14
# Install needed OS packages. This is to support all
# the binary builds (torch, vision, audio, text, data)
RUN yum -y install epel-release
RUN yum -y update
RUN yum install -y \
sudo \
autoconf \
automake \
bison \
bzip2 \
curl \
diffutils \
file \
git \
make \
patch \
perl \
unzip \
util-linux \
wget \
which \
xz \
yasm \
less \
zstd \
libgomp \
gcc-toolset-${DEVTOOLSET_VERSION}-gcc \
gcc-toolset-${DEVTOOLSET_VERSION}-gcc-c++ \
gcc-toolset-${DEVTOOLSET_VERSION}-binutils \
gcc-toolset-${DEVTOOLSET_VERSION}-gcc-gfortran \
cmake \
rust \
cargo \
llvm-devel \
libzstd-devel \
python3.12-devel \
python3.12-test \
python3.12-setuptools \
python3.12-pip \
python3-virtualenv \
python3.12-pyyaml \
python3.12-numpy \
python3.12-wheel \
python3.12-cryptography \
blas-devel \
openblas-devel \
lapack-devel \
atlas-devel \
libjpeg-devel \
libxslt-devel \
libxml2-devel \
openssl-devel \
valgrind \
ninja-build

ENV PATH=/opt/rh/gcc-toolset-${DEVTOOLSET_VERSION}/root/usr/bin:$PATH
ENV LD_LIBRARY_PATH=/opt/rh/gcc-toolset-${DEVTOOLSET_VERSION}/root/usr/lib64:/opt/rh/gcc-toolset-${DEVTOOLSET_VERSION}/root/usr/lib:$LD_LIBRARY_PATH
ENV CC=/opt/rh/gcc-toolset-${DEVTOOLSET_VERSION}/root/usr/bin/gcc
ENV CXX=/opt/rh/gcc-toolset-${DEVTOOLSET_VERSION}/root/usr/bin/g++
ENV LD=/opt/rh/gcc-toolset-${DEVTOOLSET_VERSION}/root/usr/bin/ld
ENV CFLAGS=""
ENV CXXFLAGS=""
# git 2.36+ refuses to run git commands in repos owned by other users,
# which causes the version check to fail because the pytorch repo is bind-mounted into the image.
# Override this behaviour by treating every folder as safe.
# For more details see https://github.com/pytorch/pytorch/issues/78659#issuecomment-1144107327
RUN git config --global --add safe.directory "*"

# installed python doesn't have development parts. Rebuild it from scratch
RUN /bin/rm -rf /opt/_internal /opt/python /usr/local/*/*

# EPEL for cmake
FROM base as patchelf
RUN git clone --depth 1 --branch temp-gha-runner-v3 https://github.com/sandeepgupta12/pytorch.git /tmp/pytorch && \
mkdir -p /build_scripts && \
cp /tmp/pytorch/.ci/docker/common/install_patchelf.sh /build_scripts/install_patchelf.sh
# Install patchelf
#ADD ./common/install_patchelf.sh install_patchelf.sh
#RUN bash ./install_patchelf.sh && rm install_patchelf.sh
RUN bash /build_scripts/install_patchelf.sh && rm -r /build_scripts
RUN cp $(which patchelf) /patchelf

FROM patchelf as python
# build python
# Clone only required scripts from the PyTorch repo
RUN mkdir -p /build_scripts && \
cp -r /tmp/pytorch/.ci/docker/manywheel/build_scripts/* /build_scripts/ && \
cp /tmp/pytorch/.ci/docker/common/install_cpython.sh /build_scripts/install_cpython.sh && \
rm -rf /tmp/pytorch
#COPY manywheel/build_scripts /build_scripts
#ADD ./common/install_cpython.sh /build_scripts/install_cpython.sh
ENV SSL_CERT_FILE=
#RUN bash build_scripts/build.sh && rm -r build_scripts
RUN bash /build_scripts/build.sh && rm -r /build_scripts

FROM base as final
COPY --from=python /opt/python /opt/python
COPY --from=python /opt/_internal /opt/_internal
COPY --from=python /opt/python/cp39-cp39/bin/auditwheel /usr/local/bin/auditwheel
COPY --from=patchelf /usr/local/bin/patchelf /usr/local/bin/patchelf

RUN alternatives --set python /usr/bin/python3.12
RUN alternatives --set python3 /usr/bin/python3.12

RUN pip-3.12 install typing_extensions

ENTRYPOINT []
CMD ["/bin/bash"]

# install test dependencies:
# - grpcio requires system openssl, bundled crypto fails to build
RUN dnf install -y \
hdf5-devel \
python3-h5py \
git


#RUN env GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=True pip3 install grpcio
# cmake-3.28.0 from pip for onnxruntime
RUN python3 -mpip install cmake==3.28.0
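
For local verification, the image can be built roughly as follows (a sketch: the tag and the final target mirror the build.sh entry added below, while the build context path is an assumption since the Dockerfile clones everything it needs):

docker build \
  -f .ci/docker/manywheel/Dockerfile_ppc64le \
  --target final \
  -t manylinuxppc64le-builder:cpu-ppc64le \
  .ci/docker/manywheel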
8 changes: 7 additions & 1 deletion .ci/docker/manywheel/build.sh
@@ -49,6 +49,12 @@ case ${image} in
DOCKER_GPU_BUILD_ARG=""
MANY_LINUX_VERSION="s390x"
;;
manylinuxppc64le-builder:cpu-ppc64le)
TARGET=final
GPU_IMAGE=redhat/ubi9
DOCKER_GPU_BUILD_ARG=""
MANY_LINUX_VERSION="ppc64le"
;;
manylinux2_28-builder:cuda11*)
TARGET=cuda_final
GPU_IMAGE=amd64/almalinux:8
@@ -106,7 +112,7 @@ if [[ -n ${MANY_LINUX_VERSION} && -z ${DOCKERFILE_SUFFIX} ]]; then
DOCKERFILE_SUFFIX=_${MANY_LINUX_VERSION}
fi
# Only activate this if in CI
if [ "$(uname -m)" != "s390x" ] && [ -v CI ]; then
if [ "$(uname -m)" != "s390x" ] && "$(uname -m)" != "ppc64le" ] && [ -v CI ]; then
# TODO: Remove LimitNOFILE=1048576 patch once https://github.com/pytorch/test-infra/issues/5712
# is resolved. This patch is required in order to fix timing out of Docker build on Amazon Linux 2023.
sudo sed -i s/LimitNOFILE=infinity/LimitNOFILE=1048576/ /usr/lib/systemd/system/docker.service
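
For reference, the new case arm is selected when this wrapper is invoked with the ppc64le image name, along the lines of the following (a sketch; it assumes the image name is passed as the first argument, as with the existing s390x entry):

.ci/docker/manywheel/build.sh manylinuxppc64le-builder:cpu-ppc64le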
35 changes: 33 additions & 2 deletions .ci/docker/manywheel/build_scripts/build.sh
@@ -20,7 +20,7 @@ AUTOCONF_HASH=954bd69b391edc12d6a4a51a2dd1476543da5c6bbf05a95b59dc0dd6fd4c2969
# the final image after compiling Python
PYTHON_COMPILE_DEPS="zlib-devel bzip2-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel libpcap-devel xz-devel libffi-devel"

if [ "$(uname -m)" != "s390x" ] ; then
if [ "$(uname -m)" != "s390x" ] && [ "$(uname -m)" != "ppc64le" ] ; then
PYTHON_COMPILE_DEPS="${PYTHON_COMPILE_DEPS} db4-devel"
else
PYTHON_COMPILE_DEPS="${PYTHON_COMPILE_DEPS} libdb-devel"
@@ -39,7 +39,38 @@ yum -y install bzip2 make git patch unzip bison yasm diffutils \
${PYTHON_COMPILE_DEPS}

# Install newest autoconf
build_autoconf $AUTOCONF_ROOT $AUTOCONF_HASH
# If the architecture is not ppc64le, use the existing build_autoconf function
if [ "$(uname -m)" != "ppc64le" ] ; then
build_autoconf $AUTOCONF_ROOT $AUTOCONF_HASH
# else
# curl -sLO http://ftp.gnu.org/gnu/autoconf/$AUTOCONF_ROOT.tar.gz

# echo "$AUTOCONF_HASH $AUTOCONF_ROOT.tar.gz" | sha256sum -c -

# tar -xzf $AUTOCONF_ROOT.tar.gz
# cd $AUTOCONF_ROOT

# mkdir -p build-aux

# curl -fLo /tmp/config.guess https://git.savannah.gnu.org/cgit/config.git/plain/config.guess
# curl -fLo /tmp/config.sub https://git.savannah.gnu.org/cgit/config.git/plain/config.sub

# chmod +x /tmp/config.guess /tmp/config.sub

# mv /tmp/config.guess build-aux/config.guess
# mv /tmp/config.sub build-aux/config.sub

# ls -lh build-aux/config.*

# ./build-aux/config.guess || echo "Failed to detect architecture"

# ./configure --host=powerpc64le-pc-linux-gnu
# make -j$(nproc)
# make install

# cd ..
# rm -rf $AUTOCONF_ROOT $AUTOCONF_ROOT.tar.gz
fi
autoconf --version

# Compile the latest Python releases.
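
The commented-out fallback above amounts to refreshing autoconf's config scripts so configure recognizes the ppc64le triplet; a condensed sketch of that idea (kept disabled in the actual script):

# hypothetical fallback, not enabled in build.sh
curl -fLo build-aux/config.guess https://git.savannah.gnu.org/cgit/config.git/plain/config.guess
curl -fLo build-aux/config.sub https://git.savannah.gnu.org/cgit/config.git/plain/config.sub
chmod +x build-aux/config.guess build-aux/config.sub
./configure --host=powerpc64le-pc-linux-gnu && make -j"$(nproc)" && make install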
2 changes: 1 addition & 1 deletion .ci/docker/manywheel/build_scripts/manylinux1-check.py
@@ -5,7 +5,7 @@ def is_manylinux1_compatible():
# Only Linux, and only x86-64 / i686
from distutils.util import get_platform

if get_platform() not in ["linux-x86_64", "linux-i686", "linux-s390x"]:
if get_platform() not in ["linux-x86_64", "linux-i686", "linux-s390x", "linux-ppc64le"]:
return False

# Check for presence of _manylinux module
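
A quick way to confirm what get_platform() reports on the target host (a sketch; it assumes distutils is still importable in the interpreter running the check):

python3 -c "from distutils.util import get_platform; print(get_platform())"
# expected on a ppc64le manylinux host: linux-ppc64le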
2 changes: 1 addition & 1 deletion .ci/manywheel/build.sh
@@ -11,7 +11,7 @@ case "${GPU_ARCH_TYPE:-BLANK}" in
rocm)
bash "${SCRIPTPATH}/build_rocm.sh"
;;
cpu | cpu-cxx11-abi | cpu-aarch64 | cpu-s390x)
cpu | cpu-cxx11-abi | cpu-aarch64 | cpu-s390x | cpu-ppc64le)
bash "${SCRIPTPATH}/build_cpu.sh"
;;
xpu)
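
With this case arm, a ppc64le CPU build dispatches to the same build_cpu.sh path as the other CPU architectures, roughly (a sketch; GPU_ARCH_TYPE is normally exported by the calling workflow):

GPU_ARCH_TYPE=cpu-ppc64le bash .ci/manywheel/build.sh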
4 changes: 2 additions & 2 deletions .ci/manywheel/build_common.sh
@@ -363,7 +363,7 @@ for pkg in /$WHEELHOUSE_DIR/torch_no_python*.whl /$WHEELHOUSE_DIR/torch*linux*.w
done

# create Manylinux 2_28 tag this needs to happen before regenerate the RECORD
if [[ $PLATFORM == "manylinux_2_28_x86_64" && $GPU_ARCH_TYPE != "cpu-s390x" && $GPU_ARCH_TYPE != "xpu" ]]; then
if [[ $PLATFORM == "manylinux_2_28_x86_64" && $GPU_ARCH_TYPE != "cpu-s390x" && $GPU_ARCH_TYPE != "cpu-ppc64le" && $GPU_ARCH_TYPE != "xpu" ]]; then
wheel_file=$(echo $(basename $pkg) | sed -e 's/-cp.*$/.dist-info\/WHEEL/g')
sed -i -e s#linux_x86_64#"${PLATFORM}"# $wheel_file;
fi
@@ -412,7 +412,7 @@ for pkg in /$WHEELHOUSE_DIR/torch_no_python*.whl /$WHEELHOUSE_DIR/torch*linux*.w
fi

# Rename wheel for Manylinux 2_28
if [[ $PLATFORM == "manylinux_2_28_x86_64" && $GPU_ARCH_TYPE != "cpu-s390x" && $GPU_ARCH_TYPE != "xpu" ]]; then
if [[ $PLATFORM == "manylinux_2_28_x86_64" && $GPU_ARCH_TYPE != "cpu-s390x" && $GPU_ARCH_TYPE != "cpu-ppc64le" && $GPU_ARCH_TYPE != "xpu" ]]; then
pkg_name=$(echo $(basename $pkg) | sed -e s#linux_x86_64#"${PLATFORM}"#)
zip -rq $pkg_name $PREIX*
rm -f $pkg
2 changes: 2 additions & 0 deletions .ci/manywheel/build_cpu.sh
@@ -65,6 +65,8 @@ elif [[ "$OS_NAME" == *"AlmaLinux"* ]]; then
elif [[ "$OS_NAME" == *"Ubuntu"* ]]; then
if [[ "$ARCH" == "s390x" ]]; then
LIBGOMP_PATH="/usr/lib/s390x-linux-gnu/libgomp.so.1"
elif [[ "$(uname -m)" == "ppc64le" ]]; then
LIBGOMP_PATH="/usr/lib64/libgomp.so.1"
elif [[ "$ARCH" == "aarch64" ]]; then
LIBGOMP_PATH="/usr/lib/aarch64-linux-gnu/libgomp.so.1"
else
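
If the hard-coded path needs verifying on a ppc64le host, a quick check might be (a sketch):

ls -l /usr/lib64/libgomp.so.1 || ldconfig -p | grep libgomp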
16 changes: 12 additions & 4 deletions .ci/pytorch/build.sh
@@ -1,7 +1,10 @@
#!/bin/bash

set -ex -o pipefail

# which python
# export PATH="/opt/python/cp312-cp312/bin:$PATH"
# export PYTHON_EXECUTABLE="/opt/python/cp312-cp312/bin/python3.12"
# which python
# Required environment variable: $BUILD_ENVIRONMENT
# (This is set by default in the Docker images we build, so you don't
# need to set it yourself.
@@ -111,6 +114,11 @@ if [[ "$BUILD_ENVIRONMENT" == *riscv64* ]]; then
export SLEEF_TARGET_EXEC_USE_QEMU=ON
sudo chown -R jenkins /var/lib/jenkins/workspace /opt

fi
if [[ "$BUILD_ENVIRONMENT" == *ppc64le* ]]; then
export USE_MKLDNN=0
export USE_MKLDNN_ACL=0

fi

if [[ "$BUILD_ENVIRONMENT" == *libtorch* ]]; then
@@ -243,7 +251,7 @@ fi

# Do not change workspace permissions for ROCm and s390x CI jobs
# as it can leave workspace with bad permissions for cancelled jobs
if [[ "$BUILD_ENVIRONMENT" != *rocm* && "$BUILD_ENVIRONMENT" != *s390x* && "$BUILD_ENVIRONMENT" != *riscv64* && -d /var/lib/jenkins/workspace ]]; then
if [[ "$BUILD_ENVIRONMENT" != *rocm* && "$BUILD_ENVIRONMENT" != *s390x* && "$BUILD_ENVIRONMENT" != *ppc64le* && "$BUILD_ENVIRONMENT" != *riscv64* && -d /var/lib/jenkins/workspace ]]; then
# Workaround for dind-rootless userid mapping (https://github.com/pytorch/ci-infra/issues/96)
WORKSPACE_ORIGINAL_OWNER_ID=$(stat -c '%u' "/var/lib/jenkins/workspace")
cleanup_workspace() {
@@ -288,7 +296,7 @@ else
# XLA test build fails when WERROR=1
# set only when building other architectures
# or building non-XLA tests.
if [[ "$BUILD_ENVIRONMENT" != *rocm* && "$BUILD_ENVIRONMENT" != *xla* && "$BUILD_ENVIRONMENT" != *riscv64* ]]; then
if [[ "$BUILD_ENVIRONMENT" != *rocm* && "$BUILD_ENVIRONMENT" != *xla* && "$BUILD_ENVIRONMENT" != *ppc64le* && "$BUILD_ENVIRONMENT" != *riscv64* ]]; then
# Install numpy-2.0.2 for builds which are backward compatible with 1.X
python -mpip install numpy==2.0.2

@@ -431,6 +439,6 @@ if [[ "$BUILD_ENVIRONMENT" != *libtorch* && "$BUILD_ENVIRONMENT" != *bazel* ]];
PYTHONPATH=. python tools/stats/export_test_times.py
fi
# don't do this for bazel or s390x or riscv64 as they don't use sccache
if [[ "$BUILD_ENVIRONMENT" != *s390x* && "$BUILD_ENVIRONMENT" != *riscv64* && "$BUILD_ENVIRONMENT" != *-bazel-* ]]; then
if [[ "$BUILD_ENVIRONMENT" != *s390x* && "$BUILD_ENVIRONMENT" != *ppc64le* && "$BUILD_ENVIRONMENT" != *riscv64* && "$BUILD_ENVIRONMENT" != *-bazel-* ]]; then
print_sccache_stats
fi
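
Putting the ppc64le-specific switches together, a local run of the build script might look like this (a sketch; the BUILD_ENVIRONMENT value follows the workflow naming used elsewhere in this PR and is otherwise an assumption):

export BUILD_ENVIRONMENT=linux-ppc64le-binary-manywheel
export USE_MKLDNN=0       # disabled for ppc64le by the block above
export USE_MKLDNN_ACL=0
bash .ci/pytorch/build.sh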
6 changes: 3 additions & 3 deletions .ci/pytorch/check_binary.sh
@@ -189,7 +189,7 @@ fi
if [[ "$PACKAGE_TYPE" == 'libtorch' ]]; then
echo "Checking that MKL is available"
build_and_run_example_cpp check-torch-mkl
elif [[ "$(uname -m)" != "arm64" && "$(uname -m)" != "s390x" ]]; then
elif [[ "$(uname -m)" != "arm64" && "$(uname -m)" != "s390x" && "$(uname -m)" != "ppc64le" ]]; then
if [[ "$(uname)" != 'Darwin' || "$PACKAGE_TYPE" != *wheel ]]; then
if [[ "$(uname -m)" == "aarch64" ]]; then
echo "Checking that MKLDNN is available on aarch64"
@@ -213,7 +213,7 @@ if [[ "$PACKAGE_TYPE" == 'libtorch' ]]; then
echo "Checking that XNNPACK is available"
build_and_run_example_cpp check-torch-xnnpack
else
if [[ "$(uname)" != 'Darwin' || "$PACKAGE_TYPE" != *wheel ]] && [[ "$(uname -m)" != "s390x" ]]; then
if [[ "$(uname)" != 'Darwin' || "$PACKAGE_TYPE" != *wheel ]] && [[ "$(uname -m)" != "s390x" && "$(uname -m)" != "ppc64le" ]]; then
echo "Checking that XNNPACK is available"
pushd /tmp
python -c 'import torch.backends.xnnpack; exit(0 if torch.backends.xnnpack.enabled else 1)'
@@ -243,7 +243,7 @@ fi

# Test that CUDA builds are setup correctly
# Skip CUDA hardware checks for aarch64 as they run on CPU-only runners
if [[ "$DESIRED_CUDA" != 'cpu' && "$DESIRED_CUDA" != 'xpu' && "$DESIRED_CUDA" != 'cpu-cxx11-abi' && "$DESIRED_CUDA" != *"rocm"* && "$(uname -m)" != "s390x" && "$(uname -m)" != "aarch64" ]]; then
if [[ "$DESIRED_CUDA" != 'cpu' && "$DESIRED_CUDA" != 'xpu' && "$DESIRED_CUDA" != 'cpu-cxx11-abi' && "$DESIRED_CUDA" != *"rocm"* && "$(uname -m)" != "s390x" && "$(uname -m)" != "ppc64le" && "$(uname -m)" != "aarch64" ]]; then
if [[ "$PACKAGE_TYPE" == 'libtorch' ]]; then
build_and_run_example_cpp check-torch-cuda
else
8 changes: 4 additions & 4 deletions .circleci/scripts/binary_populate_env.sh
@@ -39,9 +39,9 @@ fi

USE_GOLD_LINKER="OFF"
# GOLD linker can not be used if CUPTI is statically linked into PyTorch, see https://github.com/pytorch/pytorch/issues/57744
if [[ ${DESIRED_CUDA} == "cpu" ]]; then
USE_GOLD_LINKER="ON"
fi
# if [[ ${DESIRED_CUDA} == "cpu" ]]; then
# USE_GOLD_LINKER="ON"
# fi


# Default to nightly, since that's where this normally uploads to
@@ -112,7 +112,7 @@ if [[ "$PACKAGE_TYPE" =~ .*wheel.* && -n "$PYTORCH_BUILD_VERSION" && "$PYTORCH_B
fi

USE_GLOO_WITH_OPENSSL="ON"
if [[ "$GPU_ARCH_TYPE" =~ .*aarch64.* ]]; then
if [[ "$GPU_ARCH_TYPE" =~ .*cpu-ppc64le.* ]]; then
USE_GLOO_WITH_OPENSSL="OFF"
USE_GOLD_LINKER="OFF"
fi
4 changes: 2 additions & 2 deletions .github/actions/test-pytorch-binary/action.yml
@@ -40,9 +40,9 @@ runs:
docker exec -t "${container_name}" bash -c "source ${BINARY_ENV_FILE} && bash -x /run.sh"

- name: Cleanup docker
if: always() && (env.BUILD_ENVIRONMENT == 'linux-s390x-binary-manywheel' || env.GPU_ARCH_TYPE == 'xpu')
if: always() && (env.BUILD_ENVIRONMENT == 'linux-s390x-binary-manywheel' || env.BUILD_ENVIRONMENT == 'linux-ppc64le-binary-manywheel' || env.GPU_ARCH_TYPE == 'xpu')
shell: bash
run: |
# on s390x or xpu stop the container for clean worker stop
# on s390x or ppc64le or xpu stop the container for clean worker stop
# shellcheck disable=SC2046
docker stop "${{ env.CONTAINER_NAME }}" || true