Initial config and linux-x86_64-AVX2-cmake build job only #3390

Closed. Wants to merge 1 commit.
.github/workflows/actions/build_cmake/action.yml (103 additions, 0 deletions)
@@ -0,0 +1,103 @@
name: Build cmake
inputs:
  opt_level:
    description: 'The optimization level'
    required: false
    default: generic
  gpu:
    description: 'The GPU to use'
    required: false
    default: OFF
  raft:
    description: 'The raft to use'
    required: false
    default: OFF
runs:
  using: composite
  steps:
    - name: Setup miniconda
      uses: conda-incubator/setup-miniconda@v3.0.3
      with:
        python-version: '3.11'
        miniconda-version: latest
    - name: Set up environment
      shell: bash
      run: |
        conda config --set solver libmamba
        conda update -y -q conda
    - name: Install env using main channel
      if: inputs.raft == 'OFF'
      shell: bash
      run: |
        conda install -y -q python=3.11 cmake make swig=4.0.2 mkl=2023 mkl-devel=2023 numpy scipy pytest gxx_linux-64 sysroot_linux-64
    - name: Install env using conda-forge channel
      if: inputs.raft == 'ON'
      shell: bash
      run: |
        conda install -y -q python=3.11 cmake make swig=4.0.2 mkl=2023 mkl-devel=2023 numpy scipy pytest gxx_linux-64 sysroot_linux-64=2.28 libraft cuda-version=11.8 cuda-toolkit -c rapidsai-nightly -c "nvidia/label/cuda-11.8.0" -c conda-forge
    - name: Install CUDA
      if: inputs.gpu == 'ON' && inputs.raft == 'OFF'
      shell: bash
      run: |
        conda install -y -q cuda-toolkit -c "nvidia/label/cuda-11.8.0"
    - name: Build all targets
      shell: bash
      run: |
        eval "$(conda shell.bash hook)"
        conda activate
        cmake -B build \
          -DBUILD_TESTING=ON \
          -DBUILD_SHARED_LIBS=ON \
          -DFAISS_ENABLE_GPU=${{ inputs.gpu }} \
          -DFAISS_ENABLE_RAFT=${{ inputs.raft }} \
          -DFAISS_OPT_LEVEL=${{ inputs.opt_level }} \
          -DFAISS_ENABLE_C_API=ON \
          -DPYTHON_EXECUTABLE=$CONDA/bin/python \
          -DCMAKE_BUILD_TYPE=Release \
          -DBLA_VENDOR=Intel10_64_dyn \
          -DCMAKE_CUDA_FLAGS="-gencode arch=compute_75,code=sm_75" \
          .
        make -k -C build -j$(nproc)
    - name: C++ tests
      shell: bash
      run: |
        export GTEST_OUTPUT="xml:$(realpath .)/test-results/googletest/"
        make -C build test
    - name: Install Python extension
      shell: bash
      working-directory: build/faiss/python
      run: |
        $CONDA/bin/python setup.py install
    - name: Install pytest
      shell: bash
      run: |
        conda install -y pytest
        echo "$CONDA/bin" >> $GITHUB_PATH
    - name: Python tests (CPU only)
      if: inputs.gpu == 'OFF'
      shell: bash
      run: |
        conda install -y -q pytorch -c pytorch
        pytest --junitxml=test-results/pytest/results.xml tests/test_*.py
        pytest --junitxml=test-results/pytest/results-torch.xml tests/torch_*.py
    - name: Python tests (CPU + GPU)
      if: inputs.gpu == 'ON'
      shell: bash
      run: |
        conda install -y -q pytorch pytorch-cuda=11.8 -c pytorch -c nvidia/label/cuda-11.8.0
        pytest --junitxml=test-results/pytest/results.xml tests/test_*.py
        pytest --junitxml=test-results/pytest/results-torch.xml tests/torch_*.py
        cp tests/common_faiss_tests.py faiss/gpu/test
        pytest --junitxml=test-results/pytest/results-gpu.xml faiss/gpu/test/test_*.py
        pytest --junitxml=test-results/pytest/results-gpu-torch.xml faiss/gpu/test/torch_*.py
    - name: Test avx2 loading
      if: inputs.opt_level == 'avx2'
      shell: bash
      run: |
        # With AVX2 masked off, the Python loader must fall back to the generic faiss.so;
        # without the mask, the AVX2-specific faiss_avx2.so should be picked up instead.
        FAISS_DISABLE_CPU_FEATURES=AVX2 LD_DEBUG=libs $CONDA/bin/python -c "import faiss" 2>&1 | grep faiss.so
        LD_DEBUG=libs $CONDA/bin/python -c "import faiss" 2>&1 | grep faiss_avx2.so
    - name: Upload test results
      uses: actions/upload-artifact@v4.3.1
      with:
        name: test-results-${{ inputs.opt_level }}-${{ inputs.gpu }}-${{ inputs.raft }}
        path: test-results
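
This composite action is meant to be invoked from a workflow job; the PR title points at a single linux-x86_64-AVX2-cmake build job. A minimal caller sketch follows, assuming a hypothetical .github/workflows/build.yml that is not part of this diff (the workflow name, trigger, and runner label are illustrative only; gpu and raft are left at their OFF defaults):

# Hypothetical caller workflow (not part of this diff): .github/workflows/build.yml
name: Build
on: [push, pull_request]
jobs:
  linux-x86_64-AVX2-cmake:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Build and test (AVX2)
        uses: ./.github/workflows/actions/build_cmake
        with:
          opt_level: avx2
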
.github/workflows/actions/build_conda/action.yml (98 additions, 0 deletions)
@@ -0,0 +1,98 @@
name: Build conda
description: Build conda
inputs:
  label:
    description: "Label"
    default: ""
    required: false
  cuda:
    description: "cuda"
    default: ""
    required: false
  raft:
    description: "raft"
    default: ""
    required: false
  compiler_version:
    description: "compiler_version"
    default: ""
    required: false
runs:
  using: composite
  steps:
    - name: Choose shell
      shell: bash
      id: choose_shell
      run: |
        # if runner.os != 'Windows' use bash, else use pwsh
        if [ "${{ runner.os }}" != "Windows" ]; then
          echo "shell=bash" >> "$GITHUB_OUTPUT"
        else
          echo "shell=pwsh" >> "$GITHUB_OUTPUT"
        fi
    - name: Setup miniconda
      uses: conda-incubator/setup-miniconda@v3.0.3
      with:
        python-version: '3.11'
        miniconda-version: latest
    - name: Install conda build tools
      shell: ${{ steps.choose_shell.outputs.shell }}
      run: |
        # conda config --set solver libmamba
        # conda config --set verbosity 3
        conda update -y -q conda
        conda install -y -q conda-build
    - name: Enable anaconda uploads
      if: inputs.label != ''
      shell: ${{ steps.choose_shell.outputs.shell }}
      env:
        PACKAGE_TYPE: inputs.label
      run: |
        conda install -y -q anaconda-client
        conda config --set anaconda_upload yes
    - name: Conda build (CPU)
      if: inputs.label == '' && inputs.cuda == ''
      shell: ${{ steps.choose_shell.outputs.shell }}
      working-directory: conda
      run: |
        conda build faiss --python 3.11 -c pytorch
    - name: Conda build (CPU) w/ anaconda upload
      if: inputs.label != '' && inputs.cuda == ''
      shell: ${{ steps.choose_shell.outputs.shell }}
      working-directory: conda
      env:
        PACKAGE_TYPE: inputs.label
      run: |
        conda build faiss --user pytorch --label ${{ inputs.label }} -c pytorch
    - name: Conda build (GPU)
      if: inputs.label == '' && inputs.cuda != '' && inputs.raft == ''
      shell: ${{ steps.choose_shell.outputs.shell }}
      working-directory: conda
      run: |
        conda build faiss-gpu --variants '{ "cudatoolkit": "${{ inputs.cuda }}", "c_compiler_version": "${{ inputs.compiler_version }}", "cxx_compiler_version": "${{ inputs.compiler_version }}" }' \
          -c pytorch -c nvidia/label/cuda-${{ inputs.cuda }} -c nvidia
    - name: Conda build (GPU) w/ anaconda upload
      if: inputs.label != '' && inputs.cuda != '' && inputs.raft == ''
      shell: ${{ steps.choose_shell.outputs.shell }}
      working-directory: conda
      env:
        PACKAGE_TYPE: inputs.label
      run: |
        conda build faiss-gpu --variants '{ "cudatoolkit": "${{ inputs.cuda }}", "c_compiler_version": "${{ inputs.compiler_version }}", "cxx_compiler_version": "${{ inputs.compiler_version }}" }' \
          --user pytorch --label ${{ inputs.label }} -c pytorch -c nvidia/label/cuda-${{ inputs.cuda }} -c nvidia
    - name: Conda build (GPU w/ RAFT)
      if: inputs.label == '' && inputs.cuda != '' && inputs.raft != ''
      shell: ${{ steps.choose_shell.outputs.shell }}
      working-directory: conda
      run: |
        conda build faiss-gpu-raft --variants '{ "cudatoolkit": "${{ inputs.cuda }}", "c_compiler_version": "${{ inputs.compiler_version }}", "cxx_compiler_version": "${{ inputs.compiler_version }}" }' \
          -c pytorch -c nvidia/label/cuda-${{ inputs.cuda }} -c nvidia -c rapidsai -c rapidsai-nightly -c conda-forge
    - name: Conda build (GPU w/ RAFT) w/ anaconda upload
      if: inputs.label != '' && inputs.cuda != '' && inputs.raft != ''
      shell: ${{ steps.choose_shell.outputs.shell }}
      working-directory: conda
      env:
        PACKAGE_TYPE: inputs.label
      run: |
        conda build faiss-gpu-raft --variants '{ "cudatoolkit": "${{ inputs.cuda }}", "c_compiler_version": "${{ inputs.compiler_version }}", "cxx_compiler_version": "${{ inputs.compiler_version }}" }' \
          --user pytorch --label ${{ inputs.label }} -c pytorch -c nvidia/label/cuda-${{ inputs.cuda }} -c nvidia -c rapidsai -c rapidsai-nightly -c conda-forge
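
As with build_cmake, this action is designed to be called from a packaging workflow. A minimal sketch of a nightly GPU caller job follows; the workflow, job, and secret names plus the cuda and compiler_version values are assumptions, not part of this diff. With label set, cuda set, and raft left empty, this would exercise the "Conda build (GPU) w/ anaconda upload" path above, and the upload needs credentials for the anaconda-client configured in that path:

# Hypothetical caller job (not part of this diff)
jobs:
  linux-x86_64-GPU-packages-nightly:
    runs-on: ubuntu-latest
    env:
      ANACONDA_API_TOKEN: ${{ secrets.ANACONDA_API_TOKEN }}  # assumed secret used for uploads
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/workflows/actions/build_conda
        with:
          label: nightly
          cuda: "11.8.0"
          compiler_version: "11.2"
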