Commit

Make CIs obey environment.yaml and ignore useless steps (#582)
* CI Dockerfile uses environment.yaml. Ignore useless CI steps when using dockerized queuing system

* Fix Docker build with environment.yml and add docs
guillaumeeb committed Aug 30, 2022
1 parent fbcc555 commit 2236edb
Showing 13 changed files with 89 additions and 18 deletions.
1 change: 1 addition & 0 deletions .github/workflows/build-docker-images.yaml
@@ -26,6 +26,7 @@ jobs:
shell: bash -l {0}
run: |
cd ./ci/${{ matrix.jobqueue }}
cp ../environment.yml environment.yml
docker-compose build
- name: List images
run: |
11 changes: 7 additions & 4 deletions .github/workflows/ci.yaml
@@ -22,25 +22,28 @@ jobs:
- name: Checkout source
uses: actions/checkout@v2

- name: Setup Conda Environment
- name: Setup Empty Conda Environment with Mamba
if: matrix.jobqueue == 'none'
uses: conda-incubator/setup-miniconda@v2
with:
channels: conda-forge
mamba-version: "*"
activate-environment: dask-jobqueue
auto-activate-base: false

- name: Setup conda environment
- name: Setup dask-jobqueue conda environment
if: matrix.jobqueue == 'none'
run: |
mamba env update -f ci/environment.yml
mamba list
- name: Setup
- name: Setup Job queuing system
if: matrix.jobqueue != 'none'
run: |
source ci/${{ matrix.jobqueue }}.sh
jobqueue_before_install
- name: Install
- name: Install dask-jobqueue
run: |
source ci/${{ matrix.jobqueue }}.sh
jobqueue_install
6 changes: 5 additions & 1 deletion .gitignore
@@ -14,4 +14,8 @@ log
.cache/
.pytest_cache
docs/source/generated
dask-worker-space/
dask-worker-space/
ci/slurm/environment.yml
ci/pbs/environment.yml
ci/sge/environment.yml
ci/htcondor/environment.yml
1 change: 1 addition & 0 deletions ci/htcondor.sh
@@ -7,6 +7,7 @@ function jobqueue_before_install {
# start htcondor cluster
cd ./ci/htcondor
docker-compose pull
cp ../environment.yml environment.yml
docker-compose build
./start-htcondor.sh
docker-compose exec -T submit /bin/bash -c "condor_status"
4 changes: 3 additions & 1 deletion ci/htcondor/Dockerfile
@@ -5,7 +5,9 @@ RUN curl -o miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-L
/opt/anaconda/bin/conda clean -tipy && \
rm -f miniconda.sh
ENV PATH /opt/anaconda/bin:$PATH
RUN conda install --yes -c conda-forge python=3.8 dask distributed flake8 pytest pytest-asyncio
# The environment.yml file is copied by the CI script. If building manually, copy it from the parent directory first.
COPY environment.yml .
RUN conda env update -n base --file environment.yml

FROM htcondor/execute:el7 as execute

1 change: 1 addition & 0 deletions ci/pbs.sh
@@ -7,6 +7,7 @@ function jobqueue_before_install {
# start pbs cluster
cd ./ci/pbs
docker-compose pull
cp ../environment.yml environment.yml
docker-compose build
./start-pbs.sh
cd -
4 changes: 3 additions & 1 deletion ci/pbs/Dockerfile
@@ -30,7 +30,9 @@ RUN curl -o miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-L
bash miniconda.sh -f -b -p /opt/anaconda && \
/opt/anaconda/bin/conda clean -tipy && \
rm -f miniconda.sh
RUN conda install --yes -c conda-forge python=3.8 dask distributed flake8 pytest pytest-asyncio
# The environment.yml file is copied by the CI script. If building manually, copy it from the parent directory first.
COPY environment.yml .
RUN conda env update -n base --file environment.yml

# Copy entrypoint and other needed scripts
COPY ./*.sh /
1 change: 1 addition & 0 deletions ci/sge.sh
@@ -7,6 +7,7 @@ function jobqueue_before_install {
# start sge cluster
cd ./ci/sge
docker-compose pull
cp ../environment.yml environment.yml
docker-compose build
./start-sge.sh
cd -
5 changes: 3 additions & 2 deletions ci/sge/Dockerfile
@@ -9,8 +9,9 @@ RUN curl -o miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-L
/opt/anaconda/bin/conda clean -tipy && \
rm -f miniconda.sh
ENV PATH /opt/anaconda/bin:$PATH
ARG PYTHON_VERSION
RUN conda install -c conda-forge python=$PYTHON_VERSION dask distributed pytest pytest-asyncio && conda clean -tipy
# The environment.yml file is copied by the CI script. If building manually, copy it from the parent directory first.
COPY environment.yml .
RUN conda env update -n base --file environment.yml

COPY ./*.sh /
COPY ./*.txt /
4 changes: 0 additions & 4 deletions ci/sge/docker-compose.yml
@@ -7,8 +7,6 @@ services:
build:
context: .
target: master
args:
PYTHON_VERSION: 3.8
container_name: sge_master
hostname: sge_master
#network_mode: host
@@ -21,8 +19,6 @@
build:
context: .
target: slave
args:
PYTHON_VERSION: 3.8
container_name: slave_one
hostname: slave_one
#network_mode: host
1 change: 1 addition & 0 deletions ci/slurm.sh
@@ -7,6 +7,7 @@ function jobqueue_before_install {
# start slurm cluster
cd ./ci/slurm
docker-compose pull
cp ../environment.yml environment.yml
docker-compose build
./start-slurm.sh
cd -
4 changes: 3 additions & 1 deletion ci/slurm/Dockerfile
@@ -7,7 +7,9 @@ RUN curl -o miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-L
/opt/anaconda/bin/conda clean -tipy && \
rm -f miniconda.sh
ENV PATH /opt/anaconda/bin:$PATH
RUN conda install --yes -c conda-forge python=3.8 dask distributed flake8 pytest pytest-asyncio
# The environment.yml file is copied by the CI script. If building manually, copy it from the parent directory first.
COPY environment.yml .
RUN conda env update -n base --file environment.yml

ENV LC_ALL en_US.UTF-8

64 changes: 60 additions & 4 deletions docs/source/develop.rst
@@ -32,15 +32,15 @@ When you’re done making changes, check that your changes pass flake8 checks and

To get flake8 and black, just pip install them. You can also use pre-commit to add them as pre-commit hooks.
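A minimal sketch of that setup might look like the following (this assumes the repository already ships a ``.pre-commit-config.yaml``; the exact hook configuration is the project's, not shown here):

```shell
# Install the linting and formatting tools
pip install flake8 black pre-commit

# Register them as git pre-commit hooks (runs them on every commit)
pre-commit install

# Or check the whole source tree manually
flake8 dask_jobqueue
black --check dask_jobqueue
```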

Test
----
Test without a Job scheduler
----------------------------

Test using ``pytest``::

pytest dask_jobqueue --verbose

Test with Job scheduler
-----------------------
Test with a dockerized Job scheduler
------------------------------------

Some tests require a fully functional job queue cluster to be running; this
is provided through the Docker_ and `Docker compose`_ tools. You must thus have them
@@ -57,3 +57,59 @@ For example for PBS you can run::
.. _Docker: https://www.docker.com/
.. _`Docker compose`: https://docs.docker.com/compose/

Building the Docker Images
--------------------------

Under the hood, the CI commands use or build Docker images.
You can also build these Docker images on your local computer if you need to update them.

For Slurm for example::

cd ci/slurm
cp ../environment.yml environment.yml  # The Dockerfile needs the reference Conda environment file in its build context
docker-compose build

You might want to stop your dockerized cluster and refresh the build if you have done this previously::

docker-compose down
docker-compose build --no-cache

Testing without CI scripts
--------------------------

You can also manually launch tests against the dockerized job schedulers (without the CI commands),
for a better understanding of what is going on.
This is basically a simplified version of what is done in the ci/*.sh files.
For example with Slurm::

cd ci/slurm
docker-compose pull
# Start a Slurm dockerized cluster
./start-slurm.sh  # which runs docker-compose up -d --no-build
# Install dask-jobqueue in Docker container
docker exec slurmctld /bin/bash -c "cd /dask-jobqueue; pip install -e ."
# Run the tests for slurm
docker exec slurmctld /bin/bash -c "pytest /dask-jobqueue/dask_jobqueue --verbose -E slurm -s"

You can then shutdown the dockerized cluster and remove all the containers from your computer::

docker-compose down

Test on a real Job queuing system
---------------------------------

If you have installed dask-jobqueue on an HPC center with a working job scheduler,
you can also launch the tests requiring one from there.
Those are the tests decorated with the ``@pytest.mark.env("scheduler-name")`` marker.

With a scheduler "cluster-name" (which needs to match the name given to ``pytest.mark.env``)::

pytest dask_jobqueue -E <cluster-name>

So for example with a Slurm cluster::

pytest dask_jobqueue -E slurm
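The ``-E`` option selects tests by that marker; as an illustration, a hypothetical scheduler-specific test could be written as follows (the test name and body are made up, only the marker usage reflects the convention described above):

```python
import pytest


# Hypothetical test: only collected when pytest is invoked with `-E slurm`
@pytest.mark.env("slurm")
def test_job_submission_on_slurm():
    # ... test body interacting with the Slurm scheduler would go here ...
    pass
```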

Note that this last feature has not been thoroughly tested, and you might run into timeout
issues or other unexpected failures depending on your Job Scheduler configuration and load.
