[MAINT] using tox to test in different environments #4029

Merged: 51 commits, Nov 21, 2023.

Changes shown are from 50 of the 51 commits.

Commits
All commits by Remi-Gau.

30b6800 try using tox (Oct 4, 2023)
3999d67 fix (Oct 4, 2023)
9623301 remove old test workflow (Oct 4, 2023)
b075425 [DATALAD] Recorded changes (Oct 4, 2023)
8ae39c2 skip doc test for now (Oct 4, 2023)
ef4bdfa rm github env (Oct 4, 2023)
71088a0 skip test doc (Oct 4, 2023)
baaa385 fix typo (Oct 4, 2023)
de69216 change name steps (Oct 4, 2023)
c265804 rename (Oct 4, 2023)
7aabbc0 typo (Oct 4, 2023)
60caea4 do not use conda (Oct 4, 2023)
06e82aa test doc (Oct 4, 2023)
fc99673 add set up tools (Oct 4, 2023)
b733de1 install setuptools (Oct 4, 2023)
c8970e3 move build type check (Oct 4, 2023)
38419c9 try some other tests first (Oct 5, 2023)
f4dbbaa Merge remote-tracking branch 'upstream/main' into tox (Oct 5, 2023)
82028cd remove testing wxorkflow (Oct 5, 2023)
50a95b1 add comment (Oct 5, 2023)
28eac9f parametrize tests (Oct 5, 2023)
8cf308e use extras (Oct 5, 2023)
68d4bdb flake8 (Oct 5, 2023)
e79fd22 try setting username for failing tests (Oct 5, 2023)
81176b3 skip restore on partial builds (Oct 5, 2023)
37007fd refactor (Oct 5, 2023)
14eba97 simplify linting by relying on pre-commit (Oct 6, 2023)
422b2c8 Merge remote-tracking branch 'upstream/main' into tox (Oct 6, 2023)
3bcabfb update doc (Oct 6, 2023)
d5aacde typo (Oct 6, 2023)
efe3ac7 fix fixture (Oct 6, 2023)
c5ed99b Update doc/maintenance.rst (Oct 6, 2023)
01c37aa pass env variables and refactor (Oct 6, 2023)
b7d4772 rm fixture (Oct 6, 2023)
18a8ebb refactor and get coverage in all cases (Oct 6, 2023)
0815ccc Apply suggestions from code review (Oct 6, 2023)
d066147 modify commands to get coverage in all env (Oct 6, 2023)
4cfeb98 add description (Oct 6, 2023)
6cb5e8f fix typo (Oct 6, 2023)
333d0a5 update names (Oct 6, 2023)
3b6a3ca do not use 3.8 for testing with latest (Oct 7, 2023)
c4f5645 allow to pass extra arguments to test env (Oct 7, 2023)
92739db Update .github/workflows/build-docs.yml (Oct 12, 2023)
d5fc04c run test by calling pytest directly (Oct 19, 2023)
497ac61 try to fix coverage (Oct 19, 2023)
747ec9f test python 3.8 with latest dependencies (Oct 24, 2023)
b68709a rm coverage.xml (Oct 24, 2023)
6d2e3ee Merge remote-tracking branch 'upstream/main' into tox (Nov 17, 2023)
9bad1b9 try tox coniditional kaleido install (Nov 20, 2023)
7abc33a update changelog (Nov 20, 2023)
e136ccd add comment in tox.ini (Nov 21, 2023)
17 changes: 9 additions & 8 deletions .github/workflows/README.md
@@ -16,7 +16,7 @@ Automatically comments on a newly open pull request to provide some guidelines,

### black.yml

-Runs black code formatter on the codebase both in pull requests and on main. Configurations can be found in [pyproject.toml](/pyproject.toml).
+Runs black code formatter on the codebase both in pull requests and on main. Configurations can be found in [pyproject.toml](../../pyproject.toml).
Member: why change the paths? It takes nilearn main as root.

Collaborator (author): I tried to use the link locally (clicking it in VS Code) and it failed, so I modified them to make them work.

I assumed that they would also fail on GitHub, but you are correct that they work on the main branch.

Just to check: they also work on the branch for this PR, so changing them should not affect people browsing the repo.

https://github.com/Remi-Gau/nilearn/blob/tox/.github/workflows/README.md
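As the exchange notes, a root-relative link such as `[pyproject.toml](/pyproject.toml)` resolves from the repository root when rendered on github.com but failed when opened locally, whereas `[pyproject.toml](../../pyproject.toml)` resolves relative to the README file itself and works in both settings.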


## Building the development documentation

@@ -76,7 +76,7 @@ Runs only if the workflow in `build-docs.yml` completes successfully. Triggers t

### [.circleci/config.yml](/.circleci/config.yml)

-Artifacts hosting and deployment of development docs use CircleCI. See [.circleci/README.md](/.circleci/README.md) for details.
+Artifacts hosting and deployment of development docs use CircleCI. See [.circleci/README.md](../../.circleci/README.md) for details.
On a pull request, only the "host" job is run. The artifacts can then be accessed from the `host_and_deploy_doc` workflow seen under the checks list: click on "Details", then on the "host_docs" link on the page that opens. From there, the artifacts tab lists all the HTML files; clicking any of them lets you navigate the built pages normally.
With a merge on main, both "host" and "deploy" jobs are run.

@@ -92,34 +92,35 @@ It works by calling pytest with an environment variable that will trigger a pyte

### codespell.yml

-Checks for spelling errors. Configured in [pyproject.toml](/pyproject.toml). More information here: https://github.com/codespell-project/actions-codespell
+Checks for spelling errors. Configured in [pyproject.toml](../../pyproject.toml). More information here: https://github.com/codespell-project/actions-codespell

## PEP8 check

### flake8.yml

-Uses flake8 tool to verify code is PEP8 compliant. Configured in [.flake8](/.flake8)
+Uses flake8 tool to verify code is PEP8 compliant. Configured in [.flake8](../../.flake8)

## f strings

### f_strings.yml

Checks for f strings in the codebase with [flynt](https://pypi.org/project/flynt/).
-Configured in [pyproject.toml](/pyproject.toml)
+Configured in [pyproject.toml](../../pyproject.toml)
Flynt will check whether it can automatically convert "format" or "%" strings to "f strings".
This workflow will fail if it finds any potential target to be converted.

## Sort imports automatically

### isort.yml

-Sorts Python imports alphabetically and by section. Configured in [pyproject.toml](/pyproject.toml)
+Sorts Python imports alphabetically and by section. Configured in [pyproject.toml](../../pyproject.toml)

## Running unit tests

-### testing.yml
+### test_with_tox.yml

Runs pytest in several environments, covering multiple Python and dependency versions as well as different operating systems.
All environments are defined in [tox.ini](../../tox.ini).
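As a rough illustration of how these environments can be exercised locally (the env names below are hypothetical; the repository's tox.ini is the authoritative list):

```bash
pip install tox

# Enumerate the environments defined in tox.ini
tox list

# Run the test suite in one specific environment
tox -e latest

# Forward extra arguments after "--" to pytest inside the env,
# assuming the env's command uses {posargs}
tox -e latest -- nilearn/tests -x
```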

## Test installation

142 changes: 67 additions & 75 deletions .github/workflows/build-docs.yml
@@ -17,8 +17,8 @@ on:

jobs:

# Make sure citation metadata from CITATION.cff is valid,
# as it is used in the documentation build.
validate_cff:
if: github.repository == 'nilearn/nilearn'
runs-on: ubuntu-latest
@@ -30,11 +30,11 @@
with:
args: --validate

# Steps to build the documentation.
build_docs:
needs: validate_cff
# This prevents this workflow from running on a fork.
# To test this workflow on a fork, uncomment the following line.
if: github.repository == 'nilearn/nilearn'
runs-on: ubuntu-latest
timeout-minutes: 360
@@ -45,18 +45,19 @@
defaults:
run:
shell: bash -el {0}
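# bash flags: -e aborts the step on the first failing command, -l runs a
# login shell so profile initialization applies before each step's commands.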

steps:
- name: Source caching
uses: actions/cache@v3
with:
path: .git
key: source-cache-${{ runner.os }}-${{ github.run_id }}
-restore-keys: |
-source-cache-${{ runner.os }}
+restore-keys: source-cache-${{ runner.os }}
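# restore-keys acts as a fallback prefix: if nothing matches the exact
# source-cache-<os>-<run_id> key, the most recent cache whose key starts
# with source-cache-<os> is restored instead.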

- name: Checkout nilearn
uses: actions/checkout@v4
with:
# If pull request, checkout HEAD commit with all commit history
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Complete checkout
@@ -77,23 +78,36 @@
git pull --ff-only upstream "refs/pull/$(cat merge.txt)";
fi

# Set up environment
- name: Install apt packages
run: |
-./build_tools/github/build_docs_apt_dependencies.sh
-- name: Setup conda
-uses: conda-incubator/setup-miniconda@v2
+sudo -E apt-get -yq update
+sudo -E apt-get -yq --no-install-suggests --no-install-recommends install \
+dvipng texlive-latex-base texlive-latex-extra
+- name: Setup python
+uses: actions/setup-python@v4
with:
-auto-activate-base: true
-activate-environment: ''
-miniconda-version: latest
-channels: conda-forge
-- name: Install packages in conda env
+python-version: '3.12'
+# Install the local version of the library, along with both standard and testing-related dependencies
+# The `doc` dependency group is included because the build_docs job uses this script.
+# See pyproject.toml for dependency group options
+- name: Install packages
run: |
-./build_tools/github/build_docs_dependencies.sh
+python -m pip install --user --upgrade pip setuptools
+python -m pip install .[plotting,doc]

+# Check if we are doing a full or partial build
+- name: Find build type
+run: ./build_tools/github/build_type.sh
+env:
+COMMIT_SHA: ${{ github.event.pull_request.head.sha }}
+- name: Verify build type
+run: |
+echo "PATTERN = $(cat pattern.txt)"
+echo "BUILD = $(cat build.txt)"

# Restore data from cache, or set up caching if data not found or
# [force download] is defined in commit message
- name: Determine restore data
run: |
commit_msg=$(git log -2 --format=oneline);
@@ -105,6 +119,12 @@
echo "Data cache will be used if available.";
echo "true" | tee restore.txt;
fi

if [[ "$(cat build.txt)" == "ci-html-noplot" ]]; then
echo "Data cache will not be used for partial builds.";
echo "false" | tee restore.txt;
fi
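# restore.txt is a flag consumed further down: "true" lets the dataset
# caches below be restored, "false" (partial builds or [force download])
# skips them, presumably via the restore output of the "Get cache key" step.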

- name: Get cache key
id: cache-key
run: |
@@ -134,173 +154,145 @@
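# The dataset caches below are keyed as v1-<dataset>-${{ hashFiles('week_num') }};
# assuming the "Get cache key" step writes the current week number to week_num,
# each dataset is re-downloaded at most once per week.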
uses: actions/cache@v3
# ~45 MB
with:
-path: |
-nilearn_data/adhd
+path: nilearn_data/adhd
key: v1-adhd-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~32 MB
with:
-path: |
-nilearn_data/allen_rsn_2011
+path: nilearn_data/allen_rsn_2011
key: v1-allen_rsn_2011-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~66 MB
with:
-path: |
-nilearn_data/brainomics_localizer
+path: nilearn_data/brainomics_localizer
key: v1-brainomics_localizer-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~373 MB
with:
-path: |
-nilearn_data/development_fmri
+path: nilearn_data/development_fmri
key: v1-development_fmri-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~832 MB
with:
-path: |
-nilearn_data/ds000030
+path: nilearn_data/ds000030
key: v1-ds000030-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~82 MB
with:
-path: |
-nilearn_data/fiac_nilearn.glm
+path: nilearn_data/fiac_nilearn.glm
key: v1-fiac_nilearn.glm-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~750 MB
with:
-path: |
-nilearn_data/fMRI-language-localizer-demo-dataset
+path: nilearn_data/fMRI-language-localizer-demo-dataset
key: v1-fMRI-language-localizer-demo-dataset-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~30 MB
with:
-path: |
-nilearn_data/fsl
+path: nilearn_data/fsl
key: v1-fsl-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~302 MB
with:
-path: |
-nilearn_data/haxby2001
+path: nilearn_data/haxby2001
key: v1-haxby2001-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~63 MB
with:
-path: |
-nilearn_data/icbm152_2009
+path: nilearn_data/icbm152_2009
key: v1-icbm152_2009-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~105 MB
with:
-path: |
-nilearn_data/jimura_poldrack_2012_zmaps
+path: nilearn_data/jimura_poldrack_2012_zmaps
key: v1-jimura_poldrack_2012_zmaps-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~35 MB
with:
-path: |
-nilearn_data/localizer_first_level
+path: nilearn_data/localizer_first_level
key: v1-localizer_first_level-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~181 MB
with:
-path: |
-nilearn_data/miyawaki2008
+path: nilearn_data/miyawaki2008
key: v1-miyawaki2008-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~85 MB
with:
-path: |
-nilearn_data/nki_enhanced_surface
+path: nilearn_data/nki_enhanced_surface
key: v1-nki_enhanced_surface-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~913 MB
with:
-path: |
-nilearn_data/oasis1
+path: nilearn_data/oasis1
key: v1-oasis1-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~171 MB
with:
-path: |
-nilearn_data/smith_2009
+path: nilearn_data/smith_2009
key: v1-smith_2009-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~230 MB
with:
-path: |
-nilearn_data/spm_auditory
+path: nilearn_data/spm_auditory
key: v1-spm_auditory-${{ hashFiles('week_num') }}
- name: Data caching
if: steps.cache-key.outputs.restore == 'true'
uses: actions/cache@v3
# ~227 MB
with:
-path: |
-nilearn_data/spm_multimodal_fmri
+path: nilearn_data/spm_multimodal_fmri
key: v1-spm_multimodal_fmri-${{ hashFiles('week_num') }}

# Update the authors file and the names file
# in case a contributor has been added to citation.cff
# but did not run the maint_tools/citation_cff_maint.py script.
- name: update AUTHORS.rst and doc/changes/names.rst
run: python maint_tools/citation_cff_maint.py

-# Run the doc build. If no data is restored in previous steps, the data
-# will be downloaded during the build (this only applies for full builds;
-# no data is downloaded for partial builds).
-- name: Find build type
-run: |
-./build_tools/github/build_type.sh
-env:
-COMMIT_SHA: ${{ github.event.pull_request.head.sha }}
-- name: Verify build type
-run: |
-echo "PATTERN = $(cat pattern.txt)"
-echo "BUILD = $(cat build.txt)"
# Set up and launch a virtual browser needed for one example to run
# without stalling the job. The example launches an html in the browser.
- name: Set up display server for virtual browser
run: |
Xvfb -ac :99 -screen 0 1280x1024x16 > /dev/null 2>&1 &
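# Xvfb provides an in-memory X11 display (:99) so the browser-launching example
# can run headless; output is discarded and the server runs in the background.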

# Run the doc build. If no data is restored in previous steps, the data
# will be downloaded during the build (this only applies for full builds;
# no data is downloaded for partial builds).
- name: Build docs
run: |
source activate testenv
echo "Conda active env = $CONDA_DEFAULT_ENV";
cd doc;
set -o pipefail;
PATTERN=$(cat ../pattern.txt) make $(cat ../build.txt) 2>&1 | tee log.txt;
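# set -o pipefail makes this step fail when make fails, even though the
# output is piped through tee into log.txt.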