
Add scatter CI #350

Merged 14 commits on Dec 22, 2023
Conversation

edmundmiller
Collaborator

@edmundmiller edmundmiller commented Oct 19, 2023

Uses a GitHub runner per test to significantly speed up CI times.

It does break the flaky index tests.

@edmundmiller edmundmiller changed the base branch from master to dev October 19, 2023 23:03
@github-actions

github-actions bot commented Oct 19, 2023

nf-core lint overall result: Failed ❌

Posted for pipeline commit 9d88d67

- ✅ 151 tests passed
- ❔ 4 tests were ignored
- ❌ 4 tests failed

❌ Test failures:

❔ Tests ignored:

✅ Tests passed:

Run details

  • nf-core/tools version 2.11.1
  • Run at 2023-12-22 21:01:56

@nf-core nf-core deleted a comment from github-actions bot Oct 19, 2023
@edmundmiller edmundmiller self-assigned this Oct 20, 2023
@edmundmiller edmundmiller marked this pull request as ready for review October 20, 2023 01:10
@edmundmiller edmundmiller requested a review from a team as a code owner October 20, 2023 01:10
Contributor

@sateeshperi sateeshperi left a comment


let's go and if we encounter any issues i'm sure we can deal with them

auto-merge was automatically disabled November 7, 2023 03:08

Pull Request is not mergeable

@edmundmiller
Collaborator Author

Sounds good, once I get the tests passing in this PR at least 😆

Resolved review threads (outdated):

- tests/pipeline/bismark/main.nf.test (two threads)
- tests/pipeline/bismark_hisat/main.nf.test (two threads)
- tests/pipeline/bwameth/main.nf.test (two threads)
Contributor

@adamrtalbot adamrtalbot left a comment


nf-test list is nice but running an entire job to find the *.test files feels very wasteful.

Resolved review thread (outdated): subworkflows/local/prepare_genome.nf
Comment on lines +69 to +76
```yaml
- name: Cache Nextflow installation
  id: cache-software
  uses: actions/cache@v3
  with:
    path: test-datasets/
    key: ${{ steps.hash_workspace.outputs.digest }}

- name: Check out test data
  uses: actions/checkout@v3
  with:
    repository: nf-core/test-datasets
    ref: methylseq
    path: test-datasets/

- name: Replace remote paths in samplesheets
  run: |
    for f in ./test-datasets/samplesheet/*csv; do
      sed -i 's=https://github.com/nf-core/test-datasets/raw/methylseq/=./test-datasets/=g' $f
      sed -i 's=https://raw.githubusercontent.com/nf-core/test-datasets/methylseq/=./test-datasets/=g' $f
      echo "========== $f ============"
      cat $f
      echo "========================================"
    done;

# (collapsed hunk from the same diff view)
    path: |
      /usr/local/bin/nf-test
      /home/runner/.nf-test/nf-test.jar
    key: methylseq-${{ runner.os }}-${{ matrix.NXF_VER }}
```
Contributor


Let's add Nextflow to this cache as well.

Comment on lines +34 to +35
```yaml
- name: Setup Nextflow
  uses: nf-core/setup-nextflow@v1
```
Contributor


We're going to hit the API limits again, I think we should add Nextflow to the software cache (see other comment).

```yaml
- name: List nf-test
  id: list
  run: |
    echo "tests=$(nf-test list --silent --format=json)" >> "$GITHUB_OUTPUT"
```
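For context, the value written here becomes a step output that a later job can consume via `fromJson(...)` in its matrix. A runnable sketch of the `$GITHUB_OUTPUT` mechanism itself; the JSON list is simulated, since `nf-test` may not be installed locally:

```shell
# Simulate the "List nf-test" step. In a real run GitHub Actions provides
# $GITHUB_OUTPUT as a file; key=value lines written to it become step outputs.
GITHUB_OUTPUT="$(mktemp)"

# Stand-in for: nf-test list --silent --format=json
tests='["tests/pipeline/bismark/main.nf.test","tests/pipeline/bwameth/main.nf.test"]'

echo "tests=$tests" >> "$GITHUB_OUTPUT"

# A downstream job would read this as ${{ needs.list.outputs.tests }}
# and expand it with fromJson(...) into matrix entries.
cat "$GITHUB_OUTPUT"
```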
Contributor


I didn't know about this...

Contributor


Could you run this at dev time instead of in the action? It seems wasteful to create an entire job that runs this every single time when it only changes when you run the test.

Contributor


We could create an nf-test pre-commit...

Collaborator Author


That's an option, but how is this worse than checking for changes in modules?

We'll also be adding a change check to this job, so that CI jobs only run for changed code.

```diff
 - name: Run nf-test
   run: |
-    nf-test test --tag ${{ matrix.test_tags }} --profile "test,${{ matrix.profile }}" --junitxml=test.xml
+    nf-test test \
+      --profile="test,${{ matrix.profile }}" \
```
Contributor


I'm still tempted to group conda, singularity, and docker onto the same job to make things more efficient. Sure, they would run sequentially, which slows things down, but we'd cut our job load on GHA to a third. Food for thought, not necessary right now.

Collaborator Author


Here's my argument against that: I don't think efficiency is the only thing to optimize for here.

It's a developer tool, and I think that would confuse pipeline developers and new contributors.

It's also the same amount of wall time if you don't include the setup. I'd rather spend our time optimizing the setup and caching properly to cut down on that.

The other thing splitting up the jobs allows us to do is fail fast. If docker and singularity pass and then conda fails a checksum, we'd have to rerun all three instead of just conda. So we end up using 3x the wall time anyway whenever CI doesn't work perfectly the first time.

Contributor


You don't seem to have any check that could be used for branch protection?

Collaborator Author


Good catch!

edmundmiller and others added 2 commits December 22, 2023 06:39
Co-authored-by: Adam Talbot <12817534+adamrtalbot@users.noreply.github.com>
Co-authored-by: Sateesh_Peri <33637490+sateeshperi@users.noreply.github.com>
edmundmiller and others added 3 commits December 22, 2023 10:34
Again

Co-authored-by: Sateesh_Peri <33637490+sateeshperi@users.noreply.github.com>
Co-authored-by: sateeshperi <sateeshperi@users.noreply.github.com>