Bff #8653


Open · wants to merge 37 commits into master

Conversation

LuisHeinzlmeier (Contributor)

  • Generating cell hashing calls from a matrix of count data by calling GenerateCellHashingCalls (a minimal sketch follows this list)
  • This module was created to be included in the hadge pipeline
  • Currently the module does not work for CLUSTER, probably because the test data is too small
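
For orientation, here is a minimal sketch of what such a module could look like. The process name, label, output file names, and the exact cellhashR arguments are assumptions for illustration, not the code in this PR:

```nextflow
// Hypothetical nf-core-style module wrapping cellhashR's GenerateCellHashingCalls.
process CELLHASHR_BFF {
    tag "$meta.id"
    label 'process_medium'

    input:
    tuple val(meta), path(hto_matrix)

    output:
    tuple val(meta), path("*.bff_calls.csv"), emit: calls
    path "versions.yml"                     , emit: versions

    script:
    """
    #!/usr/bin/env Rscript
    library(cellhashR)

    # Read the raw HTO count matrix, then assign barcodes to hashtags with BFF
    counts <- ProcessCountMatrix(rawCountData = "${hto_matrix}")
    calls  <- GenerateCellHashingCalls(barcodeMatrix = counts, methods = c("bff_raw"))
    write.csv(calls, file = "${meta.id}.bff_calls.csv", row.names = FALSE)

    # Record the tool version for nf-core provenance
    writeLines(paste0("cellhashR: ", packageVersion("cellhashR")), "versions.yml")
    """
}
```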

LuisHeinzlmeier marked this pull request as ready for review on June 17, 2025, 08:48
LuisHeinzlmeier (Contributor, Author)

@nictru, I don't understand why I get this error:

ERROR ~ No such file or directory: https://raw.githubusercontent.com/nf-core/test-datasets/modules/data/genomics/homo_sapiens/10xgenomics/cellranger/hashing_demultiplexing/hto

itrujnara (Contributor) left a comment:

Looks good. Just a few minor remarks to consider.

LuisHeinzlmeier (Contributor, Author)

@nictru, why are htodemux changes also included here?

nictru (Contributor) commented Jun 17, 2025

> @nictru, why are htodemux changes also included here?

This was because the branch in your fork was not up to date with the main branch of the nf-core repo. As you can see, it is fixed now that the branch has been updated.

nictru (Contributor) commented Jun 17, 2025

@LuisHeinzlmeier, your `No such file or directory` error occurs because URLs can only ever point to files, never to directories. While Nextflow can handle directories as inputs, it cannot resolve them when they are referenced via URL.

The same issue was already encountered and fixed in one of the other demultiplexing modules that reference the same data, as you can see here.
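
To illustrate the pattern: instead of the directory URL, the individual files behind it can be passed, and Nextflow stages them together into the work directory. The file names below follow the usual 10x triplet and are assumptions about the test-data layout:

```nextflow
// Sketch of an nf-test input block; a URL must point at a concrete file,
// so each file behind the directory URL is listed explicitly.
def base = 'https://raw.githubusercontent.com/nf-core/test-datasets/modules/data/genomics/homo_sapiens/10xgenomics/cellranger/hashing_demultiplexing/hto'

input[0] = [
    [ id: 'test' ],
    [
        file("${base}/barcodes.tsv.gz", checkIfExists: true),
        file("${base}/features.tsv.gz", checkIfExists: true),
        file("${base}/matrix.mtx.gz",   checkIfExists: true)
    ]
]
```

(In an nf-test `when` block, `input[0]` is the module's first input channel.)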

By the way, thanks @itrujnara for the review.

@LuisHeinzlmeier, if you are wondering what Igor means by Harshil alignment, you can read about it here.

nictru mentioned this pull request on Jun 17, 2025
LuisHeinzlmeier (Contributor, Author)

@itrujnara, thanks for the review! I have incorporated the feedback.

And @nictru, thank you very much for the help! I can now add the directory URL as input, and the test runs locally (snapshots are up to date). However, I get an error for Docker only. I think it may arise because Seurat v4 is loaded together with SeuratObject v5; according to the Seqera container build, Seurat v5 is incompatible with cellhashR.

itrujnara (Contributor)

> @itrujnara, thanks for the review! I have incorporated the feedback.
>
> And @nictru, thank you very much for the help! I can now add the directory URL as input, and the test runs locally (snapshots are up to date). However, I get an error for Docker only. I think it may arise because Seurat v4 is loaded together with SeuratObject v5; according to the Seqera container build, Seurat v5 is incompatible with cellhashR.

The error originates from the OS (it's a SIGKILL), so it is probably not a Seurat issue. It is more likely that the script runs out of memory (OOM) or executes an illegal instruction, but from the CI output alone it's hard to tell which is the case.
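
As a side note, an OOM kill typically surfaces in Nextflow as exit status 137 (128 + SIGKILL). A common way to confirm and work around it is a memory-scaled retry; the sketch below uses a placeholder process, not this module's actual code:

```nextflow
// Sketch: retry with more memory when a task is killed with exit status 137.
process BFF_DEMO {
    memory { 4.GB * task.attempt }                                 // 4 GB, then 8 GB on retry
    errorStrategy { task.exitStatus == 137 ? 'retry' : 'finish' }  // retry only on OOM kills
    maxRetries 2

    output:
    stdout

    script:
    """
    echo "attempt ${task.attempt}"
    """
}

workflow { BFF_DEMO() }
```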

itrujnara (Contributor) left a comment:

I've found minor issues in the meta. Once they are resolved, I'll approve the PR so you can merge as soon as you fix the CI issues.

nictru (Contributor) commented Jun 21, 2025

For testing purposes, I increased the memory limit for all processes from 4 GB to 6 GB. It seems to work now, which means that running the BFF tests in Docker uses between 4 and 6 GB of memory, while Singularity and Conda stay below the 4 GB limit.

As increasing the memory limit is probably not allowed, we will have to revert this. However, we now know what the problem is and need to reduce the memory footprint of the tests.
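
For reference, the change described amounts to a config override along these lines; the exact file and scope in the nf-core test setup are assumptions:

```nextflow
// Hypothetical test-config override: raised only to diagnose the Docker OOM,
// to be reverted before merge.
process {
    memory = 6.GB   // shared test default is 4.GB
}
```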

itrujnara (Contributor) left a comment:

Everything looks fine now. Feel free to merge once you fix the CI issues.
