fixed bugs related to maget-brain data prep and tracking options #196
Conversation
…th, 2) added sqldb generation with ignore list for subjects and datatype+acq, 3) updated sample_run_nipoppy.py to show these two functionalities.
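For illustration only, here is a minimal sketch of what pybids sqldb generation with an ignore list could look like (the paths and ignore patterns are assumptions for the example, not the actual nipoppy code, and it assumes a recent pybids API):

import re
from bids import BIDSLayout
from bids.layout import BIDSLayoutIndexer

bids_dir = "/data/dataset/bids"            # hypothetical BIDS root
sqldb_path = "/data/dataset/proc/bids_db"  # hypothetical location for the SQLite index

# Exclude selected subjects plus a datatype+acq pattern from indexing
ignore_patterns = [
    re.compile(r"sub-01"),                 # hypothetical subject to skip
    re.compile(r".*acq-lowres.*"),         # hypothetical datatype+acq filter
]

indexer = BIDSLayoutIndexer(ignore=ignore_patterns)
layout = BIDSLayout(bids_dir, indexer=indexer)
layout.save(sqldb_path)                    # persist the index so later runs can reuse it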
How does maget_brain fit with the rest of the processing pipelines? It's not containerized (yet), but do we still want to track the runscripts/trackers in the git repo? What are your thoughts on this?
fmriprep_dir = f"{DATASET_ROOT}/derivatives/fmriprep/{fmriprep_version}/output/"
maget_dir = f"{DATASET_ROOT}/derivatives/maget_brain/{maget_version}/output/"
maget_preproc_T1w_nii_dir = f"{maget_dir}/ses-{session_id}/preproc_T1w_nii/"
maget_proc_list_file = f"{maget_preproc_T1w_nii_dir}proc_participant.csv"
This script is writing a CSV file of participants to process? Or of participants who have already been processed?
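For context, a rough sketch of one plausible reading (a list of participants still to be processed), assembled from the doughnut/bagel files mentioned in the commit list below; the file names, columns, and status values are assumptions for illustration, not the actual catalog function, and maget_proc_list_file is the path defined in the snippet above:

import pandas as pd

# Hypothetical inputs: doughnut tracks curation/BIDS status, bagel tracks
# per-pipeline processing status.
doughnut = pd.read_csv("doughnut.csv")
bagel = pd.read_csv("bagel.csv")

organized = set(doughnut.loc[doughnut["bidsified"], "participant_id"])
processed = set(bagel.loc[bagel["pipeline_complete"] == "SUCCESS", "participant_id"])

# Participants that are BIDSified but not yet processed by the pipeline
to_process = sorted(organized - processed)
pd.DataFrame({"participant_id": to_process}).to_csv(maget_proc_list_file, index=False)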
"fmriprep": ["anat"], | ||
"mriqc": ["anat"], | ||
"tractoflow": ["anat", "dwi"], | ||
"maget_brain": ["anat"] |
So we have a tracker for maget_brain but it is not tracked by git?
We also don't have a runscript for it... because it's not containerized?
I guess we can ignore this now as we are switching to CLI soon?
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files
@@            Coverage Diff            @@
##              main     #196    +/-  ##
=========================================
  Coverage    97.37%   97.37%
=========================================
  Files           25       25
  Lines         1523     1523
=========================================
  Hits          1483     1483
  Misses          40       40

☔ View full report in Codecov by Sentry.
@michellewang - Just need to merge extractor changes
Why did you add back the v to the version-level folder? 😢 The runscripts don't use it anymore.
Also, in the future you could have different branches in your fork so that you can have different PRs that do different things!
Yes, I keep forgetting to check what PRs are for what :(
Because of the presence of hardlinks, I cannot change that for tractoflow :( - so I want to be consistent for the QPN pipelines. Otherwise I would need to reprocess the entire tractoflow output, which would take a while. Also, I believe Brent mentioned it's not ideal to have directory names starting with a number, so I thought we dropped the plan.. is that not the case?
I don't think having file/directory names start with a number is a big issue -- from a quick online search it seems like something people do, so if it does cause issues it must be pretty niche. Also, I should point out that we have plenty of directories starting with numbers in PPMI, since the names are just the participant IDs!
Yeah, which is why I didn't mind the change, as it's personal preference. But it's creating a mess for the QPN / tractoflow data, so it no longer seems worth forcing this minor change...
* Add docs built with Sphinx (nipoppy#209)
* switch from sphinx-autoapi to sphinx-autodoc2 extension to use markdown syntax in docstrings
* add installation instructions
* add Contributing guide and include README in index page
* add notes about associated workflow class in CLI subcommand pages
* add preliminary work on Quickstart page
* add schema reference and link to it from Quickstart page
* add tests/docs sections to contribution guide
* minor changes
* rename nipoppy_new -> nipoppy_cli
* rename sample_data -> examples
* update gitignore after rename
* remove old files that didn't get merged correctly
* add Codecov badge to README
* move docs to correct directory
* more fixes
* revert back to sphinx-autoapi since sphinx-autodoc2 does not seem maintained anymore
* add additional schemas (LayoutConfig, BoutiquesConfig)
* add description string to all Pydantic model fields
* update/move README
* make README images appear correctly in both GitHub repo and Sphinx website
* modify README text to match figure
* remove empty tutorial section
* mention "nipoppy_cli" in install instructions and add next steps in some pages
* move images under docs/source/_static/img
* reduce term references in installation instructions, and point to GitHub issues under Troubleshooting
* set up ReadtheDocs
* delete old README
* fix ReadtheDocs config and move new README file to top
* update path to README in pyproject.toml
* update ReadtheDocs config to install the package so that the CLI reference works
* fix typo in .readthedocs.yml and clean up .gitignore
* update README after Nikhil comments
* minor change
* [DOC] make install command more robust (nipoppy#215)
* [DOC] fix edit button and unpin doc dependencies (nipoppy#216)
* unpin doc dependencies
* add details furo config
* maybe if I commited my stash things would show up in the PR
* Apply suggestions from code review
* Update nipoppy_cli/docs/source/conf.py
---------
Co-authored-by: Michelle Wang <tomichellewang@gmail.com>
* [DOC] build schema before doc and ignore autodoc related warnings (nipoppy#218)
* build schema before doc
* fail on warnings
* ignore warnings
* fixed bugs related to maget-brain data prep and tracking options (nipoppy#196)
* moving sample_global_configs.json and tree.json to the nipoppy subdir
* fixed import paths after refactor
* fixed import paths after refactor
* refactored and cleaned up mriqc run script
* refactored and cleaned up mriqc run script
* Started tracker refactoring to use doughnut
* added a catalog function to identify new proc-participants from bagel and doughnut
* added a catalog function to identify new proc-participants from bagel and doughnut
* added custom (new subjects only) pybids layout (sqldb) generation
* fixed sessions loop and incorporated utils.load_status into run_tracker.py
* fixed import path errors and incorporated tracker.py for status flags
* fixed global var imports and logging levels
* updated sample_run_nipoppy to set log-level and prototypical mriqc run with tracker
* updated bids_tracker to match proc_pipe tracker schema
* minor fixes and comments
* fixed Pandas future warning on setting an item of incompatible dtype
* fixed another Pandas future warning on setting an item of incompatible dtype
* 1) Updated mriqc and fmirpre run scripts to bind complete bids_dir path, 2) added sqldb generation with ignore list for subjects and datatype+acq, 3) updated sample_run_nipoppy.py to show the these two functionalities.
* fixed fmriprep pytest
* fixed codespell
* fixed NM filename pattern
* added functionality to custom map participant_id to bids_id
* fixed minor codespell errors
* fixed errors from previous merge conflict resolution
* updated sample_run_nipoppy to run tractoflow, renamed and moved check_dicom_status, and fixed minor bugs and logging in run_tracker
* fixed session_id typo and optimized tracker runs
* fixed FS utils function
* added acq option to all trackers
* added acq option to all trackers (fixed merge conflict)
* fixed typos and added support for acq tag in mriqc tracker
* fixed tractoflow subject dir path issues and added INCOMPLETE status
* refactored FS extraction using brainload package
* fixed hemisphere naming
* fixed aseg extraction
* remove version (i.e. v) prefix from run scripts to match trackers
* fixed a typo and removed legacy FS extractor script
* fixed bugs related to maget-brain data prep and tracking options
* minor changes to extractor scripts
* minor changes to extractor scripts
* [MAINT] use official fids (nipoppy#220)
* rename README.md -> README_PPMI.md
---------
Co-authored-by: Remi Gau <remi_gau@hotmail.com>
Co-authored-by: Nikhil Bhagwat <nikhil153@users.noreply.github.com>
No description provided.