Bring metrics in PAM50 anatomical dimensions in `sct_process_segmentation` (#3977)

* add script
* add linear interpolation
* integrate normalization in process seg
* Remove unused imports
* Clarify comments
* Clarify iterations over dicts
* Clarify names of nb_slices variables
* Set default value for `-vert` flag.
* Get abs path for arguments.vertfile
* Rename fname_vert_level to fname_vert_levels
* Update authors
* Clarify variable names
* Add sct_progress_bar to monitor metrics aggregation
* Add TODO if len(metrics_inter) == len(slices_PAM50)
* Remove empty lines at the file end
* remove default value for -vert
* change fname levels for level
* remove condition if len ==
* fix lint
* change suffix for PAM50 in csv
* put NaN if key is length
* Create PAM50.csv filename in a more robust way (it was failing for paths including `.`, like /home/GRAMES.POLYMTL.CA)
* change image orientation to RPI
* remove levels 49 and higher
* fix parser for normalize
* add testing of normalize PAM50
* add -normalize PAM50 in batch processing
* change row for PAM50 csv file
* add cached file
* remove -vert argument
* remove -vert for sct_process_segmentation with -normalize PAM50
* separate -normalize PAM50 into a different arg
* remove PAM50 description from arg -normalize
* remove trailing whitespace
* remove PAM50 option in SepreateNormArgs
* fetch arguments.normalize_PAM50 argument and change in condition
* if -normalize-pam50 is set, run using PAM50 metrics and output one csv file only
* remove unnecessary raise
* remove whitespace
* remove extra \n
* remove indent block with PAM50 condition
* continuation underline fix
* remove whitespace lines
* change `-normalize PAM50` argument to `-normalize-PAM50 1`
* remove filename_pam50 since only one .csv file is output now
* `test_cli_sct_process_segmentation.py`: Fix up test. This commit fixes 2 issues:
  - The index # didn't match the IS slice #.
  - The hardcoded value was too precise.
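The "add linear interpolation" step above is the core of the PAM50 normalization: per-slice metrics computed in subject space are resampled onto the PAM50 template's slice grid. A minimal sketch of that idea, assuming the actual implementation in `metrics_to_PAM50.py` differs (the helper name and normalized-coordinate scheme here are illustrative only):

```python
import numpy as np

def interpolate_metric_to_pam50(metric, n_slices_pam50):
    """Resample a per-slice metric from subject space onto a PAM50
    slice grid of length `n_slices_pam50` via linear interpolation.

    Hypothetical helper: positions in both spaces are normalized to
    [0, 1] so the two slice grids can be put in correspondence.
    """
    x_subject = np.linspace(0, 1, len(metric))   # subject slice positions
    x_pam50 = np.linspace(0, 1, n_slices_pam50)  # PAM50 slice positions
    return np.interp(x_pam50, x_subject, metric)

# Example: stretch a 4-slice CSA profile over 7 PAM50 slices;
# endpoint values are preserved, interior values filled in linearly.
csa = [70.0, 72.0, 74.0, 76.0]
print(interpolate_metric_to_pam50(csa, 7))
```

Because `np.interp` only fills in values between the first and last subject slice, levels outside the subject's field of view stay unhandled, which is consistent with the bullets above about skipping levels and padding with NaN.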
* `metrics_to_PAM50.py`: Remove `levels_2_skip` variable. The purpose of this commit is to reduce the number of lines of code, which saves readers some time and effort. Rather than storing levels[0] and levels[-1] in a variable, we can just access those values directly from `levels` itself. We also sort `levels`, so we don't need `min` or `max` -- we can just use [0] and [-1].
* `metrics_to_PAM50.py`: Iterate through slices, not levels. The purpose of this commit is to pre-compute the slice variables, which saves some lines of code inside the loop. Later on, this will also help us compute `scales` outside the loop, which is important for letting us combine the two loops. (NB: The index `[::len(levels)-1]` looks complicated, but that indexing is removed later when the two loops are combined.)
* `metrics_to_PAM50.py`: Replace `nb_slices_<>` with `len()` calls. This change is a bit nitpicky and probably unnecessary, but `len()` is pretty self-explanatory -- we don't need an entire variable to represent it. This saves a few lines of code.
* `metrics_to_PAM50.py`: Compute `scale_mean` outside the loop. This change lets us combine the two loops.
* `metrics_to_PAM50.py`: Use `metrics_inter` for both loops. This change also lets us combine the two loops.
* `metrics_to_PAM50.py`: Merge two loops into one. This change saves us from duplicating the same instructions across two loops. It also makes it easier to compare the different cases (levels[0], levels[1:-1], levels[-1]), since the variable assignments are grouped together.
* `metrics_to_PAM50.py`: Replace `np.empty`+`np.fill` with `np.full`. This saves us a line of code! Source: https://stackoverflow.com/a/26289777
* `metrics_to_PAM50.py`: Use dictionary comprehensions. This saves quite a bit of space, too!
* `metrics_to_PAM50.py`: Combine `levels` into a single step. This is another space-saving commit.
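The `np.full` and dictionary-comprehension refactors above can be illustrated side by side. A minimal sketch (the metric names and array length are illustrative, not the exact keys or shapes used in `metrics_to_PAM50.py`):

```python
import numpy as np

# Before: two statements to build a NaN-filled array
arr_before = np.empty(5)
arr_before.fill(np.nan)

# After: a single call to np.full produces the same array
arr_after = np.full(5, np.nan)

# Dictionary comprehension replacing an explicit fill-up loop:
# one NaN-initialized array per metric, in a single expression
metric_names = ['MEAN(area)', 'MEAN(diameter_AP)']
metrics_inter = {name: np.full(5, np.nan) for name in metric_names}
```

NaN initialization matches the "put NaN if key is length" bullet earlier: slices with no value simply keep their NaN placeholder.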
* `metrics_to_PAM50.py`: Move dict init closer to loop. This helps to group related code: the dictionary initialization and the loop that fills up the dictionary.
* `metrics_to_PAM50.py`: Simplify interpolation by extracting out `diff`. This helps us avoid a bunch of `len()` calls and makes the conditionals a lot easier to read.
* `download.py`: Use `sct_testing_data-PR3977.zip`. This lets us temporarily test the new release, prior to making any actual changes to the `sct_testing_data` repo.
* `batch_processing.sh`: Use new option (`-normalize-PAM50`)
* Change `csa_pam50_PAM50.csv` -> `csa_pam50.csv`
* add missing - for -normalize-PAM50
* remove -vertfile error
* add error if -normalize-PAM50 is set and vertfile doesn't exist
* move initialization of normalize_pam50 arg for vertfile check
* add function to get all available vertebral levels
* force levels of PAM50 or native space if -normalize-PAM50 1
* check if levels is empty before setting the value
* use image.getNonZeroValues instead of get_all_vertebral_levels
* add getNonZeroValues
* remove function get_all_vertebral_levels
* remove import of get_all_vertebral_levels
* add updated cached results for csa_pam50.csv
* update row for csa_pam50.csv with updated cached results
* change row of csv file to check
* change error to FileNotFoundError
* Restore old permissions (100644 -> 100755)
* `download.py`: Replace dummy link with `r20230207` release

---------

Co-authored-by: valosekj <jan.valosek@upol.cz>
Co-authored-by: Joshua Newton <joshuacwnewton@gmail.com>
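The `getNonZeroValues` helper mentioned above replaces a dedicated `get_all_vertebral_levels` function: since a vertebral-level file encodes each level as a distinct voxel label, the available levels are just the unique nonzero values in the volume. A minimal standalone sketch of that idea (the function name and behavior here are an approximation, not the actual `Image` method):

```python
import numpy as np

def get_nonzero_values(data):
    """Return the sorted unique nonzero values of a labeled volume,
    e.g. the vertebral levels present in a -vertfile image.

    Hypothetical stand-in for the Image.getNonZeroValues method
    added in this PR.
    """
    return sorted(np.unique(data[data > 0]).astype(int).tolist())

# Toy vertebral-level volume containing only levels 2, 3 and 4
vert = np.zeros((3, 3, 3))
vert[0] = 2
vert[1] = 3
vert[2] = 4
print(get_nonzero_values(vert))  # [2, 3, 4]
```

This also supports the "check if levels is empty" bullet: an all-zero volume yields an empty list, which the caller can test before forcing PAM50 or native-space levels.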