Merge pull request #120 from initze/scikit-image-req
Scikit image req
initze committed May 2, 2024
2 parents debe60a + 6a3a8b8 commit 594a1f8
Showing 3 changed files with 12 additions and 3 deletions.
9 changes: 9 additions & 0 deletions README.md
@@ -9,6 +9,15 @@ This will pull the CUDA 12 version of pytorch. If you are running CUDA 11, you n

gdal incl. gdal-utilities (preferably version >=3.6) need to be installed in your environment, e.g. with conda

### Additional packages
#### cucim
You can install cucim to speed up postprocessing. cucim uses the GPU to perform the binary erosion of edge artifacts, which runs a lot faster than the standard CPU implementation in scikit-image.

`pip install --extra-index-url=https://pypi.nvidia.com cucim-cu11==24.4.*`

For installation instructions for other CUDA versions, see:

https://docs.rapids.ai/install
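To illustrate the speed-up path this section describes, here is a minimal sketch of GPU-accelerated binary erosion with an automatic CPU fallback. The `erode_mask` helper is hypothetical (it is not part of this repository); it assumes cucim mirrors the scikit-image morphology API on CuPy arrays, and the CPU branch uses a hand-rolled 3x3 cross erosion so the sketch stays self-contained.

```python
import numpy as np

try:
    # GPU path: cucim exposes a scikit-image-compatible API on CuPy arrays.
    import cupy as cp
    from cucim.skimage.morphology import binary_erosion as _erosion_gpu
    HAS_CUCIM = True
except ImportError:
    HAS_CUCIM = False


def erode_mask(mask: np.ndarray, try_gpu: bool = True) -> np.ndarray:
    """Binary erosion of a 2D mask, on the GPU when cucim is available."""
    if try_gpu and HAS_CUCIM:
        # Round-trip through device memory; fine for large masks,
        # where the kernel time dominates the transfer cost.
        return cp.asnumpy(_erosion_gpu(cp.asarray(mask)))
    # CPU fallback: erosion with a 3x3 cross structuring element,
    # implemented with shifted views of a padded array.
    p = np.pad(mask.astype(bool), 1, constant_values=False)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])
```

A pixel survives erosion only if it and its four direct neighbours are all set, which is what strips one-pixel-wide edge artifacts from the segmentation masks.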
## System and Data Setup

### Option 1 - Singularity container
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -27,7 +27,7 @@ dependencies = [
"geemap==0.29.6",
"eemont==0.3.6",
"joblib==1.3",
"scikit-image>=0.23.2",
"scikit-image>=0.22.0",
"h5py>=3.11.0",
"ipython>=8.23.0",
"cython>=3.0.10",
4 changes: 2 additions & 2 deletions src/thaw_slump_segmentation/scripts/process_03_ensemble.py
@@ -79,7 +79,7 @@
'minimum_mapping_unit': args.ensemble_mmu,
'delete_binary': True,
'try_gpu': args.try_gpu, # currently default to CPU only
'gpu' : 0,
'gpu' : args.use_gpu,
}
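The change above replaces the hard-coded `'gpu': 0` with a value taken from the command line. The script's actual argument parser is not shown in this diff, so the following is only a hypothetical sketch of how `--try_gpu` and `--use_gpu` flags consistent with `args.try_gpu` and `args.use_gpu` might be wired into the kwargs dict.

```python
import argparse

# Hypothetical parser; the real one in process_03_ensemble.py is not
# part of this diff. Flag names mirror the attributes the diff reads.
parser = argparse.ArgumentParser(description='Run ensemble postprocessing')
parser.add_argument('--try_gpu', action='store_true',
                    help='attempt GPU postprocessing (falls back to CPU)')
parser.add_argument('--use_gpu', type=int, default=0,
                    help='CUDA device index forwarded as the "gpu" kwarg')

# Parse an explicit argv so the sketch runs without CLI input.
args = parser.parse_args(['--try_gpu', '--use_gpu', '1'])

kwargs_ensemble = {
    'try_gpu': args.try_gpu,  # currently defaults to CPU only
    'gpu': args.use_gpu,      # device index now comes from the CLI
}
```

Passing the device index through the CLI lets multi-GPU users pin the postprocessing to a specific card instead of always using device 0.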

# Check for finalized products
@@ -93,7 +93,7 @@
print(f'Start running ensemble with {N_JOBS} jobs!')
print(f'Target ensemble name:', kwargs_ensemble['ensemblename'])
print(f'Source model output', kwargs_ensemble['modelnames'])
_ = Parallel(n_jobs=N_JOBS)(delayed(create_ensemble_v2)(image_id=process.iloc[row]['name'], **kwargs_ensemble) for row in tqdm(range(len(process.iloc[:N_IMAGES]))))
#_ = Parallel(n_jobs=N_JOBS)(delayed(create_ensemble_v2)(image_id=process.iloc[row]['name'], **kwargs_ensemble) for row in tqdm(range(len(process.iloc[:N_IMAGES]))))

# # #### run parallelized batch

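The commented-out line above uses joblib's `Parallel`/`delayed` pattern to fan one `create_ensemble_v2` call per image across worker processes. A minimal sketch of that pattern follows; `create_ensemble_stub` and the `image_ids` list are stand-ins invented for illustration, since the real function and DataFrame live elsewhere in the package.

```python
from joblib import Parallel, delayed


def create_ensemble_stub(image_id, **kwargs):
    # Toy stand-in for create_ensemble_v2: just tags the image id.
    return f'ensemble:{image_id}'


image_ids = ['img_001', 'img_002', 'img_003']
N_JOBS = 2

# Same shape as the commented-out call: one delayed invocation per
# image, executed across N_JOBS workers; results keep input order.
results = Parallel(n_jobs=N_JOBS)(
    delayed(create_ensemble_stub)(image_id=i) for i in image_ids
)
```

`delayed` captures the function and its arguments without calling it, so the generator builds the task list lazily and `Parallel` schedules the actual calls.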
