These instructions assume an HPC cluster using SLURM scheduling. Adapt them to your own HPC setup as necessary.
Create the conda environment:

```
module load Anaconda3
eval "$(conda shell.bash hook)"
conda config --prepend channels conda-forge
conda create -n bikedna_min --strict-channel-priority geopandas pyarrow pyyaml
```
Alternatively, create the environment from the `environment_minimal.yml` file in the `feature_matching_hpc` folder:
```
module load Anaconda3
eval "$(conda shell.bash hook)"
conda env create --file=environment_minimal.yml
```
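Before continuing, you can optionally check that the environment activates and that its key packages import; this is just a sanity check, assuming the environment name `bikedna_min` used above:

```
conda activate bikedna_min
python -c "import geopandas, pyarrow, yaml; print(geopandas.__version__)"
```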
Navigate to the `feature_matching_hpc` folder in a terminal and run:

```
python setup_hpc_folders.py
```
Use, for example, `scp` to upload the `feature_matching_hpc` folder to the HPC:

```
scp -r /Users/myuser/../bikedna_denmark/feature_matching_hpc user@host:/home/user
```
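If you prefer a transfer that can be resumed if interrupted, `rsync` over SSH works as well; this is only a sketch, reusing the same placeholder paths and host as the `scp` example above:

```
rsync -avz /Users/myuser/../bikedna_denmark/feature_matching_hpc user@host:/home/user/
```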
Navigate to the `feature_matching_hpc` folder on the HPC and run:

```
sbatch scripts/fm.job
```
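While the job is queued or running, the standard SLURM commands below can be used to monitor it; the exact job name and output file depend on how `scripts/fm.job` is configured, so treat this as a general sketch:

```
squeue -u $USER                                        # list your queued and running jobs
sacct -j <jobid> --format=JobID,State,Elapsed,MaxRSS   # status and resource use of a job
tail -f slurm-<jobid>.out                              # follow the job output (default SLURM log name)
```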
Once the job has completed:

- Download the data in the `results` folder and place them in `bikedna_denmark/feature_matching_hpc/results/` on your own machine (example transfer commands are sketched after this list).
- Download the data in the `processed` folder and place them in `bikedna_denmark/feature_matching_hpc/processed/` on your own machine.
- Navigate to the `feature_matching_hpc` folder on your own machine, activate the conda environment, and run:

  ```
  python export_hpc_results.py
  ```
- Run the `3b_extrinsic_analyis_feature_matching` notebook as normal to summarize results, analyze local success rates, produce plots, etc.
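For the download steps in the list above, the upload command can simply be reversed; the remote and local paths below are placeholders and should be adapted to where the folder lives on the HPC and on your own machine:

```
scp -r user@host:/home/user/feature_matching_hpc/results /Users/myuser/../bikedna_denmark/feature_matching_hpc/
scp -r user@host:/home/user/feature_matching_hpc/processed /Users/myuser/../bikedna_denmark/feature_matching_hpc/
```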