Continual Adaptation of Semantic Segmentation Using Complementary 2D-3D Data Representations


This repository contains the code, checkpoints, documentation, and installation instructions for our RA-L paper.

Overview · Citation · Setup · Experiments · Evaluation · Credits

Overview

📦continual_adaptation_ucdr
 ┣ 📂cfg                    # configuration
 ┃ ┣ 📂conda                   # conda environment file
 ┃ ┣ 📂dataset                 # dataset configuration
 ┃ ┣ 📂docker                  # docker files
 ┃ ┣ 📂env                     # environment configuration
 ┃ ┣ 📂eval                    # evaluation configuration
 ┃ ┣ 📂exp                     # network training experiments configuration
 ┃ ┗ 📂generate                # checkpoint to labels
 ┣ 📂docs                   # images for readme
 ┣ 📂results                # empty results folder
 ┃ ┣ 📂evals                   # evaluation results
 ┃ ┣ 📂labels_generated        # add pregenerated pseudo labels here
 ┃ ┗ 📂learning                # add pretrained model checkpoints here
 ┣ 📂scripts                # scripts
 ┃ ┣ 📜eval_model.py           # evaluation of a model checkpoint
 ┃ ┣ 📜eval_pseudo_labels.py   # evaluation of a folder containing pseudo labels
 ┃ ┣ 📜generate.py             # model checkpoint to labels
 ┃ ┣ 📜raycast_folder.py       # raycast mesh exported from kimera semantics
 ┃ ┗ 📜train.py                # adapt network
 ┗ 📂ucdr                   # learning code
   ┣ 📂callbacks
   ┣ 📂datasets
   ┣ 📂kimera_semantics
   ┣ 📂lightning
   ┣ 📂loss
   ┣ 📂models
   ┣ 📂pseudo_label
   ┣ 📂task
   ┣ 📂utils
   ┗ 📂visu

Citation

Jonas Frey, Hermann Blum, Francesco Milano, Roland Siegwart, and Cesar Cadena, “Continual Adaptation of Semantic Segmentation Using Complementary 2D-3D Data Representations”, in IEEE Robotics and Automation Letters (RA-L), 2022.

@article{frey2022continual,
  author={Jonas Frey and Hermann Blum and Francesco Milano and Roland Siegwart and Cesar Cadena},
  journal={IEEE Robotics and Automation Letters (RA-L)},
  title={Continual Adaptation of Semantic Segmentation using Complementary 2D-3D Data Representations},
  year={2022}
}

Setup

Clone the Repository

mkdir -p ~/git/
cd ~/git/
git clone git@github.com:JonasFrey96/continual_adaptation_ucdr.git

We provide a conda environment file and a docker container to run the code. The code is tested with torch==1.10, pytorch-lightning==1.6.4, and CUDA 11.3.

Setting up Conda Environment

We recommend using mamba for installation and assume you have a working conda installation.

  1. Install mamba
conda activate base
conda install mamba -n base -c conda-forge
  2. Adjust the conda settings
conda config --set safety_checks enabled
conda config --set channel_priority false
  3. Install and activate the ucdr environment
cd ~/git/continual_adaptation_ucdr
mamba env create -f cfg/conda/ucdr.yaml
conda activate ucdr
  4. Install the ucdr repository
cd ~/git/continual_adaptation_ucdr
pip3 install -e ./
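
To sanity-check the installation, you can verify that the tested torch, CUDA, and pytorch-lightning versions are picked up (a quick check we add here, not part of the original instructions):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import pytorch_lightning; print(pytorch_lightning.__version__)"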

Configuration

All configuration files are within cfg/env, cfg/exp and cfg/eval.

[env] Environment

The env folder stores the environment configuration for your_machine. To select the correct env configuration, add the name of your_machine to your ~/.bashrc:

echo 'export ENV_WORKSTATION_NAME="your_machine"' >> ~/.bashrc
source ~/.bashrc

Create a file in cfg/env/your_machine.yaml with the following content (same as cfg/env/env.yaml):

base: results/learning # a log folder is created here for each run (absolute or relative to continual_adaptation_ucdr)
labels_generic: results/labels_generated # where to find pseudo labels (absolute or relative to continual_adaptation_ucdr)
scannet: /path_to/scannet # (absolute path)

[exp] Experiment

The experiment folder provides all experiment configurations needed to reproduce the results in the paper. Pass the relative path of an experiment yaml-file to scripts/train.py to start training. You may want to adapt neptune_project_name to log directly to your neptune.ai account, as sketched below.
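
For illustration, the logging key mentioned above could be set in an experiment yaml as follows (the project name is a placeholder, not a value from the repository):

neptune_project_name: your-workspace/your-project # point this at your own neptune.ai project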

[eval] Evaluation

Pass the relative path of an evaluation yaml-file to scripts/eval_model.py to start evaluation (see the Evaluation section below for examples).

Pre-trained models

All models that can be generated using the experiments can be downloaded here. Extract the data to the base folder chosen in your env configuration; by default this is results/learning.
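
Assuming the download arrives as a tar archive (the file name below is a placeholder, not the actual download name), extraction could look like:

cd ~/git/continual_adaptation_ucdr
mkdir -p results/learning
tar -xvf ~/Downloads/ucdr_checkpoints.tar.gz -C results/learning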

Experiments

1. Pretrain the model

python scripts/train.py --exp=pred_1/scannet25_pretrain.yaml

2. Generate pseudo labels

Update the global_checkpoint_load in cfg/generate/pred1.yaml if you are not using the pretrained network.

python scripts/generate.py --generate=pred1.yaml

This creates a folder under the labels_generic path defined previously in the environment yaml file.
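
To check what was generated, list the labels_generic folder (assuming the default path); the subfolder name is presumably the identifier that eval_pseudo_labels.py later expects via --pseudo_label_idtf:

ls results/labels_generated/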

  • TODO: description of setting up kimera semantics and the raytracing.

3. Network adaptation

Use the provided experiment file in pred_2_r00, where r00 indicates the replay ratio used and 00 corresponds to the finetuning strategy. Update the path to the pretrained model in checkpoint_load if you are not using the pretrained model.

python scripts/train.py --exp=pred_2_r00/scene0000_r00.yaml

Evaluation

Pseudo Labels

Generate the score for the 1-Pseudo Adap setting:

python scripts/eval_pseudo_labels.py --pseudo_label_idtf=labels_individual_scenes_map_2 --mode=val --scene=scene0000,scene0001,scene0002,scene0003,scene0004

Network

python scripts/eval_model.py --eval=eval_pred_1.yaml
python scripts/eval_model.py --eval=eval_pred_2_00.yaml
python scripts/eval_model.py --eval=eval_pred_2_02.yaml
python scripts/eval_model.py --eval=eval_pred_2_05.yaml

Credits
