Overview • Citation • Setup • Experiments • Evaluation • Credits
📦continual_adaptation_ucdr
┣ 📂cfg # configuration
┃ ┣ 📂conda # conda environment file
┃ ┣ 📂dataset # dataset configuration
┃ ┣ 📂docker # docker files
┃ ┣ 📂env # environment configuration
┃ ┣ 📂eval # evaluation configuration
┃ ┣ 📂exp # network training experiments configuration
┃ ┗ 📂generate # checkpoint to labels
┣ 📂docs # images for readme
┣ 📂results # empty result folder
┃ ┣ 📂evals # evaluation results
┃ ┣ 📂labels_generated # place pregenerated pseudo labels here
┃ ┗ 📂learning # place pretrained model checkpoints here
┣ 📂scripts # scripts
┃ ┣ 📜eval_model.py # evaluation of model checkpoint
┃ ┣ 📜eval_pseudo_labels.py # evaluation of folder containing pseudo labels
┃ ┣ 📜generate.py # model checkpoint to labels
┃ ┣ 📜raycast_folder.py # raycast mesh exported from Kimera Semantics
┃ ┗ 📜train.py # adapt network
┣ 📂ucdr # learning code
┃ ┣ 📂callbacks
┃ ┣ 📂datasets
┃ ┣ 📂kimera_semantics
┃ ┣ 📂lightning
┃ ┣ 📂loss
┃ ┣ 📂models
┃ ┣ 📂pseudo_label
┃ ┣ 📂task
┃ ┣ 📂utils
┃ ┗ 📂visu
Jonas Frey, Hermann Blum, Francesco Milano, Roland Siegwart, Cesar Cadena, "Continual Adaptation of Semantic Segmentation using Complementary 2D-3D Data Representations", in IEEE Robotics and Automation Letters (RA-L), 2022.
@article{frey2022traversability,
author={Jonas Frey and Hermann Blum and Francesco Milano and Roland Siegwart and Cesar Cadena},
journal={under review: IEEE Robotics and Automation Letters (RA-L)},
title={Continual Adaptation of Semantic Segmentation using Complementary 2D-3D Data Representations},
year={2022}
}
mkdir -p ~/git/
git clone git@github.com:JonasFrey96/continual_adaptation_ucdr.git
We provide a conda environment file and a Docker container to run the code.
It is tested with `torch==1.10` and `pytorch-lightning==1.6.4` under CUDA 11.3.
We recommend using mamba for installation and assume you have a working conda installation.
- Install mamba
conda activate base
conda install mamba -n base -c conda-forge
- Correct conda settings
conda config --set safety_checks enabled
conda config --set channel_priority false
- Install and activate the ucdr environment
cd ~/git/continual_adaptation_ucdr
mamba env create -f cfg/conda/ucdr.yaml
conda activate ucdr
- Install the ucdr repository
cd ~/git/continual_adaptation_ucdr
pip3 install -e ./
All configuration files are within `cfg/env`, `cfg/exp`, and `cfg/eval`.
The `cfg/env` folder stores the environment configuration for your machine.
To let the scripts identify the correct env configuration, add the name of your machine to your `~/.bashrc`:
echo 'export ENV_WORKSTATION_NAME="your_machine"' >> ~/.bashrc
source ~/.bashrc
Create a file `cfg/env/your_machine.yaml` with the following content (same as `cfg/env/env.yaml`):
base: results/learning # will create a log in this folder for each run. (global or relative to the continual_adaptation_ucdr)
labels_generic: results/labels_generated # where to find pseudo labels. (global or relative to the continual_adaptation_ucdr)
scannet: /path_to/scannet # (global path)
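As an illustration of how these pieces fit together, the env file can be resolved from `ENV_WORKSTATION_NAME` roughly as follows. This is a minimal sketch under stated assumptions: `resolve_env_cfg` is a hypothetical helper, not the repo's actual loader.

```python
import os

def resolve_env_cfg(cfg_dir="cfg/env"):
    # Pick the env yaml matching ENV_WORKSTATION_NAME from ~/.bashrc;
    # fall back to the template env.yaml when the variable is unset.
    name = os.environ.get("ENV_WORKSTATION_NAME", "env")
    return os.path.join(cfg_dir, f"{name}.yaml")
```

With `ENV_WORKSTATION_NAME="your_machine"` exported, this resolves to `cfg/env/your_machine.yaml`.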
The experiment folder provides all experiments needed to reproduce the results in the paper.
Pass the relative path of the desired experiment yaml-file to `scripts/train.py` to start training.
You may want to adapt the `neptune_project_name` to log directly to your neptune.ai account.
Pass the relative path of the desired evaluation yaml-file to `scripts/eval_model.py` to start evaluation.
All models that can be generated using the experiments can be downloaded here.
Extract the data to your chosen `base` directory as set in the `env` configuration.
By default this is `results/learning`.
python scripts/train.py --exp=pred_1/scannet25_pretrain.yaml
Update `global_checkpoint_load` in `cfg/generate/pred1.yaml` if you are not using the pretrained network.
python scripts/generate.py --generate=pred1.yaml
This will create a folder at the location previously defined as `labels_generic` in the environment yaml file.
TODO: describe setting up Kimera Semantics and the raytracing.
Use the provided experiment file in `pred_2_r00`, where `r00` indicates the replay ratio used and `00` corresponds to the finetuning strategy.
Update the path to the pretrained model in `checkpoint_load` if you are not using the pretrained model.
python scripts/train.py --exp=pred_2_r00/scene0000_r00.yaml
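To make the replay ratio concrete: a ratio `r` means roughly that fraction of each training set is drawn from previously seen scenes. The sketch below is a simplified illustration of such mixing, not the repo's actual sampler; `mix_replay` and its arguments are hypothetical names.

```python
import random

def mix_replay(new_samples, replay_samples, replay_ratio, seed=0):
    """Mix current-scene samples with replay-buffer samples so that
    roughly `replay_ratio` of the returned list comes from old scenes.
    Simplified illustration only (hypothetical helper)."""
    rng = random.Random(seed)
    # n_replay / (n_new + n_replay) == replay_ratio  =>  solve for n_replay
    n_replay = int(round(len(new_samples) * replay_ratio / max(1e-8, 1.0 - replay_ratio)))
    picked = [rng.choice(replay_samples) for _ in range(n_replay)] if replay_samples else []
    mixed = list(new_samples) + picked
    rng.shuffle(mixed)
    return mixed
```

With `replay_ratio=0.0` (as in `r00`), the training set contains only current-scene samples, i.e. plain finetuning.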
Generate Score for 1-Pseudo Adap:
python scripts/eval_pseudo_labels.py --pseudo_label_idtf=labels_individual_scenes_map_2 --mode=val --scene=scene0000,scene0001,scene0002,scene0003,scene0004
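For orientation, semantic-segmentation evaluations of this kind typically report mean IoU over classes. The snippet below is a minimal sketch of that metric on flat label lists; `mean_iou` is a hypothetical helper, not the repo's implementation.

```python
def mean_iou(pred, gt, num_classes, ignore_index=-1):
    """Mean intersection-over-union across classes present in gt,
    skipping pixels whose ground truth equals ignore_index.
    Illustrative only (hypothetical helper)."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if g == c and p == c)
        union = sum(1 for p, g in zip(pred, gt)
                    if (g == c or p == c) and g != ignore_index)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0
```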
python scripts/eval_model.py --eval=eval_pred_1.yaml
python scripts/eval_model.py --eval=eval_pred_2_00.yaml
python scripts/eval_model.py --eval=eval_pred_2_02.yaml
python scripts/eval_model.py --eval=eval_pred_2_05.yaml
- The authors of Fast-SCNN
- TRAMAC implementing Fast-SCNN in PyTorch
- The authors of ORB-SLAM2
- People at http://continualai.org for the inspiration