Update

  • As of 7th August 2023, we have added 424 more scenes to HabiCrowd, increasing the number of configured scenes to 480.
  • As of 10th August 2023, we introduced ImageNav to HabiCrowd.

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. For more information, see HM3D license and Habitat Terms of Use.

Table of contents

  1. Overview
  2. ObjectNav
  3. PointNav
  4. ImageNav

HabiCrowd

This repository contains the code for running HabiCrowd.

Overview

HabiCrowd is a new dataset and benchmark for crowd-aware visual navigation that surpasses other benchmarks in terms of human diversity and computational utilization. It can be used to study crowd-aware visual navigation tasks. A notable feature of HabiCrowd is that its crowd-aware settings are 3D, which has rarely been studied in previous work.

(Demo video: scene_1.mp4)

ObjectNav

In ObjectNav, an agent is placed at a random starting position and orientation in an unknown environment and instructed to navigate to an instance of an object category (e.g., 'find a chair'). No map of the environment is provided, so the agent must rely solely on its sensory input to navigate.

The agent has an RGB-D camera as well as a (noiseless) GPS+Compass sensor. The GPS+Compass sensor determines the agent's current location and orientation in relation to the beginning of the episode. In simulation, we try to match the camera specifications (field of view, resolution) to the Azure Kinect camera, although this work does not include any injected sensor noise.

Dataset

We use 480 scenes from the Habitat-Matterport3D (HM3D) dataset, with a train/val/test split of 400/40/40 scenes. We use the same 6 object goal categories as traditional ObjectNav in the Habitat simulator: chair, couch, potted plant, bed, toilet, and tv.
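
Once the episode files are in place (see the Starter section below), you can sanity-check a split with a quick episode count. This is only a sketch: the path follows the evaluation command later in this README, relative to the HabiCrowd repository root, and it assumes the standard Habitat episode layout where each episode carries an episode_id field.

    # Count episodes in the val split; path and JSON layout as assumed above.
    zcat dataset/crowd-nav/crowdnav_hm3d_v2.1/val/val.json.gz | grep -o '"episode_id"' | wc -l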

Starter

To begin with, install Habitat-Sim and our forked version of Habitat-Lab, where we have developed our baselines as well as the human dynamics. You can install Habitat-Sim using the custom Conda package for Habitat Challenge 2022 with:

conda install -c aihabitat habitat-sim-challenge-2022

Also ensure that habitat-baselines is installed when installing Habitat-Lab by using:

python setup.py develop --all
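
Putting these steps together, a minimal installation sketch could look like the following (the fork's clone URL, the environment name, and the Python version are placeholders or assumptions, not specified in this README):

conda create -n habicrowd python=3.7 -y          # Python version is an assumption; check the fork's requirements
conda activate habicrowd
conda install -c aihabitat habitat-sim-challenge-2022
git clone <forked-habitat-lab-url> habitat-lab   # replace with the URL of the fork linked above
cd habitat-lab
python setup.py develop --all                    # also installs habitat-baselines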

You will find further information about installation in the forked GitHub repositories.

Next, download the HM3D dataset following the instructions here. After downloading, extract the dataset into the habitat-challenge/habitat-challenge-data/data/scene_datasets/hm3d/ folder (this folder should contain the .glb files from HM3D). Note that the habitat-lab folder refers to the habitat-lab repository folder. The data also needs to be available under HabiCrowd/ in this repository: move the downloaded folder into the dataset folder.
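
For reference, the following sketch shows one way to place the scene data where the training scripts below expect it (the target path is taken from TASK_CONFIG.DATASET.SCENES_DIR in those scripts; the source path is a placeholder for your download location):

mkdir -p HabiCrowd/dataset/crowd-nav-data/data/scene_datasets
mv /path/to/downloaded/hm3d HabiCrowd/dataset/crowd-nav-data/data/scene_datasets/hm3d
ls HabiCrowd/dataset/crowd-nav-data/data/scene_datasets/hm3d | head   # should list the HM3D scene folders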

  1. An example of how to train a simple DD-PPO model (for other models, use the appropriate baseline name) can be found in habitat-lab/habitat_baselines/rl/ddppo. See the corresponding README in habitat-lab for how to adjust the various hyperparameters, save locations, visual encoders, and other features.

    1. First, navigate to our forked Habitat-Lab version. We expect the folder structure to be as follows:

      |- HabiCrowd
      |- habitat-lab
      
    2. To run on a single machine, use the following script from the habitat-lab directory:

      #!/bin/bash
      
      export GLOG_minloglevel=2
      export MAGNUM_LOG=quiet
      
      set -x
      python -u -m torch.distributed.launch \
          --use_env \
          --nproc_per_node 1 \
          habitat_baselines/run.py \
          --exp-config ../HabiCrowd/dataset/configs/baseline_<name>.yaml \
          --run-type train \
          BASE_TASK_CONFIG_PATH ../HabiCrowd/dataset/configs/challenge_crowdnav.local.rgbd.yaml \
          TASK_CONFIG.DATASET.SCENES_DIR ../HabiCrowd/dataset/crowd-nav-data/data/scene_datasets/ \
          TASK_CONFIG.DATASET.SPLIT 'train' \
          TENSORBOARD_DIR ./tb \
          CHECKPOINT_FOLDER ./checkpoints \
          LOG_FILE ./train.log
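
      Training curves are written to the TENSORBOARD_DIR set above, so, for example, you can monitor a run with:

      tensorboard --logdir ./tb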
    3. There is also an example of running the code distributed on a cluster with SLURM. While this is not necessary, it can significantly speed up training if you have access to a cluster. To run on multiple machines in a SLURM cluster, run the following script, changing #SBATCH --nodes $NUM_OF_MACHINES to the number of machines, and #SBATCH --ntasks-per-node $NUM_OF_GPUS and #SBATCH --gres gpu:$NUM_OF_GPUS to the number of GPUs to use per requested machine.

      #!/bin/bash
      #SBATCH --job-name=ddppo
      #SBATCH --output=logs.ddppo.out
      #SBATCH --error=logs.ddppo.err
      #SBATCH --gres gpu:1
      #SBATCH --nodes 1
      #SBATCH --cpus-per-task 10
      #SBATCH --ntasks-per-node 1
      #SBATCH --mem=60GB
      #SBATCH --time=12:00
      #SBATCH --signal=USR1@600
      #SBATCH --partition=dev
      
      export GLOG_minloglevel=2
      export MAGNUM_LOG=quiet
      
      export MASTER_ADDR=$(srun --ntasks=1 hostname 2>&1 | tail -n1)
      
      set -x
      srun python -u -m habitat_baselines.run \
          --exp-config ../HabiCrowd/dataset/configs/baseline_<name>.yaml \
          --run-type train \
          BASE_TASK_CONFIG_PATH ../HabiCrowd/dataset/configs/challenge_crowdnav.local.rgbd.yaml \
          TASK_CONFIG.DATASET.SCENES_DIR ../HabiCrowd/dataset/crowd-nav-data/data/scene_datasets/ \
          TASK_CONFIG.DATASET.SPLIT 'train' \
          TENSORBOARD_DIR ./tb \
          CHECKPOINT_FOLDER ./checkpoints \
          LOG_FILE ./train.log
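
      Save the script above to a file (the file name below is only an example) and submit it with:

      sbatch habicrowd_ddppo.slurm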
    4. The preceding two scripts are based on the ones found in habitat_baselines/rl/ddppo.

  2. The checkpoint specified by $PATH_TO_CHECKPOINT can be evaluated with SPL and other metrics by running the following command:

    python -u -m habitat_baselines.run \
        --exp-config ../habitat-challenge/configs/ddppo_objectnav.yaml \
        --run-type eval \
        BASE_TASK_CONFIG_PATH ../HabiCrowd/dataset/configs/challenge_crowdnav.local.rgbd.yaml \
        TASK_CONFIG.DATASET.DATA_PATH ../HabiCrowd/dataset/crowd-nav/crowdnav_hm3d_v2.1/{split}/{split}.json.gz \
        TASK_CONFIG.DATASET.SCENES_DIR ../HabiCrowd/dataset/crowd-nav-data/data/scene_datasets/ \
        EVAL_CKPT_PATH_DIR $PATH_TO_CHECKPOINT \
        TASK_CONFIG.DATASET.SPLIT val
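
    For example, to evaluate the most recent checkpoint produced by the training scripts above, you could set the variable as follows (this assumes the default .pth checkpoint naming of habitat-baselines and the CHECKPOINT_FOLDER used earlier):

    PATH_TO_CHECKPOINT=$(ls -t checkpoints/*.pth | head -n 1)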

PointNav

Follow the instructions from Habitat-Lab. First, you need to acquire the HM3D PointNav dataset from the link.

We still use the forked version of Habitat-Lab. To train on a single machine, use the following script from the habitat-lab directory:

    python -u -m habitat_baselines.run \
    --config-name=pointnav/baseline_<name>.yaml

To test on a single machine, use the following script from the habitat-lab directory:

    python -u -m habitat_baselines.run \
    --config-name=pointnav/baseline_<name>.yaml \
    habitat_baselines.evaluate=True
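
For instance, with the DD-PPO baseline mentioned earlier, training would look like the following (the exact config file name is an assumption; check the pointnav config folder in the forked Habitat-Lab for the available names). Appending habitat_baselines.evaluate=True switches it to evaluation, as shown above.

    python -u -m habitat_baselines.run \
    --config-name=pointnav/baseline_ddppo.yaml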

ImageNav

Follow the instructions from Habitat-Lab. First, you need to acquire the HM3D_v0.2 instance image goal navigation dataset from the link. Note that you need to download HM3D_v0.2 for the ImageNav benchmark.

Similar to the above task, we just need to change the config to instance_imagenav:

    python -u -m habitat_baselines.run \
    --config-name=instance_imagenav/baseline_<name>.yaml

To test on a single machine, use the following script from the habitat-lab directory:

    python -u -m habitat_baselines.run \
    --config-name=instance_imagenav/baseline_<name>.yaml \
    habitat_baselines.evaluate=True

Acknowledgments

We thank the teams behind the Habitat-Matterport3D dataset, Habitat-Challenge-2022, and Habitat-Lab.
