Habitat Rearrangement Challenge 2022

This repository contains the starter code for the Habitat 2022 rearrangement challenge, and training and evaluation setups. For an overview of habitat-challenge, visit aihabitat.org/challenge/2022_rearrange.

Task: Object Rearrangement

In the object rearrangement task, a Fetch robot is randomly spawned in an unknown environment and asked to rearrange one object from an initial to a desired position – picking it up from and placing it onto receptacles (counter, sink, sofa, table), and opening/closing containers (drawers, fridges) as necessary. No map of the environment is provided; the agent must use only its sensory input to navigate and rearrange.

The Fetch robot is equipped with an egocentric 256x256 90-degree FoV RGBD camera on the robot head. The agent also has access to idealized base-egomotion giving the relative displacement and angle of the base since the start of the episode. Additionally, the robot has proprioceptive joint sensing providing access to the current robot joint angles.
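For concreteness, here is a hedged sketch of how these sensors typically surface in the observations dictionary. The keys below (robot_head_rgb, robot_head_depth, joint) follow Habitat-Lab's Fetch sensor naming and are assumptions to confirm against the challenge config:

    # Illustrative only: confirm the exact observation keys against the
    # challenge task config; these names follow Habitat-Lab's Fetch conventions.
    def summarize_observations(observations):
        rgb = observations["robot_head_rgb"]      # (256, 256, 3) uint8 head camera
        depth = observations["robot_head_depth"]  # (256, 256, 1) float depth image
        joints = observations["joint"]            # current arm joint angles
        print(rgb.shape, depth.shape, joints.shape)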

For details about the agent, dataset, and evaluation, see the challenge website: aihabitat.org/challenge/2022_rearrange.

If you have any issues, open a GitHub issue on this repository.

Participation Guidelines

Participate in the contest by registering on the soon-to-be-released EvalAI page and creating a team. Participants will upload Docker containers with their agents, which are evaluated on an AWS GPU-enabled instance. Before pushing a submission for remote evaluation, participants should test the submission Docker image locally to ensure it works. Instructions for training, local evaluation, and online submission are provided below.

Installing Habitat-Sim and Downloading Data

First, set up Habitat-Sim in a new conda environment so that you can download the datasets and evaluate your models locally.

  1. Prepare your conda env:

    # We require python>=3.7 and cmake>=3.10
    conda create -n habitat python=3.7 cmake=3.14.0
    conda activate habitat
  2. Install Habitat-Sim using our custom Conda package for habitat challenge 2022 with:

    conda install -y habitat-sim-rearrange-challenge-2022 withbullet headless -c conda-forge -c aihabitat
    

    On MacOS, omit the headless argument.
    Note: If you face any issues related to the GLIBCXX version after the conda installation, uninstall this conda package and install the habitat-sim repository from source instead (more information here). Make sure you use the hab2_challenge_2022 tag, not the stable branch, for your installation.
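    To sanity-check the installation before moving on, try importing the simulator from the habitat environment; a GLIBCXX problem will surface here as an ImportError:

    python -c "import habitat_sim"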

  3. Clone the challenge repository:

    git clone -b rearrangement-challenge-2022 https://github.com/facebookresearch/habitat-challenge.git
    cd habitat-challenge
  4. Download the episode datasets, scenes, and all other assets with

    python -m habitat_sim.utils.datasets_download --uids rearrange_task_assets --data-path <path to download folder>
    

    If this step was successful, you should see the train, val and minival splits in the <path to download folder>/datasets/replica_cad/rearrange/v1/{train, val, minival} folders respectively.

  5. Now, create a symlink to the downloaded data in your habitat-challenge repository:

    ln -s <absolute path to download folder> data
    

Local Docker Evaluation

In these steps, we will evaluate a sample agent in Docker. We evaluate in Docker because EvalAI requires submitting a Docker image to run your agent on the leaderboard. Since these steps depend on nvidia-docker v2, they will only run on Linux; no Windows or MacOS.

  1. Implement your own agent or try one of ours. We provide an agent in agents/random_agent.py that takes random actions.
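    For orientation, here is a minimal structural sketch of such an agent, assuming the habitat.Agent and habitat.Challenge interface used by the starter agents (the returned action is a hypothetical placeholder; see agents/random_agent.py for real action-space sampling):

    import habitat

    class MyAgent(habitat.Agent):
        def reset(self):
            # Called at the start of every episode; nothing to clear here.
            pass

        def act(self, observations):
            # Hypothetical placeholder: a real agent maps observations to an
            # action dict sampled from the task's action space.
            return {"action": "EMPTY", "action_args": {}}

    if __name__ == "__main__":
        # habitat.Challenge drives the evaluation loop invoked by test_local.sh.
        challenge = habitat.Challenge(eval_remote=False)
        challenge.submit(MyAgent())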

  2. Install nvidia-docker v2 following instructions here: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker. Note: only supports Linux; no Windows or MacOS.

  3. Modify the provided Dockerfile if you need customizations. Suppose your code needs torch==1.9.0; such dependencies should be pip-installed inside the Docker conda environment called habitat. Below is an example Dockerfile that pip-installs custom dependencies.

    FROM fairembodied/habitat-challenge:habitat_rearrangement_2022_base_docker
    
    # install dependencies in the habitat conda environment
    RUN /bin/bash -c ". activate habitat; pip install torch==1.9.0"
    
    ADD agent.py /agent.py

    Build your docker container using:

    docker build . --file docker/hab2.Dockerfile -t rearrange_submission

    Note #1: you may need sudo privileges to run this command.

    Note #2: Please make sure your local copy of the fairembodied/habitat-challenge:habitat_rearrangement_2022_base_docker image stays up to date with the image we host on Docker Hub. This can be done by pruning all cached images:

    docker system prune -a
    
  4. Evaluate your docker container locally:

    bash ./scripts/test_local.sh --docker-name rearrange_submission

    If the above command runs successfully you will get an output similar to:

    2022-07-14 22:03:05,811 Initializing dataset RearrangeDataset-v0
    2022-07-14 22:03:05,811 Rearrange task assets are not downloaded locally, downloading and extracting now...
    2022-07-14 22:03:05,812 Downloaded and extracted the data.
    2022-07-14 22:03:05,818 initializing sim RearrangeSim-v0
    2022-07-14 22:03:06,214 Initializing task RearrangeCompositeTask-v0
    2022-07-14 22:03:08,822 object_to_goal_distance/30: 4.302241203188896
    2022-07-14 22:03:08,822 robot_force/accum: 657186.0750969499
    2022-07-14 22:03:08,823 robot_force/instant: 657193.0133517366
    2022-07-14 22:03:08,823 force_terminate: 0.05
    2022-07-14 22:03:08,823 robot_collisions/total_collisions: 0.0
    2022-07-14 22:03:08,823 robot_collisions/robot_obj_colls: 0.0
    2022-07-14 22:03:08,823 robot_collisions/robot_scene_colls: 0.0
    2022-07-14 22:03:08,823 robot_collisions/obj_scene_colls: 0.0
    2022-07-14 22:03:08,823 ee_to_object_distance/30: 4.221194922924042
    2022-07-14 22:03:08,823 does_want_terminate: 1.0
    2022-07-14 22:03:08,823 composite_success: 0.0
    2022-07-14 22:03:08,823 composite_bad_called_terminate: 1.0
    2022-07-14 22:03:08,823 num_steps: 1.0
    2022-07-14 22:03:08,823 did_violate_hold_constraint: 0.0
    2022-07-14 22:03:08,823 move_obj_reward: -0.5103676795959473
    

    Note: this same command will be run to evaluate your agent for the leaderboard. Please submit your docker for remote evaluation (below) only if it runs successfully on your local setup.

Online submission

Follow the instructions in the submit tab of the EvalAI challenge page to submit your Docker image; a sketch of the CLI push command follows the phase list below. Note that you will need EvalAI version >= 1.2.3. The challenge consists of the following phases:

  1. Minival phase: This split is used in the local evaluation scripts in this repository. The purpose of this phase/split is sanity checking -- to confirm that our remote evaluation reports the same result as the one you’re seeing locally.

  2. Test Standard phase: The purpose of this phase/split is to serve as the public leaderboard establishing the state of the art; this is what should be used to report results in papers. Each team is allowed a maximum of 10 submissions per day for this phase, but please use them judiciously. Do not overfit to the test set.

  3. Test Challenge phase: This phase/split will be used to decide the challenge winners. Each team is allowed a total of 5 submissions until the end of the challenge submission phase. The highest-performing of these 5 will be chosen automatically. Results on this split will not be made public until the announcement of final results at NeurIPS 2022.
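The push itself goes through the EvalAI CLI. A hedged sketch of the command shape, treating <phase-name> as a placeholder for the phase names listed on the EvalAI page:

    # Install the EvalAI CLI (>= 1.2.3, per the note above), then push the image
    pip install "evalai>=1.2.3"
    evalai push rearrange_submission:latest --phase <phase-name>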

Note: Your agent will be evaluated on 1000 episodes and will have a total of 48 hours to finish. Submissions are evaluated on an AWS EC2 p2.xlarge instance, which has a Tesla K80 GPU (12 GB memory), 4 CPU cores, and 61 GB RAM. If you need more time or resources to evaluate your submission, please get in touch. If you face any issues or have questions, ask them by opening an issue on this repository.

DD-PPO Training Starter Code

In this example, we will train and evaluate an end-to-end policy trained with DD-PPO. You will run all the subsequent steps from the habitat conda environment.

  1. Install Habitat-Lab. Use the rearrange_challenge_2022 branch of our GitHub repo, which can be cloned with:

    git clone --branch rearrange_challenge_2022 https://github.com/facebookresearch/habitat-lab.git
    

    Install Habitat Lab along with the included RL trainer code by first entering the habitat-lab directory, activating the habitat conda environment from step 1, and then running pip install -r requirements.txt && python setup.py develop --all.

  2. Follow this documentation for how to run DD-PPO in a single or multi-machine setup. See habitat_baselines/ddppo for more information. These commands assume habitat-lab and habitat-challenge are in the same directory. Modify the paths in the arguments if your habitat-challenge directory is located somewhere else.

    1. To run on a single machine, use the following script from the habitat-lab directory:
      #!/bin/bash
      
      export MAGNUM_LOG=quiet
      export HABITAT_SIM_LOG=quiet
      
      set -x
      python habitat_baselines/run.py \
          --exp-config ../habitat-challenge/configs/methods/ddppo_monolithic.yaml \
          --run-type train \
          BASE_TASK_CONFIG_PATH ../habitat-challenge/configs/tasks/rearrange.local.rgbd.yaml \
          TASK_CONFIG.DATASET.SPLIT 'train' \
          TASK_CONFIG.TASK.TASK_SPEC_BASE_PATH ../habitat-challenge/configs/pddl/ \
          TENSORBOARD_DIR ./tb \
          CHECKPOINT_FOLDER ./checkpoints \
          LOG_FILE ./train.log
    2. To run distributed training on a SLURM cluster, use the following script. This is not necessary, but if you have access to a cluster it can significantly speed up training. To run on multiple machines, change #SBATCH --nodes $NUM_OF_MACHINES to the number of machines, and set #SBATCH --ntasks-per-node $NUM_OF_GPUS and #SBATCH --gres $NUM_OF_GPUS to the number of GPUs to use per requested machine.
      #!/bin/bash
      #SBATCH --job-name=ddppo
      #SBATCH --output=logs.ddppo.out
      #SBATCH --error=logs.ddppo.err
      #SBATCH --gres gpu:1
      #SBATCH --nodes 1
      #SBATCH --cpus-per-task 10
      #SBATCH --ntasks-per-node 1
      #SBATCH --mem=60GB
      #SBATCH --time=12:00
      #SBATCH --signal=USR1@600
      #SBATCH --partition=dev
      
      export MAGNUM_LOG=quiet
      export HABITAT_SIM_LOG=quiet
      
      export MAIN_ADDR=$(srun --ntasks=1 hostname 2>&1 | tail -n1)
      
      set -x
      srun python -u -m habitat_baselines.run \
          --exp-config ../habitat-challenge/configs/methods/ddppo_monolithic.yaml \
          --run-type train \
          BASE_TASK_CONFIG_PATH ../habitat-challenge/configs/tasks/rearrange.local.rgbd.yaml \
          TASK_CONFIG.DATASET.DATA_PATH ../habitat-challenge/data/datasets/replica_cad/rearrange/v1/{split}/rearrange.json.gz \
          TASK_CONFIG.DATASET.SCENES_DIR ../habitat-challenge/data/replica_cad/ \
          TASK_CONFIG.DATASET.SPLIT 'train' \
          TASK_CONFIG.TASK.TASK_SPEC_BASE_PATH ../habitat-challenge/configs/pddl/ \
          TENSORBOARD_DIR ./tb \
          CHECKPOINT_FOLDER ./checkpoints \
          LOG_FILE ./train.log
  3. More instructions on how to train the DD-PPO model can be found in habitat-lab/habitat_baselines/rl/ddppo. See the corresponding README in habitat-lab for how to adjust the various hyperparameters, save locations, visual encoders and other features.

  4. Evaluate on the minival dataset for the rearrange_easy task from the command line. First, enter the habitat-challenge directory. Ensure you have the datasets installed in this directory as well; if not, run python -m habitat_sim.utils.datasets_download --uids rearrange_task_assets.

    CHALLENGE_CONFIG_FILE=configs/tasks/rearrange_easy.local.rgbd.yaml python agents/habitat_baselines_agent.py --evaluation local --input-type depth --cfg-path configs/methods/ddppo_monolithic.yaml --model-path data/models/rearrange_easy.pth
  5. We provide a Dockerfile ready to use with the DD-PPO baseline in docker/hab2_monolithic.Dockerfile. For the sake of completeness, we describe below how you can make your own Dockerfile. If you just want to test the baseline code, feel free to skip this bullet because docker/hab2_monolithic.Dockerfile is ready to use.

    1. You may want to modify docker/hab2_monolithic.Dockerfile to include torchvision or other libraries. To install torchvision, ifcfg, and tensorboard, add the following command to the Dockerfile:

      RUN /bin/bash -c ". activate habitat; pip install ifcfg torchvision tensorboard"
    2. To change which agent.py is used in the Docker image, modify the following line and replace the agent.py file with your new file:

      ADD agent.py agent.py
    3. Do not forget to add any other files you may need in the Docker image. For example, we add the data/models/rearrange_easy.pth file, which holds the saved weights from the DD-PPO example code.

    4. The scaffold for this code can be found in agents/random_agent.py, and the code for policies trained with Habitat Baselines can be found in agents/habitat_baselines_agent.py.

  6. Once your Dockerfile and other code are modified to your satisfaction, build the image with the following command.

    docker build . --file docker/hab2_monolithic.Dockerfile -t rearrange_submission
  7. To test locally, simply run bash scripts/test_local.sh --docker-name rearrange_submission. If the Docker image runs your code without errors, it should work on EvalAI. The instructions for submitting the Docker image to EvalAI are listed above.

Hierarchical RL Starter Code

First, you will need to train individual skill policies with RL. In this example we approach the rearrange_easy task by training Pick, Place, and Navigation policies and then plugging them into a hard-coded high-level controller, sketched below.
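To make the hard-coded high-level controller concrete, here is an illustrative sketch under assumed names (FixedSequenceController and the skill objects are hypothetical stand-ins; the shipped baseline in configs/methods/tp_srl.yaml implements the real version). The controller steps the current skill until it signals termination, then advances to the next skill in a fixed Navigate → Pick → Navigate → Place sequence:

    # Illustrative sketch only: each skill object stands in for a trained
    # policy exposing reset() and act(obs) -> (action, done).
    class FixedSequenceController:
        def __init__(self, skills):
            self._skills = skills  # e.g. [nav, pick, nav, place]
            self._idx = 0

        def reset(self):
            self._idx = 0
            for skill in self._skills:
                skill.reset()

        def act(self, observations):
            action, done = self._skills[self._idx].act(observations)
            if done and self._idx < len(self._skills) - 1:
                self._idx += 1  # current skill finished; advance the plan
            return action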

  1. Follow step 1 of the DD-PPO section to install Habitat-Lab and download the datasets.

  2. Steps to train the skills from scratch:

    1. Train the Pick skill. From the Habitat Lab directory, run
    python habitat_baselines/run.py \
        --exp-config habitat_baselines/config/rearrange/ddppo_pick.yaml \
        --run-type train \
        TENSORBOARD_DIR ./pick_tb/ \
        CHECKPOINT_FOLDER ./pick_checkpoints/ \
        LOG_FILE ./pick_train.log
    2. Train the Place skill. Use the exact same command as above, but replace every instance of "pick" with "place".
    3. Train the Navigation skill. Use the exact same command as above, but replace every instance of "pick" with "nav_to_obj".
    4. Copy the checkpoints for the different skills to the data/models directory in the Habitat Challenge directory (one way to do this is sketched below). There should now be three files: data/models/[nav,pick,place].pth.
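    One hypothetical way to do the copy, from the habitat-challenge directory (checkpoint filenames depend on what landed in your CHECKPOINT_FOLDER; habitat_baselines names them ckpt.N.pth, so pick whichever evaluated best):

    cp ../habitat-lab/pick_checkpoints/ckpt.0.pth data/models/pick.pth
    cp ../habitat-lab/place_checkpoints/ckpt.0.pth data/models/place.pth
    cp ../habitat-lab/nav_to_obj_checkpoints/ckpt.0.pth data/models/nav.pth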
  3. Instead of training the skills, you can also use the provided pre-trained skills. Download the skills via wget https://dl.fbaipublicfiles.com/habitat/data/baselines/v1/rearrange_habitat2022_challenge_baseline_v1.zip && unzip rearrange_habitat2022_challenge_baseline_v1.zip.

  4. Finally, evaluate the combined policies on the minival dataset for the rearrange_easy task from the command line. First, enter the habitat-challenge directory. Ensure you have the datasets installed in this directory as well; if not, run python -m habitat_sim.utils.datasets_download --uids rearrange_task_assets.

    CHALLENGE_CONFIG_FILE=configs/tasks/rearrange_easy.local.rgbd.yaml python agents/habitat_baselines_agent.py --evaluation local --input-type depth --cfg-path configs/methods/tp_srl.yaml

    Using the pre-trained skills downloaded above, you should see around a 30% success rate.

  5. Just like with the DD-PPO baseline, we provide a Dockerfile ready to use in docker/tpsrl_monolithic.Dockerfile. See the instructions in the DD-PPO section for how to modify the Dockerfile, build it, and test it.

Change Log

  • Sept 7, 2022: Fixed problem with collision threshold and concurrent rendering in the configuration files.

Citing Habitat Rearrangement Challenge 2022

Please cite the challenge and the following paper for details about the 2022 Rearrangement challenge:

@misc{habitatrearrangechallenge2022,
  title         = {Habitat Rearrangement Challenge 2022},
  author        = {Andrew Szot and Karmesh Yadav and Alex Clegg and Vincent-Pierre Berges and Aaron Gokaslan and Angel Chang and Manolis Savva and Zsolt Kira and Dhruv Batra},
  howpublished  = {\url{https://aihabitat.org/challenge/2022_rearrange}},
  year          = {2022}
}
@article{szot2021habitat,
  title={Habitat 2.0: Training home assistants to rearrange their habitat},
  author={Szot, Andrew and Clegg, Alexander and Undersander, Eric and Wijmans, Erik and Zhao, Yili and Turner, John and Maestre, Noah and Mukadam, Mustafa and Chaplot, Devendra Singh and Maksymets, Oleksandr and others},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  pages={251--266},
  year={2021}
}

Acknowledgments

The Habitat Challenge would not have been possible without the infrastructure and support of the EvalAI team.
