AKGVP

Aligning Knowledge Graph with Visual Perception for Object-goal Navigation (ICRA 2024)

supplementary.material.mp4

Setup

  • Clone the repository and move into the top-level directory: cd AKGVP
  • Create the conda environment: conda env create -f environment.yml
  • Activate the environment: conda activate akgvp
  • Our dataset settings follow previous works; please refer to HOZ and L-sTDE for the AI2-THOR dataset.
  • After placing the dataset, use CLIP to generate the image features: python create_image_feat.py (see the sketch after this list).
  • For zero-shot navigation, enable lines 70-73 in runners/a3c_train.py; this filters certain target categories out of training.
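The block below is a rough, hedged sketch of what the CLIP feature-extraction step can look like, assuming the openai/CLIP package, a flat folder of observation images, and one pooled ViT-B/32 embedding per image; the actual create_image_feat.py may store spatial feature maps instead and use a different directory and key layout.

# Sketch only: the image directory, backbone, and HDF5 layout are assumptions,
# not the exact logic of create_image_feat.py.
import glob
import os

import clip          # pip install git+https://github.com/openai/CLIP.git
import h5py
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)   # assumed backbone

image_dir = "data/observations"        # hypothetical input directory
output_path = "clip_featuremap.hdf5"   # matches --images-file-name below

with h5py.File(output_path, "w") as f, torch.no_grad():
    for path in sorted(glob.glob(os.path.join(image_dir, "*.png"))):
        image = preprocess(Image.open(path)).unsqueeze(0).to(device)
        feat = model.encode_image(image).squeeze(0).cpu().numpy()
        f.create_dataset(os.path.basename(path), data=feat)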

Training and Evaluation

Train the AKGVP model

python main.py \
      --title AKGVPModel \
      --model AKGVPModel \
      --workers 4 \
      --gpu-ids 0 \
      --images-file-name clip_featuremap.hdf5
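Before starting a run, it can help to confirm that the feature file passed via --images-file-name is readable. The snippet below is a small sanity check and not part of the repository; the one-dataset-per-key layout is an assumption.

# Quick sanity check on the CLIP feature file; the key layout is assumed.
import h5py

with h5py.File("clip_featuremap.hdf5", "r") as f:
    keys = list(f.keys())
    print(f"{len(keys)} feature entries")
    if keys:
        print("example key:", keys[0], "shape:", f[keys[0]].shape)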

Evaluate the AKGVP model

python full_eval.py \
        --title AKGVPModel \
        --model AKGVPModel \
        --results-json AKGVPModel.json \
        --gpu-ids 0 \
        --images-file-name clip_featuremap.hdf5 \
        --save-model-dir trained_models
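Evaluation writes its summary to the file given by --results-json. A minimal way to inspect it afterwards is sketched below; the metric names inside the JSON (e.g. success rate, SPL) are whatever full_eval.py emits and are not assumed here.

# Print the evaluation summary produced by full_eval.py.
# No assumption is made beyond the file being valid JSON.
import json
from pprint import pprint

with open("AKGVPModel.json") as f:
    results = json.load(f)
pprint(results)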

Visualization

python visualization.py
