OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation

Zhening Huang · Xiaoyang Wu · Xi Chen · Hengshuang Zhao · Lei Zhu · Joan Lasenby

TL;DR: OpenIns3D proposes a "mask-snap-lookup" scheme for 2D-input-free 3D open-world scene understanding, attaining SOTA performance across datasets even with fewer input prerequisites. 🚀✨

Teaser figure: example open-vocabulary queries such as "device to watch BBC news", "furniture that is capable of producing music", "Ma Long's domain of excellence", "most comfortable area to sit in the room", "penciling down ideas during brainstorming", and "furniture that offers recreational enjoyment with friends".

Highlights

  • 2 Aug, 2024: Major update 🔥: We have released optimized and easy-to-use code for OpenIns3D to reproduce all the results in the paper and demo.
  • 1 Jul, 2024: OpenIns3D has been accepted at ECCV 2024 🎉. We will release more code for various experiments soon.
  • 6 Jan, 2024: We have released a major revision, incorporating the S3DIS and ScanNet benchmark code. Try out the latest version.
  • 31 Dec, 2023: We release the batch inference code on ScanNet.
  • 31 Dec, 2023: We release the zero-shot inference code; test it on your own data!
  • Sep, 2023: OpenIns3D is released on arXiv, alongside an explanatory video and project page. We will release the code at the end of this year.

Overview

Installation

Please check the installation file to install OpenIns3D for:

  1. reproducing all results in the paper,
  2. testing on your own dataset

Reproducing Results

πŸ—‚οΈ Replica

πŸ”§ Data Preparation:

  1. Execute the following command to set up the Replica dataset, including scene .ply files, predicted masks, and ground truth:
sh scripts/prepare_replica.sh
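
Optionally, you can confirm the download produced scene files before moving on. A minimal sketch; data/replica is an assumed location, so adjust it to wherever prepare_replica.sh actually places the data:

    from pathlib import Path

    # Assumed layout: point this at the directory prepare_replica.sh creates.
    root = Path("data/replica")
    scenes = sorted(root.rglob("*.ply"))
    print(f"Found {len(scenes)} scene .ply files under {root}")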

📊 Open Vocabulary Instance Segmentation:

python openins3d/main.py --dataset replica --task OVIS --detector yoloworld

📈 Results Log:

| Task | AP | AP50 | AP25 | Log |
| --- | --- | --- | --- | --- |
| Replica OVIS (in paper) | 13.6 | 18.0 | 19.7 | |
| Replica OVIS (this code) | 15.4 | 19.5 | 25.2 | log |

πŸ—‚οΈ ScanNet

πŸ”§ Data Preparation:

  1. Make sure you have completed the form on ScanNet to obtain access.
  2. Place the download-scannet.py script into the scripts directory.
  3. Run the following command to download all _vh_clean_2.ply files for validation sets, as well as instance ground truth, GT-masks, and detected masks:
sh scripts/prepare_scannet.sh
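
Since step 2 requires the official downloader to sit in scripts/, a small guard before launching the long download can help. A minimal sketch using only the path stated above:

    from pathlib import Path

    # prepare_scannet.sh expects the official ScanNet downloader here (see step 2).
    downloader = Path("scripts/download-scannet.py")
    if not downloader.exists():
        raise FileNotFoundError("Place download-scannet.py into scripts/ before running prepare_scannet.sh")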

📊 Open Vocabulary Object Recognition:

python openins3d/main.py --dataset scannet --task OVOR --detector odise

📈 Results Log:

| Task | Top-1 Accuracy | Log |
| --- | --- | --- |
| ScanNet OVOR (in paper) | 60.4 | |
| ScanNet OVOR (this code) | 64.2 | log |

📊 Open Vocabulary Object Detection:

python openins3d/main.py --dataset scannet --task OVOD --detector odise

📊 Open Vocabulary Instance Segmentation:

python openins3d/main.py --dataset scannet --task OVIS --detector odise

📈 Results Log:

| Task | AP | AP50 | AP25 | Log |
| --- | --- | --- | --- | --- |
| ScanNet OVOD (in paper) | 17.8 | 28.3 | 36.0 | |
| ScanNet OVOD (this code) | 20.7 | 29.9 | 39.7 | log |
| ScanNet OVIS (in paper) | 19.9 | 28.7 | 38.9 | |
| ScanNet OVIS (this code) | 23.3 | 34.6 | 42.6 | log |

πŸ—‚οΈ S3DIS

πŸ”§ Data Preparation:

  1. Make sure you have completed the form on S3DIS to obtain access.
  2. Then, run the following command to acquire scene .ply files, predicted masks, and ground truth:
sh scripts/prepare_s3dis.sh

📊 Open Vocabulary Instance Segmentation:

python openins3d/main.py --dataset s3dis --task OVIS --detector odise

📈 Results Log:

| Task | AP | AP50 | AP25 | Log |
| --- | --- | --- | --- | --- |
| S3DIS OVIS (in paper) | 21.1 | 28.3 | 29.5 | |
| S3DIS OVIS (this code) | 22.9 | 29.0 | 31.4 | log |

πŸ—‚οΈ STPLS3D

πŸ”§ Data Preparation:

  1. Make sure you have completed the form STPLS3D to gain access.
  2. Then, run the following command to obtain scene .ply files, predicted masks, and ground truth:
sh scripts/prepare_stpls3d.sh

📊 Open Vocabulary Instance Segmentation:

python openins3d/main.py --dataset stpls3d --task OVIS --detector odise

📈 Results Log:

| Task | AP | AP50 | AP25 | Log |
| --- | --- | --- | --- | --- |
| STPLS3D OVIS (in paper) | 11.4 | 14.2 | 17.2 | |
| STPLS3D OVIS (this code) | 15.3 | 17.3 | 17.4 | log |

Replacing Snap with RGBD

We also evaluate the performance of OpenIns3D when the Snap module is replaced with the original RGBD images, keeping the rest of the design intact.

πŸ—‚οΈ Replica

πŸ”§ Data Preparation

  1. Download the Replica dataset and RGBD images:
sh scripts/prepare_replica.sh
sh scripts/prepare_replica2d.sh

📊 Open Vocabulary Instance Segmentation

python openins3d/main.py --dataset replica --task OVIS --detector yoloworld --use_2d true
python openins3d/main.py --dataset scannet200 --task OVIS --detector yoloworld --use_2d true

📈 Results Log

| Method | AP | AP50 | AP25 | Log |
| --- | --- | --- | --- | --- |
| OpenMask3D | 13.1 | 18.4 | 24.2 | |
| Open3DIS | 18.5 | 24.5 | 28.2 | |
| OpenIns3D | 21.1 | 26.2 | 30.6 | log |

Zero-Shot Inference with Single Vocabulary

We demonstrate how to perform single-vocabulary instance segmentation, similar to the teaser image in the paper. The key new feature is a CLIP ranking-and-filtering module that reduces false-positive results. (It works best with RGBD but also works reasonably well with Snap.)
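
To make the idea concrete, here is what CLIP-based ranking and filtering of snapshot crops can look like. This is an illustrative sketch, not the repository's implementation: it assumes the transformers and Pillow packages are installed, and crop_paths is a hypothetical list of 2D snapshots, one per mask proposal:

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Illustrative CLIP ranking/filtering sketch; not the repo's actual module.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    query = "most comfortable area to sit in the room"  # the single target vocabulary
    prompts = [query, "an unrelated object"]            # foil prompt to contrast against

    crop_paths = ["crop_0.png", "crop_1.png"]  # hypothetical snapshot crops of mask proposals
    kept = []
    for path in crop_paths:
        inputs = processor(text=prompts, images=Image.open(path),
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(**inputs).logits_per_image  # shape: (1, len(prompts))
        score = logits.softmax(dim=-1)[0, 0].item()    # probability mass on the query
        if score > 0.5:                                # filter out likely false positives
            kept.append((path, score))

    kept.sort(key=lambda item: -item[1])               # rank survivors by CLIP confidence
    print(kept)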

Quick Start:

  1. 📥 Download the demo dataset by running:

    sh scripts/prepare_demo_single.sh
  2. 🚀 Run the model by executing:

    python zero_shot_single_voc.py

You can now view results like the teaser image in 2D or 3D.


Zero-Shot Inference with Multiple Vocabularies

ℹ️ Note: Ensure you have installed the mask module according to the installation guide; it is not required for reproducing the results above, but it is needed here.

To perform zero-shot scene understanding:

  1. 📥 Download the scannet200_val.ckpt checkpoint from this link and place it in the third_party/ directory.

  2. 🚀 Run the model by executing python zero_shot.py and specify:

    • 🗂️ pcd_path: The path to the colored point cloud file.
    • 📝 vocab: A list of vocabulary terms to search for.

You can also use the following script to automatically set up the scannet200_val.ckpt checkpoint and download some sample 3D scans:

sh scripts/prepare_zero_shot.sh
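
If you run on your own scan, it is worth checking that pcd_path really points to a colored point cloud before inference. A minimal sketch, assuming the open3d package is installed (it is not a requirement stated in this README):

    import open3d as o3d

    # Sample scene downloaded by prepare_zero_shot.sh; swap in your own .ply path.
    pcd = o3d.io.read_point_cloud("data/demo_scenes/demo_scene_1.ply")
    print(f"{len(pcd.points)} points; has colors: {pcd.has_colors()}")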

🚀 Running Zero-Shot Inference

To perform zero-shot inference using the sample dataset (default with Replica vocabulary), run:

python zero_shot_multi_vocs.py --pcd_path data/demo_scenes/demo_scene_1.ply

📂 Results are saved under output/snap_demo/demo_scene_1_vis/image.
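
The saved renders can be browsed programmatically; the output path below is the one stated above:

    from pathlib import Path

    # Visualization images written by zero_shot_multi_vocs.py for demo_scene_1.
    out_dir = Path("output/snap_demo/demo_scene_1_vis/image")
    for img in sorted(out_dir.iterdir()):
        print(img.name)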

To use a different 2D detector (🔍 ODISE works better on point-cloud-rendered images):

python zero_shot_multi_vocs.py --pcd_path data/demo_scenes/demo_scene_2.ply --detector yoloworld

πŸ“ Custom Vocabulary: If you want to specify your own vocabulary list, add it with the --vocab flag as follows:

python zero_shot_multi_vocs.py \
--pcd_path 'data/demo_scenes/demo_scene_4.ply' \
--vocab "drawers" "lower table"

Citation

If you find OpenIns3D and this codebase useful for your research, please cite our work as a form of encouragement. 😊

@inproceedings{huang2024openins3d,
  title={OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation},
  author={Zhening Huang and Xiaoyang Wu and Xi Chen and Hengshuang Zhao and Lei Zhu and Joan Lasenby},
  booktitle={European Conference on Computer Vision},
  year={2024}
}

Acknowledgement

The mask proposal model is modified from Mask3D, and we relied heavily on its easy-setup version for the Mask Proposal Module (MPM). Thanks again for the great work! 🙌 We also drew inspiration from LAR and ContrastiveSceneContexts when developing the code. 🚀