OpenScan: A Benchmark for Generalized Open-Vocabulary 3D Scene Understanding

Youjun Zhao1, Jiaying Lin1, Shuquan Ye1, Qianshi Pang2, Rynson W.H. Lau1
1City University of Hong Kong, 2South China University of Technology

AAAI 2026

OpenScan is a novel benchmark that facilitates comprehensive evaluation of the generalization ability of 3D scene understanding models on abstract object attributes. It expands the single category of object classes in ScanNet200 into eight linguistic aspects of object-related attributes.



News

  • 8 Nov 2025: OpenScan is accepted by AAAI 2026. 🎉
  • 18 Oct 2024: Released the evaluation code of the OpenScan benchmark. 💻
  • 27 Aug 2024: Released the validation set of the OpenScan benchmark. 🧩
  • 20 Aug 2024: OpenScan released on arXiv. 📝

Benchmark Installation

To download the OpenScan benchmark data, we provide the raw validation set and the label mapping file, both hosted on OneDrive.

You can also download the processed validation set from OneDrive.

Benchmark Format

```json
{
    "scene_id":    "scene0011_00",
    "object_id":   "0",
    "object_name": "chair",
    "material":    "wood",
    "affordance":  "sleep",
    "property":    "soft",
    "type":        "source of illumination",
    "manner":      "steered by handlebars",
    "synonyms":    "bedside table",
    "requirement": "water and sun",
    "element":     "88 keys"
},
```
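Each record above annotates one object with up to eight attribute fields. A minimal sketch of loading the validation set and grouping annotations per scene (the filename `openscan_val.json` and the inline record are illustrative assumptions, not the actual release files):

```python
import json
from collections import defaultdict

# In practice, load the downloaded validation file, e.g.:
#   with open("openscan_val.json") as f:  # hypothetical filename
#       records = json.load(f)
# Here we inline one record in the documented format instead.
records = [
    {
        "scene_id": "scene0011_00",
        "object_id": "0",
        "object_name": "chair",
        "material": "wood",
        "affordance": "sleep",
        "property": "soft",
    },
]

# Group annotated objects by scene for per-scene evaluation.
by_scene = defaultdict(list)
for rec in records:
    by_scene[rec["scene_id"]].append(rec)

print(len(by_scene["scene0011_00"]))  # objects annotated in this scene
```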

Evaluation

1. Quick Evaluation on Your Codebase

If your codebase already supports evaluation for the ScanNet or ScanNet200 benchmarks, you can easily adapt it for the OpenScan benchmark by changing the ground truth (GT) labels and label mapping files.

  • Download the processed validation set and the label mapping file for the OpenScan benchmark from Benchmark Installation.

  • Place the processed OpenScan validation set into your GT file directory.

  • Replace your existing label mapping scripts with the OpenScan label mapping file (e.g., replace `SCANNET_LABELS` and `SCANNET_IDS`).

  • Run your evaluation process.
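The label-swap step above can be sketched as follows. The variable names `SCANNET_LABELS` and `SCANNET_IDS` come from typical ScanNet200 evaluation code as mentioned above; the label values below are illustrative placeholders, not the contents of the real mapping file:

```python
# Original ScanNet-style mapping (placeholder values for illustration).
SCANNET_LABELS = ("wall", "chair", "floor")
SCANNET_IDS = (1, 2, 3)

# Attribute vocabulary from the OpenScan label mapping file
# (again, placeholder values: material / affordance / property terms).
OPENSCAN_LABELS = ("wood", "sleep", "soft")
OPENSCAN_IDS = (1, 2, 3)

# Point the evaluator at the OpenScan vocabulary instead.
SCANNET_LABELS, SCANNET_IDS = OPENSCAN_LABELS, OPENSCAN_IDS

# Downstream code can then resolve predicted labels to evaluation IDs.
label_to_id = dict(zip(SCANNET_LABELS, SCANNET_IDS))
print(label_to_id["wood"])  # → 1
```

Since the IDs stay aligned with the labels, the rest of the ScanNet200 evaluation pipeline runs unchanged.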

2. Evaluation on Existing 3D Scene Understanding Baselines

If you want to evaluate OpenMask3D, SAI3D, MaskClustering, or Open3DIS on the OpenScan benchmark, first clone this repository and then follow the baseline-specific setup instructions.

Citation 🙏

```bibtex
@article{zhao2024openscan,
  title={OpenScan: A Benchmark for Generalized Open-Vocabulary 3D Scene Understanding},
  author={Zhao, Youjun and Lin, Jiaying and Ye, Shuquan and Pang, Qianshi and Lau, Rynson WH},
  journal={arXiv preprint arXiv:2408.11030},
  year={2024}
}
```
