
Beyond Literal Descriptions: Understanding and Locating Open-World Objects Aligned with Human Intentions

Wenxuan Wang, Yisi Zhang, Xingjian He, Yichen Yan, Zijia Zhao, Xinlong Wang, Jing Liu

Paper PDF Project Page

🚩 Updates

Welcome! Please check this repository for the latest updates.

[2024.2.17] : Released our paper on arXiv.

[ ] : Release our data and baselines.

🌕 Abstract

In this work, we take a step further toward intention-driven visual-language (V-L) understanding. To advance classic visual grounding (VG) toward human intention interpretation, we propose a new intention-driven visual grounding (IVG) task and build the largest-scale IVG dataset to date, named IntentionVG, with free-form intention expressions. Since practical agents need to move around and locate specific targets across diverse scenarios to accomplish grounding, our IVG task and IntentionVG dataset take the crucial properties of both multi-scenario perception and egocentric views into account. In addition, we set up various types of models as baselines for the IVG task. Extensive experiments on our IntentionVG dataset and baselines demonstrate the necessity and efficacy of our method for the V-L field. To foster future research in this direction, our newly built dataset and baselines will be made publicly available.


🌖 Intention-Driven Visual Grounding (IVG) Task

🌗 Data Collection Engine & IntentionVG Dataset

🌘 Baseline Constructions

🌑 Results

🚀 Citation

If you use our data or baseline models in your work or find them helpful, please cite the corresponding paper:

@article{wang2024beyond,
  title={Beyond Literal Descriptions: Understanding and Locating Open-World Objects Aligned with Human Intentions},
  author={Wang, Wenxuan and Zhang, Yisi and He, Xingjian and Yan, Yichen and Zhao, Zijia and Wang, Xinlong and Liu, Jing},
  journal={arXiv preprint arXiv:2402.11265},
  year={2024}
}

🏭 Acknowledgement

This work builds on many excellent research works and open-source projects; many thanks to all the authors for sharing!

1. EVA-CLIP

2. Qwen-VL

3. MiniGPT-4

About

This repo holds the official code and data for "Beyond Literal Descriptions: Understanding and Locating Open-World Objects Aligned with Human Intentions".
