
NAVER: A Neuro-Symbolic Compositional Automaton for Visual Grounding with Explicit Logic Reasoning

This repository is the official implementation of the paper NAVER: A Neuro-Symbolic Compositional Automaton for Visual Grounding with Explicit Logic Reasoning, accepted at ICCV 2025.

Release

  • [2025/06/28] 🔥 NAVER code is open-sourced on GitHub.
  • [2025/06/25] 🎉 The NAVER paper is accepted at ICCV 2025.

TODOs

We're working on the following TODOs:

  • GUI demo.
  • Support more LLMs.
  • Video demo & presentation slides.

Installation

Requirements

  • Python >= 3.10
  • conda

Please follow the instructions below to install the required packages and set up the environment.

1. Clone this repository.

git clone https://github.com/ControlNet/NAVER
cd NAVER

2. Set up the conda environment and install dependencies.

Option 1: Using pixi (recommended):

pixi install
pixi shell

Option 2: Building from source (you may need to set up CUDA and PyTorch manually):

conda install conda-forge/label/rust_dev::rust=1.78 -c conda-forge -y
pip install "git+https://github.com/scallop-lang/scallop.git@f8fac18#egg=scallopy&subdirectory=etc/scallopy"
pip install -e .
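
If the build succeeds, a quick sanity check (a minimal sketch, assuming the package installs under the module name scallopy as the pip command above suggests) is to import it:

python -c "import scallopy"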

3. Configure the environment

Edit the .env file, or set the variables in your shell, to configure the environment.

OPENAI_API_KEY=your-api-key  # if you want to use OpenAI LLMs
AZURE_OPENAI_URL= # if you want to use Azure OpenAI LLMs
OLLAMA_HOST=http://ollama.server:11434  # if you want to use your Ollama server for Llama or DeepSeek
# do not change this TORCH_HOME variable
TORCH_HOME=./pretrained_models
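
Alternatively, you can export the same variables in your shell before running the commands below, for example:

export OPENAI_API_KEY=your-api-key
export TORCH_HOME=./pretrained_models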

4. Download the pretrained models

Run the script below to download the pretrained models into the ./pretrained_models directory.

python -m hydra_vl4ai.download_model --base_config config/refcoco.yaml --model_config config/model_config.yaml --extra_packages naver.tool

Inference

You may need about 28 GB of VRAM to run NAVER. Consider editing ./config/model_config.yaml to distribute the models across multiple GPUs.

Inference with a single image and query

python demo_cli.py \
  --image <IMAGE_PATH> \
  --query <QUERY> \
  --base_config <YOUR-CONFIG-DIR> \
  --model_config <MODEL-CONFIG-PATH>
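
For example, assuming the RefCOCO configs used in the download step above, with a hypothetical image path and query:

python demo_cli.py \
  --image ./example.jpg \
  --query "the dog on the left" \
  --base_config config/refcoco.yaml \
  --model_config config/model_config.yaml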

The result will be printed in the console.

Inference on a dataset

python main.py \
  --data_root <YOUR-DATA-ROOT> \
  --base_config <YOUR-CONFIG-DIR> \
  --model_config <MODEL-CONFIG-PATH>
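
For example, with a hypothetical data root (adjust to where your dataset lives) and the same RefCOCO configs:

python main.py \
  --data_root ./data \
  --base_config config/refcoco.yaml \
  --model_config config/model_config.yaml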

The inference results are saved in the ./result directory for evaluation.

Evaluation

python evaluate.py --input <RESULT_JSONL_PATH>
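
For example, pointing at a result file produced by the previous step (the filename below is a hypothetical placeholder):

python evaluate.py --input ./result/refcoco.jsonl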

The evaluation results will be printed in the console. Note that LLM outputs are non-deterministic, so the evaluation results may differ slightly from those reported in the paper.

Citation

If you find this work useful for your research, please consider citing it:

@article{cai2025naver,
  title = {NAVER: A Neuro-Symbolic Compositional Automaton for Visual Grounding with Explicit Logic Reasoning},
  author = {Cai, Zhixi and Ke, Fucai and Jahangard, Simindokht and Garcia de la Banda, Maria and Haffari, Reza and Stuckey, Peter J. and Rezatofighi, Hamid},
  journal = {arXiv preprint arXiv:2502.00372},
  year = {2025},
}
