Wenbo Hu, Xin Chen, Yan Gao-Tian, Yihe Deng, Nanyun Peng, Kai-Wei Chang
📑 Paper | 📖 arXiv | 🌐 Homepage | 🤗 Model (Coming)
We present OpenVLThinkerV2, a robust, general-purpose multimodal model for diverse visual understanding tasks. Our model is trained with G2RPO, a novel RL training objective that replaces linear scaling with non-linear distributional matching. By enforcing a Gaussian topology, G2RPO provides 1) intrinsic robustness to outliers, 2) symmetric updates for positive and negative rewards, and 3) uniform variance across diverse tasks. We further introduce task-level response-length and entropy shaping mechanisms to balance perception and multi-step reasoning. These dynamic bounds encourage early response-length convergence and effectively prevent both entropy collapse and entropy explosion. Our model obtains significant performance gains over the Qwen3-VL-Instruct-8B baseline across diverse visual tasks.
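To make the "non-linear distributional matching" idea concrete, the sketch below maps a group of rewards onto a standard Gaussian by rank. This rank-quantile transform is our illustrative assumption of the general technique, not the paper's exact G2RPO formulation; it shows why such a transform is outlier-robust, symmetric, and uniform-variance compared with linear `(r - mean) / std` scaling.

```python
from statistics import NormalDist

def gaussian_group_advantage(rewards):
    """Map a group of scalar rewards onto a standard Gaussian by rank.

    Illustrative sketch (assumption, not the paper's exact algorithm):
    each reward is replaced by the Gaussian quantile of its rank, so an
    extreme outlier cannot blow up the advantage scale, positive and
    negative tails are treated symmetrically, and the advantage variance
    is the same regardless of the task's raw reward distribution.
    """
    n = len(rewards)
    order = sorted(range(n), key=lambda i: rewards[i])
    adv = [0.0] * n
    for rank, i in enumerate(order):
        # Map rank to a quantile in (0, 1), then through the inverse normal CDF.
        q = (rank + 0.5) / n
        adv[i] = NormalDist().inv_cdf(q)
    return adv
```

Note that a reward of 100 and a reward of 2 receive the same advantage if they occupy the same rank, which is exactly the outlier robustness that linear scaling lacks.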
- [Coming!] 📝 We will release the checkpoint of OpenVLThinkerV2 after the model trained with Cold-Start SFT. Our current results can be achieved by directly RL from Qwen3-VL-8B. Stay tuned for our stronger version!
- [2026-04-10] 🔥 We release the example training and validation data in the data folder.
- [2026-04-10] 🔥 We release the training and evaluation code.
- [2026-04-10] 🔥 We release the paper of OpenVLThinkerV2.
```bash
git clone https://github.com/uclanlp/OpenVLThinker.git
cd OpenVLThinker
conda create -n easyr1 python=3.11
conda activate easyr1
cd EasyR1
pip install -e .
```
For more details on the RL environment installation, please refer to EasyR1.
```bash
bash ./EasyR1/local_scripts/run_g2rpo_rl_slurm.sh
```
We provide example training and validation sample data here. The original images in the training data can be found in this work.
Furthermore, our training process supports multi-task validation with separate scores for each task. To add more validation datasets for various tasks, please add them here and update your task keys in this file.
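The per-task scoring above can be sketched as a simple group-by-key average. The field names `task` and `score` below are illustrative assumptions, not the repo's exact data schema:

```python
from collections import defaultdict

def per_task_scores(samples):
    """Aggregate validation scores separately for each task key.

    Sketch of multi-task validation (field names are assumptions):
    samples are grouped by their task key and averaged per group, so
    each task reports its own score instead of one pooled number.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for sample in samples:
        sums[sample["task"]] += sample["score"]
        counts[sample["task"]] += 1
    return {task: sums[task] / counts[task] for task in sums}
```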
Since OpenVLThinkerV2 shares the same architecture as Qwen3-VL-8B, it naturally supports easy and efficient inference.
We adopt VLMEvalKit for most of our evaluation. For the grounding task, we follow the evaluation scripts in OneThinker. Please follow them for specific evaluation setups.
Please refer to core_algos.py:
```python
@register_adv_estimator(AdvantageEstimator.GS_GRPO)
def compute_pertask_gaussian_outcome_advantage_grpo
```
We also support our Gaussian Advantage Normalization method in GDPO; please see Gaussian (GS) GDPO:
```python
@register_adv_estimator(AdvantageEstimator.GS_GDPO)
def compute_pertask_gaussian_outcome_advantage_gdpo
```
These can be changed in the config file.
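Switching between the two estimators in the config might look like the following. The `algorithm.adv_estimator` key follows the usual EasyR1/verl config convention; treat the exact key path and value spellings as assumptions and check them against your local config file:

```yaml
# Sketch of an EasyR1-style config fragment (key names are assumptions)
algorithm:
  adv_estimator: gs_grpo   # or gs_gdpo for Gaussian (GS) GDPO
```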
If you find our work helpful for your research, please consider citing our work.
@article{hu2026openvlthinkerv2generalistmultimodalreasoning,
title={OpenVLThinkerV2: A Generalist Multimodal Reasoning Model for Multi-domain Visual Tasks},
author={Wenbo Hu and Xin Chen and Yan Gao-Tian and Yihe Deng and Nanyun Peng and Kai-Wei Chang},
year={2026},
journal={arXiv preprint arXiv:2604.08539},
url={https://arxiv.org/abs/2604.08539},
}
OpenVLThinkerV2 is licensed under the Apache License 2.0.
We sincerely appreciate the contributions of the open-source community. The related projects are as follows: EasyR1, verl, VLMEvalKit, OneThinker.


