[Teaser video: teaser_video.mp4]
💫 Meet LingBot-VA! We've built an autoregressive (AR) diffusion framework for simultaneous world modeling and action inference! 🤖✨
LingBot-VA focuses on:
- Autoregressive Video-Action World Modeling: Architecturally unifies visual dynamics prediction and action inference within a single interleaved sequence while maintaining their conceptual distinction.
- High-Efficiency Execution: A dual-stream mixture-of-transformers (MoT) architecture with asynchronous execution and KV caching (see the conceptual sketch after this list).
- Long-Horizon Performance and Generalization: Substantial improvements in sample efficiency, long-horizon success rates, and generalization to novel scenes.
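The rollout pattern this implies can be sketched as follows. This is only a conceptual toy, not the LingBot-VA implementation: all module names, dimensions, and the "attention" over the cache are hypothetical stand-ins. It illustrates how the model alternately predicts the next visual latent and decodes an action from it, appending to a KV cache so earlier context is reused rather than recomputed.

```python
# Conceptual toy of an interleaved video-action autoregressive rollout with a
# KV cache. All names, dimensions, and the "attention" over the cache are
# hypothetical stand-ins, not the actual dual-stream MoT implementation.
import torch
import torch.nn as nn

D, A = 256, 7  # hypothetical visual-latent and action dimensions


class TinyBranch(nn.Module):
    """Stand-in for one expert stream of the MoT backbone."""

    def __init__(self, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(D, out_dim)

    def forward(self, x: torch.Tensor, kv_cache: list) -> torch.Tensor:
        kv_cache.append(x.detach())           # cache context instead of re-encoding it
        ctx = torch.stack(kv_cache).mean(0)   # toy aggregation over the cached context
        return self.proj(ctx)


video_branch, action_branch = TinyBranch(D), TinyBranch(A)
kv_cache: list[torch.Tensor] = []
obs_latent = torch.randn(1, D)                # encoded current observation

for step in range(4):                         # interleaved rollout: latent, action, latent, ...
    next_latent = video_branch(obs_latent, kv_cache)   # predict visual dynamics
    action = action_branch(next_latent, kv_cache)      # infer the action from the predicted future
    obs_latent = next_latent                            # feed the prediction back autoregressively
    print(f"step {step}: action shape {tuple(action.shape)}")
```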
- [2026-01-29] Weights and code for the shared-backbone model released! Please stay tuned for the separate-backbone version!
- Pretrained Checkpoints for Post-Training
| Model Name | Huggingface Repository | ModelScope Repository | Description |
|---|---|---|---|
| lingbot-va-base | 🤗 robbyant/lingbot-va-base | 🤖 Robbyant/lingbot-va-base | LingBot-VA w/ shared backbone |
| lingbot-va-posttrain-robotwin | 🤗 robbyant/lingbot-va-posttrain-robotwin | 🤖 Robbyant/lingbot-va-posttrain-robotwin | LingBot-VA-Posttrain-Robotwin w/ shared backbone |
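As a minimal example, assuming huggingface_hub is installed and following its standard workflow, the base checkpoint listed above can be fetched programmatically (the local directory below is arbitrary):

```python
# Download the shared-backbone checkpoint listed above from Hugging Face.
# The local directory is just an example; adjust as needed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="robbyant/lingbot-va-base",
    local_dir="checkpoints/lingbot-va-base",
)
print(f"Checkpoint files are in: {local_dir}")
```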
Requirements • Python ≥ 3.10.16 • PyTorch == 2.9.0 • CUDA 12.6
pip install torch==2.9.0 torchvision==0.24.0 torchaudio==2.9.0 --index-url https://download.pytorch.org/whl/cu126
pip install websockets einops diffusers==0.36.0 transformers==5.0.0 accelerate msgpack opencv-python matplotlib ftfy easydict
pip install flash-attn --no-build-isolation
LingBot-VA supports both standalone execution and a Server-Client architecture that separates the model environment from the simulation. By isolating dependencies, this design avoids package clashes and supports distributed inference across GPUs, clusters, and other devices.
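As an illustration of the Server-Client split, a minimal client could talk to the inference server over websockets with msgpack-serialized messages (both packages appear in the dependency list above). The address, port, and message schema below are assumptions for illustration only; the provided launch_client scripts implement the real protocol.

```python
# Minimal sketch of a client querying the inference server over websockets.
# Requires a running server; the address, port, and message schema here are
# illustrative assumptions -- the launch_client*.sh scripts handle the real protocol.
import asyncio

import msgpack
import numpy as np
import websockets


async def query_server(observation: np.ndarray) -> dict:
    async with websockets.connect("ws://localhost:8000") as ws:  # assumed endpoint
        # Serialize the observation and send it to the model process.
        await ws.send(msgpack.packb({"image": observation.tolist()}))
        # Receive the reply (an action chunk; schema assumed).
        return msgpack.unpackb(await ws.recv())


if __name__ == "__main__":
    dummy_obs = np.zeros((224, 224, 3), dtype=np.uint8)
    print(asyncio.run(query_server(dummy_obs)))
```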
Preparing the Environment
You can follow the official instructions from the original RoboTwin-2.0 repository:
https://robotwin-platform.github.io/doc/usage/robotwin-install.html
Deploying the Inference Server
# single GPU
bash evaluation/robotwin/launch_server.sh
# multi-GPU
bash evaluation/robotwin/launch_server_multigpus.sh
Executing the Inference Client
# single GPU
task_name="adjust_bottle";
save_root="results/";
bash evaluation/robotwin/launch_client.sh ${task_name} ${save_root}
# multi-GPU
save_root="results/"
task_group_id=0;
bash evaluation/robotwin/launch_client_multigpus.sh ${save_root} ${task_group_id}
The experiment results will be saved in /path/to/your/RoboTwin/${save_root}. Please note that an eval_result folder is also generated. This is a native output from RoboTwin and is identical to the contents of the results folder; it can be safely ignored.
Note that the inference server and client must be deployed on the same machine. For the multi-GPU client, we padded the original 50 tasks to 56 via duplication and partitioned them into 7 groups to align with the 8-GPU configuration of our inference node. You can specify task_group_id (0-6) to select a particular group for inference. For the detailed grouping configuration, please refer to evaluation/robotwin/launch_client_multigpus.sh.
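The grouping amounts to padding the task list and slicing it into equal chunks. A rough Python equivalent is shown below; the task names are placeholders and the choice of which tasks get duplicated is an assumption, so refer to the launch script for the actual assignment.

```python
# Rough equivalent of the multi-GPU task grouping: 50 tasks are padded to 56
# by duplication, then split into 7 groups of 8 so that each group matches an
# 8-GPU inference node. Task names and the duplication choice are placeholders;
# see evaluation/robotwin/launch_client_multigpus.sh for the actual assignment.
tasks = [f"task_{i:02d}" for i in range(50)]   # placeholder names for the 50 RoboTwin tasks
padded = tasks + tasks[: 56 - len(tasks)]      # duplicate 6 tasks to reach 56 (assumed choice)
groups = [padded[i * 8 : (i + 1) * 8] for i in range(7)]

task_group_id = 0                              # valid range: 0-6
print(groups[task_group_id])
```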
We also provide a script for image-to-video-action generation:
CONFIG_NAME='robotwin_i2av' bash script/run_launch_va_server_sync.sh
We evaluate our model on both simulation benchmarks and real-world scenarios, and achieve state-of-the-art performance.
- RoboTwin 2.0
| Method (Average 50 Tasks) | Easy SR (%) | Hard SR (%) |
|---|---|---|
| X-VLA | 72.9 | 72.8 |
| π0 | 65.9 | 58.4 |
| π0.5 | 82.7 | 76.8 |
| Motus | 88.7 | 87.0 |
| LingBot-VA (Ours) | 92.9 (+4.2) | 91.6 (+4.6) |
- LIBERO
| Methods | Spatial | Object | Goal | Long | Avg |
|---|---|---|---|---|---|
| π0 | 96.8 | 98.8 | 95.8 | 85.2 | 94.1 |
| π0.5 | 98.8 | 98.2 | 98.0 | 92.4 | 96.9 |
| OpenVLA | 84.7 | 88.4 | 79.2 | 53.7 | 76.5 |
| X-VLA | 98.2 | 98.6 | 97.8 | 97.6 | 98.1 |
| LingBot-VA (Ours) | 98.5 ± 0.3 | 99.6 ± 0.3 | 97.2 ± 0.2 | 98.5 ± 0.5 | 98.5 |
Six manipulation tasks across three categories: long-horizon tasks (Make Breakfast, Pick Screws), precision tasks (Insert Tube, Unpack Delivery), and deformable & articulated object manipulation (Fold Clothes, Fold Pants). Our method achieves state-of-the-art performance on both metrics (Progress Rate and Success Rate), substantially outperforming the strong baseline π0.5.
* All metrics are reported in percentage (%). Higher values are bolded.
| Method | Make Breakfast (PS / SR) | Pick Screws (PS / SR) | Insert Tube (PS / SR) | Unpack Delivery (PS / SR) | Fold Clothes (PS / SR) | Fold Pants (PS / SR) |
|---|---|---|---|---|---|---|
| π0.5 | 73.0 / 70.0 | 74.0 / 50.0 | 79.2 / 30.0 | 73.0 / 25.0 | **62.9** / 30.0 | 30.0 / 30.0 |
| LingBot-VA (Ours) | **97.0** / **75.0** | **82.5** / **70.0** | **85.8** / **40.0** | **84.5** / **65.0** | 48.8 / **35.0** | **76.7** / **70.0** |
This project is released under the Apache License 2.0. See LICENSE file for details.
@article{lingbot-va2026,
title={Causal World Modeling for Robot Control},
author={Li, Lin and Zhang, Qihang and Luo, Yiming and Yang, Shuai and Wang, Ruilin and Han, Fei and Yu, Mingrui and Gao, Zelin and Xue, Nan and Zhu, Xing and Shen, Yujun and Xu, Yinghao},
journal={arXiv preprint arXiv:[xxxx]},
year={2026}
}
This work builds upon several excellent open-source projects:
- Wan-Video - Vision transformer backbone
- MoT - Mixture-of-Transformers architecture
- The broader open-source computer vision and robotics communities
For questions, discussions, or collaborations:
- Issues: Open an issue on GitHub
- Email: Contact Dr. Qihang Zhang (liuhuan.zqh@antgroup.com) or Dr. Lin Li (fengchang.ll@antgroup.com)
