
LingBot-VA: Causal World Modeling for Robot Control

teaser_video.mp4

💫 Meet LingBot-VA! We've built an autoregressive (AR) diffusion framework for simultaneous world modeling and action! 🤖✨

LingBot-VA focuses on:

  • Autoregressive Video-Action World Modeling: Architecturally unifies visual dynamics prediction and action inference within a single interleaved sequence while maintaining their conceptual distinction (a toy sketch follows this list).
  • High-efficiency Execution: A dual-stream mixture-of-transformers (MoT) architecture with asynchronous execution and KV caching.
  • Long-Horizon Performance and Generalization: Substantial improvements in sample efficiency, long-horizon success rates, and generalization to novel scenes.
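
Below is a toy sketch of what such an interleaved video-action rollout can look like. The module names, dimensions, and chunk size are illustrative assumptions only, not the released LingBot-VA architecture.

# Toy sketch: alternate between predicting the next visual latent and inferring
# an action chunk, appending both to one interleaved sequence. All names,
# dimensions, and chunk sizes are assumptions for illustration.
import torch
import torch.nn as nn

class TinyVideoStream(nn.Module):          # stand-in for the visual-dynamics stream
    def __init__(self, dim=32):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
    def forward(self, latent):
        return self.proj(latent)           # predict the next visual latent

class TinyActionStream(nn.Module):         # stand-in for the action-inference stream
    def __init__(self, dim=32, action_dim=7, chunk=4):
        super().__init__()
        self.proj = nn.Linear(dim, action_dim * chunk)
        self.chunk, self.action_dim = chunk, action_dim
    def forward(self, latent):
        return self.proj(latent).view(self.chunk, self.action_dim)

video_stream, action_stream = TinyVideoStream(), TinyActionStream()
latent = torch.randn(32)                   # encoded current observation
sequence, actions = [latent], []
for _ in range(8):                         # autoregressive rollout over 8 steps
    latent = video_stream(latent)
    actions.append(action_stream(latent))
    sequence += [latent, actions[-1]]      # one interleaved video-action sequence
print(torch.cat(actions, dim=0).shape)     # torch.Size([32, 7]) planned actions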

🚀 News

  • [2026-01-29] Weights and code for the shared-backbone version released! Please stay tuned for the separated version!

📦 Model Download

  • Pretrained Checkpoints for Post-Training
Model Name                      Huggingface Repository                      ModelScope Repository                       Description
lingbot-va-base                 🤗 robbyant/lingbot-va-base                 🤖 Robbyant/lingbot-va-base                 LingBot-VA w/ shared backbone
lingbot-va-posttrain-robotwin   🤗 robbyant/lingbot-va-posttrain-robotwin   🤖 Robbyant/lingbot-va-posttrain-robotwin   LingBot-VA-Posttrain-Robotwin w/ shared backbone

🛠️ Quick Start

Installation

Requirements:
  • Python ≥ 3.10.16
  • PyTorch == 2.9.0
  • CUDA 12.6

pip install torch==2.9.0 torchvision==0.24.0 torchaudio==2.9.0 --index-url https://download.pytorch.org/whl/cu126
pip install websockets einops diffusers==0.36.0 transformers==5.0.0 accelerate msgpack opencv-python matplotlib ftfy easydict
pip install flash-attn --no-build-isolation
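
A quick, optional sanity check that the pinned versions above are picked up and that a CUDA build of PyTorch is visible:

# Optional environment sanity check for the requirements listed above.
import torch, diffusers, transformers
print(torch.__version__, torch.version.cuda)             # expect 2.9.0 / 12.6
print("CUDA available:", torch.cuda.is_available())
print(diffusers.__version__, transformers.__version__)   # expect 0.36.0 / 5.0.0
try:
    import flash_attn                                     # built via --no-build-isolation
    print("flash-attn:", flash_attn.__version__)
except ImportError as err:
    print("flash-attn not importable:", err)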

Deploying LingBot-VA for Inference

LingBot-VA supports both standalone execution and a server-client architecture that separates the model environment from the simulation environment. By isolating dependencies, this design avoids package conflicts and supports distributed inference across GPUs, clusters, and other devices.
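
For illustration, a minimal client along these lines might look as follows. The endpoint, port, and message fields below are assumptions, not the actual LingBot-VA protocol; the real entry points are the scripts under evaluation/robotwin/.

# Hypothetical minimal websocket client for a server-client deployment.
# The URL and message schema are illustrative assumptions only.
import asyncio
import msgpack
import numpy as np
import websockets

async def query_action(image: np.ndarray, state: np.ndarray):
    async with websockets.connect("ws://localhost:8000") as ws:    # assumed endpoint
        payload = {"image": image.tobytes(), "shape": list(image.shape),
                   "state": state.tolist()}                        # assumed schema
        await ws.send(msgpack.packb(payload))
        reply = msgpack.unpackb(await ws.recv())
        return np.asarray(reply["action"])                         # assumed field

if __name__ == "__main__":
    obs = np.zeros((480, 640, 3), dtype=np.uint8)                  # dummy camera frame
    qpos = np.zeros(14, dtype=np.float32)                          # dummy robot state
    print(asyncio.run(query_action(obs, qpos)))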

Evaluation on RoboTwin-2.0

Preparing the Environment

You can follow the official instructions from the original RoboTwin-2.0 repository:
https://robotwin-platform.github.io/doc/usage/robotwin-install.html

Deploying the Inference Server

# single GPU
bash evaluation/robotwin/launch_server.sh

# multi-GPU
bash evaluation/robotwin/launch_server_multigpus.sh

Executing the Inference Client

# single GPU
task_name="adjust_bottle";
save_root="results/";
bash evaluation/robotwin/launch_client.sh ${task_name} ${save_root}

# multi-GPU
save_root="results/"
task_group_id=0
bash evaluation/robotwin/launch_client_multigpus.sh ${save_root} ${task_group_id}

Experiment results will be saved in /path/to/your/RoboTwin/${save_root}. Note that an eval_result folder is also generated; this is a native output of RoboTwin, its contents are identical to those of the results folder, and it can be safely ignored. Note also that the inference server and client must be deployed on the same machine.

For the multi-GPU client, we padded the original 50 tasks to 56 via duplication and partitioned them into 7 groups of 8 to match the 8-GPU configuration of our inference node. You can specify task_group_id (0-6) to select a particular group for inference; for the detailed grouping configuration, please refer to evaluation/robotwin/launch_client_multigpus.sh.
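
The grouping scheme amounts to the following sketch, assuming one task per GPU within a group; the task names are placeholders and the authoritative list lives in evaluation/robotwin/launch_client_multigpus.sh.

# Sketch of the multi-GPU task grouping: 50 tasks padded to 56 by duplication,
# then split into 7 groups of 8. Task names here are placeholders.
tasks = [f"task_{i:02d}" for i in range(50)]              # placeholder task names
tasks += tasks[:6]                                        # pad 50 -> 56 via duplication
groups = [tasks[i * 8:(i + 1) * 8] for i in range(7)]     # 7 groups x 8 tasks

task_group_id = 0                                         # valid values: 0-6
for gpu_rank, task_name in enumerate(groups[task_group_id]):
    print(f"GPU {gpu_rank}: {task_name}")                 # 8 tasks, one per GPU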

Run Image to Video-Action Generation

We also provide a script for image-to-video-action generation:

CONFIG_NAME='robotwin_i2av' bash script/run_launch_va_server_sync.sh 

📊 Performance

We evaluate our model on both simulation benchmarks and real-world scenarios, and achieve state-of-the-art performance.

Simulation Evaluation

  • RoboTwin 2.0
Method (Average 50 Tasks)   Easy SR (%)    Hard SR (%)
X-VLA                       72.9           72.8
π0                          65.9           58.4
π0.5                        82.7           76.8
Motus                       88.7           87.0
LingBot-VA (Ours)           92.9 (+4.2)    91.6 (+4.6)
  • LIBERO
Methods             Spatial      Object       Goal         Long         Avg
π0                  96.8         98.8         95.8         85.2         94.1
π0.5                98.8         98.2         98.0         92.4         96.9
OpenVLA             84.7         88.4         79.2         53.7         76.5
X-VLA               98.2         98.6         97.8         97.6         98.1
LingBot-VA (Ours)   98.5 ± 0.3   99.6 ± 0.3   97.2 ± 0.2   98.5 ± 0.5   98.5

 

Real-world Deployment

Six manipulation tasks across three categories: long-horizon tasks (Make Breakfast, Pick Screws), precision tasks (Insert Tube, Unpack Delivery), and deformable & articulated object manipulation (Fold Clothes, Fold Pants). Our method achieves state-of-the-art performance on both metrics (Progress Score and Success Rate), substantially outperforming the strong baseline π0.5.

Progress Score (PS): The average score across all trials divided by the maximum possible score, expressed as a percentage:

    PS = (Average Progress / Max Steps) × 100%.

Success Rate (SR): The number of successful trials divided by the total number of trials, expressed as a percentage:

    SR = (Successful Trials / N) × 100%.
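
As a hypothetical worked example (the numbers below are made up, not from our experiments):

# Hypothetical worked example of PS and SR (max score 4, N = 5 trials).
progress = [3, 4, 2, 4, 4]                     # per-trial progress scores
successes = [False, True, False, True, True]   # per-trial success flags

max_steps, n = 4, len(progress)
ps = sum(progress) / n / max_steps * 100       # (average progress / max steps) x 100%
sr = sum(successes) / n * 100                  # (successful trials / N) x 100%
print(f"PS = {ps:.1f}%, SR = {sr:.1f}%")       # PS = 85.0%, SR = 60.0%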

* All metrics are reported in percentage (%). Higher values are bolded.

Task                Make Breakfast   Pick Screws     Insert Tube     Unpack Delivery   Fold Clothes    Fold Pants
                    PS      SR       PS      SR      PS      SR      PS      SR        PS      SR      PS      SR
π0.5                73.0    70.0     74.0    50.0    79.2    30.0    73.0    25.0      62.9    30.0    30.0    30.0
LingBot-VA (Ours)   97.0    75.0     82.5    70.0    85.8    40.0    84.5    65.0      48.8    35.0    76.7    70.0

🪪 License

This project is released under the Apache License 2.0. See the LICENSE file for details.

📚 Citation

@article{lingbot-va2026,
  title={Causal World Modeling for Robot Control},
  author={Li, Lin and Zhang, Qihang and Luo, Yiming and Yang, Shuai and Wang, Ruilin and Han, Fei and Yu, Mingrui and Gao, Zelin and Xue, Nan and Zhu, Xing and Shen, Yujun and Xu, Yinghao},
  journal={arXiv preprint arXiv:[xxxx]},
  year={2026}
}

🧩 Acknowledgments

This work builds upon several excellent open-source projects:

  • Wan-Video - Vision transformer backbone
  • MoT - Mixture-of-Transformers architecture
  • The broader open-source computer vision and robotics communities

For questions, discussions, or collaborations:
