📑 Paper | 🌐 Homepage | 🤗 Dataset | 🤖 Model | 🤗 DataViewer
Mobile agents powered by vision-language models have demonstrated impressive capabilities in automating mobile tasks, with recent leading models achieving a marked performance leap, e.g., nearly 70% success on AndroidWorld. However, these systems keep their training data closed and remain opaque about their task and trajectory synthesis recipes. We present OpenMobile, an open-source framework that synthesizes high-quality task instructions and agent trajectories, with two key components: (1) a scalable task-synthesis pipeline that constructs a global environment memory from exploration, then leverages it to generate diverse and grounded instructions; and (2) a policy-switching strategy for trajectory rollout that alternates between learner and expert models to capture the error-recovery data often missing from standard imitation learning. Agents trained on our data achieve competitive results across three dynamic mobile agent benchmarks: notably, our fine-tuned Qwen2.5-VL and Qwen3-VL reach 51.7% and 64.7% on AndroidWorld, far surpassing existing open-data approaches. Furthermore, we conduct transparent analyses of the overlap between our synthetic instructions and benchmark test sets, verifying that performance gains stem from broad functionality coverage rather than benchmark overfitting.
## Release Plans
- OpenMobile trajectory data
- Fine-tuned checkpoints based on OpenMobile data
- AndroidWorld evaluation code
- Task and trajectory synthesis code
- Other code and resources
- Project Structure
- Environment Setup
- Evaluation
- Trajectory Synthesis
- Training
- Acknowledgements
- License
- Citation
## Project Structure

The repository is organized into two main components:

- `AndroidWorld/` contains the execution-side code, including environment exploration, trajectory rollout, trajectory post-processing, and model evaluation on AndroidWorld.
- `task_synthesis/` contains the task-synthesis pipeline: it takes processed exploration results, builds screen-level context and environment memory, and synthesizes the final high-level instructions.
## Environment Setup

We recommend using a single conda environment for the full OpenMobile pipeline, including both `AndroidWorld/` and `task_synthesis/`. The detailed setup instructions are documented in `AndroidWorld/environment.md`.
## Evaluation

Evaluation on AndroidWorld can be run with the following steps.

- Deploy the target model with vLLM (for example, OpenMobile-8B) and obtain `model_base_url` and `model_name`.
- Start the AndroidWorld emulator / ADB environment:

```shell
EMULATOR_NAME=AndroidWorldAvd
~/Library/Android/sdk/emulator/emulator -avd $EMULATOR_NAME -port 5554 -no-snapshot -grpc 8554
```

For more details about the AndroidWorld environment setup, please also refer to the official AndroidWorld repository.
- Launch evaluation:
```shell
cd AndroidWorld
python run.py \
    --agent_name qwen3vl \
    --console_port 5554 \
    --grpc_port 8554 \
    --perform_emulator_setup=true \
    --model_base_url your_vllm_url \
    --model_name OpenMobile-8B \
    --model_api_key EMPTY \
    --checkpoint_dir runs/openmobile_8b_seed30 \
    --task_random_seed 30
```
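Before launching evaluation, it can help to verify that the deployed endpoint is reachable. The sketch below constructs (without sending) the OpenAI-compatible chat-completions request that a vLLM server exposes by default; the base URL and model name are placeholder assumptions, so substitute your own `model_base_url` and `model_name` from the first step.

```python
# Sanity-check sketch for the vLLM deployment assumed by run.py.
# The base URL and model name below are placeholders, not project-provided values.
import json
import urllib.request

model_base_url = "http://localhost:8000/v1"  # assumed default vLLM address
model_name = "OpenMobile-8B"

def build_probe(base_url: str, model: str) -> urllib.request.Request:
    """Build (without sending) an OpenAI-compatible chat-completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # run.py passes --model_api_key EMPTY; vLLM accepts any token
            # unless the server was started with an API key.
            "Authorization": "Bearer EMPTY",
        },
    )

req = build_probe(model_base_url, model_name)
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` should return a JSON completion if the server from the first step is up.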
## Trajectory Synthesis

Coming soon.

## Training

Coming soon.
## Acknowledgements

Thanks to the following open-source projects:
AndroidWorld AndroidLab MobileWorld ScaleCUA OS-Genesis Qwen-VL LlamaFactory
## License

This project is licensed under the Apache 2.0 License. Other released artifacts, third-party models, datasets, and derived resources may be subject to their own respective licenses and usage terms.
## Citation

If you find this project useful, please consider citing:
```bibtex
@article{cheng2026openmobile,
  title={OpenMobile: Building Open Mobile Agents with Task and Trajectory Synthesis},
  author={Cheng, Kanzhi and Li, Zehao and Ma, Zheng and Chen, Nuo and Cao, Jialin and Sun, Qiushi and Ding, Zichen and Xu, Fangzhi and Yan, Hang and Chen, Jiajun and others},
  journal={arXiv preprint arXiv:2604.15093},
  year={2026}
}
```