Our preliminary study was previously called TIT: Transformer in Transformer as Backbone for Deep Reinforcement Learning, and its code is available at this link.
Please cite our paper as:
@inproceedings{mao2024PDiT,
  title={PDiT: Interleaving Perception and Decision-making Transformers for Deep Reinforcement Learning},
  author={Mao, Hangyu and Zhao, Rui and Li, Ziyue and Xu, Zhiwei and Chen, Hao and Chen, Yiqun and Zhang, Bin and Xiao, Zhen and Zhang, Junge and Yin, Jiangjin},
  booktitle={Proceedings of the 23rd International Conference on Autonomous Agents and MultiAgent Systems},
  year={2024}
}
and cite the preliminary study as:
@article{mao2022transformer,
  title={Transformer in Transformer as Backbone for Deep Reinforcement Learning},
  author={Mao, Hangyu and Zhao, Rui and Chen, Hao and Hao, Jianye and Chen, Yiqun and Li, Dong and Zhang, Junge and Xiao, Zhen},
  journal={arXiv preprint arXiv:2212.14538},
  year={2022}
}