Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation
Code and Data of the CVPR 2022 paper: Bridging the Gap Between Learning in Discrete and Continuous Environments for Vision-and-Language Navigation
PyTorch code for the ICRA 2021 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
Code of the NeurIPS 2021 paper: Language and Visual Entity Relationship Graph for Agent Navigation
Code and data of the Fine-Grained R2R dataset proposed in the EMNLP 2021 paper: Sub-Instruction Aware Vision-and-Language Navigation
Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty
Code for the ORAR agent for Vision-and-Language Navigation on the Touchdown and map2seq datasets
Repository for Vision-and-Language Navigation via Causal Learning (accepted at CVPR 2024)
Official implementation of the NAACL 2024 paper "Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning"