This repository contains the dataset and the dataset generation code for V2A - Vision to Action: Learning robotic arm actions based on vision and language.
To download the dataset, please use the following link.
The link provides a .zip
file with V2A instructions for the corresponding splits of the SHOP-VRB dataset. Files with _GT
in the name contain an additional field with a suggested ground-truth sequence of primitive actions for the given instruction.
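Once downloaded and unpacked, the _GT files could be inspected with a short script like the one below. This is only a sketch: the file format (JSON) and the field names (`instruction`, `gt_actions`) are assumptions used for illustration, since the actual schema is defined by the downloaded files.

```python
# Minimal sketch for reading a ground-truth action sequence from an
# instruction entry. The schema below is hypothetical, not the actual
# V2A file format; real files would be loaded e.g. with json.load().

# Example entry mimicking the assumed structure of a _GT file record.
sample_entry = {
    "instruction": "Pick up the metal mug and place it on the scales.",
    "gt_actions": ["approach(mug)", "grasp(mug)", "move(scales)", "release(mug)"],
}

def get_action_sequence(entry):
    """Return the ground-truth primitive action list, or [] if absent
    (e.g. for files without the _GT suffix)."""
    return entry.get("gt_actions", [])

print(get_action_sequence(sample_entry))
```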
Code for dataset generation will appear here soon.
V2A - Vision to Action: Learning robotic arm actions based on vision and language
Michal Nazarczuk,
Krystian Mikolajczyk
Imperial College London
In Proceedings of the Asian Conference on Computer Vision (ACCV) 2020.
@inproceedings{nazarczuk2020v2a,
  title={V2A - Vision to Action: Learning robotic arm actions based on vision and language},
  author={Nazarczuk, Michal and Mikolajczyk, Krystian},
  booktitle={Proceedings of the Asian Conference on Computer Vision (ACCV)},
  year={2020}
}