
GPT4Vision-Robot-Manipulation-Prompts

This repository provides sample code for interpreting human demonstration videos and converting them into high-level tasks for robots. Our Applied Robotics Research Group believes that this mechanism will serve as an effective interface for humans to instruct humanoid and industrial robots. The prompts are designed for easy customization and seamless integration with existing robot control and visual recognition systems. For more information, please visit our project page and our paper, GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration.

Please note that this repository contains only the sample code for task recognition using GPT-4V. It does not include mechanisms for incorporating user feedback or classical vision systems that recognize affordances from videos; for those components, please explore the other repositories we have provided.

Overview of the pipeline:

[Figure: LfO pipeline]

How to use

We have confirmed that the sample code works with Python 3.9.12 and 3.12.2.

If you use Azure OpenAI, set these environment variables:

  • AZURE_OPENAI_DEPLOYMENT_NAME
  • AZURE_OPENAI_ENDPOINT
  • AZURE_OPENAI_API_KEY

If you use OpenAI, set this environment variable:

  • OPENAI_API_KEY
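
The client construction itself is not shown in this README; as a minimal sketch, assuming the openai Python package (v1.x) and the api_version value shown below, these variables would typically be consumed like this:

import os
from openai import AzureOpenAI, OpenAI

# Minimal sketch (not the repository's actual code): pick the client based on
# which environment variables are set.
if os.environ.get("AZURE_OPENAI_API_KEY"):
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",  # assumed API version; match your Azure resource
    )
    model = os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"]  # deployment name is used as the model id
else:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment automatically
    model = "gpt-4-vision-preview"  # assumed model name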

Install dependencies

> pip install -r requirements.txt

Run the sample code

python example.py sample_video/sample.mp4 --use_azure

We assume that the video path refers to an MP4 file of a human demonstrating something. Add the --use_azure option if you use Azure OpenAI instead of OpenAI.
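
Internally, a pipeline of this kind typically samples a handful of frames from the video and sends them to GPT-4V together with a task-recognition prompt. The sketch below illustrates that flow only; the frame-sampling rate, prompt text, and model name are assumptions, not the repository's actual implementation:

import base64
import cv2  # opencv-python
from openai import OpenAI

def sample_frames(video_path, every_n=30):
    # Keep every n-th frame as a base64-encoded JPEG.
    cap = cv2.VideoCapture(video_path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            ok, buf = cv2.imencode(".jpg", frame)
            if ok:
                frames.append(base64.b64encode(buf.tobytes()).decode("utf-8"))
        i += 1
    cap.release()
    return frames

client = OpenAI()
frames = sample_frames("sample_video/sample.mp4")
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "These frames show a human demonstration. "
                     "List the high-level robot tasks needed to reproduce it."},
            *[{"type": "image_url",
               "image_url": {"url": f"data:image/jpeg;base64,{f}"}}
              for f in frames],
        ],
    }],
    max_tokens=500,
)
print(response.choices[0].message.content)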

Bibliography

@article{wake2023gpt,
  title={GPT-4V (ision) for robotics: Multimodal task planning from human demonstration},
  author={Wake, Naoki and Kanehira, Atsushi and Sasabuchi, Kazuhiro and Takamatsu, Jun and Ikeuchi, Katsushi},
  journal={arXiv preprint arXiv:2311.12015},
  year={2023}
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
