openai vision api experiments 🧪

👋 Hello

The must-have resource for anyone who wants to experiment with and build on the OpenAI Vision API. This repository serves as a hub for innovative experiments, showcasing a variety of applications ranging from simple image classifications to advanced zero-shot learning models. It's a space for both beginners and experts to explore the capabilities of the Vision API, share their findings, and collaborate on pushing the boundaries of visual AI.

Experimenting with the OpenAI Vision API requires an API key 🔑. You can get one here.
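
Once you have a key, a minimal sketch of a GPT-4V request with the official `openai` Python package (v1.x) is shown below; the `gpt-4-vision-preview` model name and the example image URL are assumptions you may need to adjust.

```python
# Minimal GPT-4V request sketch. Assumes OPENAI_API_KEY is set in the environment
# and that the model name / image URL below fit your setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/image.jpg"},
                },
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```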

⚠️ Limitations

  • 100 API requests per API key per day.
  • GPT-4V can't be used directly for object detection or image segmentation. You can work around this by combining GPT-4V with foundation models like GroundingDINO or Segment Anything (SAM); a sketch of that pipeline follows this list. Please take a look at the example and read our blog post.
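
The sketch below shows one way to wire this together: a detector such as GroundingDINO proposes bounding boxes, and GPT-4V labels each cropped region. The `detect_boxes` helper is hypothetical and stands in for your GroundingDINO inference code; the `gpt-4-vision-preview` model name is likewise an assumption.

```python
# Sketch only: a detector handles localization, GPT-4V handles labeling.
# `detect_boxes` is a hypothetical placeholder for GroundingDINO inference.
import base64
import io

from PIL import Image
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def detect_boxes(image: Image.Image, prompt: str) -> list[tuple[int, int, int, int]]:
    """Placeholder for a GroundingDINO call returning (x0, y0, x1, y1) pixel boxes."""
    raise NotImplementedError


def label_crop(crop: Image.Image) -> str:
    """Ask GPT-4V to label a single cropped region."""
    buffer = io.BytesIO()
    crop.save(buffer, format="JPEG")
    encoded = base64.b64encode(buffer.getvalue()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this object in one or two words."},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
                    },
                ],
            }
        ],
        max_tokens=20,
    )
    return response.choices[0].message.content


image = Image.open("example.jpg")
for box in detect_boxes(image, prompt="object"):
    print(box, label_crop(image.crop(box)))
```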

🧪 Experiments

| experiment | complementary materials | authors |
|:-----------|:------------------------|:--------|
| WebcamGPT - chat with video stream | GitHub, Gradio | @SkalskiP |
| HotDogGPT - simple image classification application | GitHub, Gradio | @SkalskiP |
| zero-shot image classifier with GPT-4V | GitHub | @capjamesg |
| zero-shot object detection with GroundingDINO + GPT-4V | GitHub, Gradio | @capjamesg |
| GPT-4V vs. CLIP | GitHub | @capjamesg |
| GPT-4V with Set-of-Mark (SoM) | GitHub | Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, Jianfeng Gao |
| GPT-4V on Web | GitHub | @Jiayi-Pan |
| automated voiceover of NBA game | GitHub, Colab | @SkalskiP |
webcamgpt.mov (demo video)

🗞️ Must Read Papers

🖊️ Blogs

🦸 Contribution

We would love your help in making this repository even better! Whether you want to add a new experiment or have any suggestions for improvement, feel free to open an issue or pull request.

If you want to add a new experiment, please take a look at our contribution guide, where you will find all the information you need.