🎯 Task-oriented embedding tuning for BERT, CLIP, etc.
Updated Mar 11, 2024 - Python
A CLI tool and Python module for generating images from text using guided diffusion and OpenAI's CLIP.
Just playing with getting CLIP Guided Diffusion running locally rather than having to use Colab.
Object tracking implemented with the Roboflow Inference API, DeepSort, and OpenAI CLIP.
Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
KoCLIP: Korean port of OpenAI CLIP, in Flax
An easy-to-use, efficient tool for extracting OpenAI CLIP (global/grid) features from images and text.
Run CLIP inference on ImageNet, use the predictions as labels to train other models, then evaluate the trained models on the ImageNet validation set against either the original labels or the CLIP labels.
A dead-simple image search and image-text matching system for Bangla using CLIP
CLIP (Contrastive Language–Image Pre-training) for Bangla.
OpenAI's CLIP neural network
Computation-free personalization at test time for sEMG gesture classification. Fast (GPU/CPU) Ninapro API.
CLIFS (CLIP-based Frame Selection) is a Python function that takes in a video file and a text prompt as input, and uses the CLIP (Contrastive Language-Image Pre-training) model to find the frame in the video that is most similar to the given text prompt.
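The frame-selection idea behind CLIFS boils down to an argmax over cosine similarities between per-frame image embeddings and the text-prompt embedding. A minimal sketch of that step, using random NumPy vectors as stand-ins for real CLIP embeddings (a real pipeline would decode video frames and run them through a CLIP image encoder, and encode the prompt with the CLIP text encoder):

```python
import numpy as np

def most_similar_frame(frame_embeddings: np.ndarray, text_embedding: np.ndarray) -> int:
    """Return the index of the frame whose embedding has the highest
    cosine similarity to the text embedding."""
    # L2-normalize so the dot product equals cosine similarity.
    frames = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    text = text_embedding / np.linalg.norm(text_embedding)
    sims = frames @ text  # one cosine similarity per frame
    return int(np.argmax(sims))

# Stand-in embeddings: 10 frames, 512-dim (the width of CLIP ViT-B/32 outputs).
rng = np.random.default_rng(0)
frame_embs = rng.normal(size=(10, 512))
# Construct a "prompt" embedding close to frame 3 so it is the clear match.
text_emb = frame_embs[3] + 0.1 * rng.normal(size=512)
best = most_similar_frame(frame_embs, text_emb)
print(best)
```

Because CLIP embeds images and text in a shared space, this same similarity-argmax pattern also underlies the image-search and image-text-matching projects listed here.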
ChatSense - Llama 2 + Code Llama + CLIP based Chatbot
Group images by provided labels using OpenAI/CLIP
GUI to explore large image collections with text queries