CLIP (Contrastive Language–Image Pre-training) for Bangla.
Search for images using text and images.
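Under the hood, search projects like these reduce to CLIP's text-image similarity. A minimal sketch using the Hugging Face transformers API with the standard English checkpoint (the Bangla project swaps in its own text encoder; the file names here are placeholders):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder files; any PIL-readable images work.
images = [Image.open(p) for p in ["cat.jpg", "dog.jpg", "car.jpg"]]
inputs = processor(text=["a photo of a dog"], images=images,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    out = model(**inputs)

# logits_per_text has shape (n_texts, n_images): temperature-scaled cosine similarities.
probs = out.logits_per_text.softmax(dim=-1)
print(probs)  # the highest probability should fall on dog.jpg
```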
Computation-free personalization at test time for sEMG gesture classification, with a fast (GPU/CPU) NinaPro API.
Text2ImageDescription retrieves relevant images from the Pascal VOC 2012 dataset with OpenAI CLIP, based on text queries, and generates descriptions with a quantized Mistral-7B model.
Generation of faces, numbers, and images, plus Stable Diffusion inpainting via segmentation with the SAM and CLIP models.
Official implementation for "Blended Diffusion for Text-driven Editing of Natural Images" [CVPR 2022]
Simple implementation of OpenAI CLIP model in PyTorch.
Object tracking implemented with the Roboflow Inference API, DeepSort, and OpenAI CLIP.
Sort a folder of images by their similarity to a provided text prompt, in your browser (uses a browser-ported version of OpenAI's CLIP model and the web's new File System Access API).
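Outside the browser, the same sorting idea is a few lines of Python. A sketch assuming the Hugging Face checkpoint, with the folder path and query as placeholders:

```python
from pathlib import Path

import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

paths = sorted(Path("photos").glob("*.jpg"))  # placeholder folder
images = [Image.open(p).convert("RGB") for p in paths]

with torch.no_grad():
    img_emb = model.get_image_features(**processor(images=images, return_tensors="pt"))
    txt_emb = model.get_text_features(**processor(text=["a sunset over water"],
                                                  return_tensors="pt", padding=True))

# Rank every image by cosine similarity to the query, best match first.
sims = F.cosine_similarity(txt_emb, img_emb)
for score, path in sorted(zip(sims.tolist(), paths), reverse=True):
    print(f"{score:.3f}  {path}")
```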
🎯 Task-oriented embedding tuning for BERT, CLIP, etc.
ChatSense - a chatbot built on Llama 2, Code Llama, and CLIP
Deep learning pet breed recognition app
An experiment with movie scenes and contrastive learning
CLIP as a service - embed images and sentences; object recognition, visual reasoning, image classification, and reverse image search
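The client side of such a service looks roughly like this, following the clip-client package's documented pattern (the server address and inputs are placeholders, and a running clip-server instance is assumed):

```python
# pip install clip-client  -- assumes a clip-server instance is already running
from clip_client import Client

c = Client("grpc://0.0.0.0:51000")  # placeholder address of your clip-server
embeddings = c.encode([
    "a sentence to embed",            # raw text
    "apple.png",                      # local image path
    "https://example.com/photo.jpg",  # image URL
])
print(embeddings.shape)  # (3, dim): text and images land in one embedding space
```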
Text-to-image and reverse image search engine built on vector similarity search, using the CLIP VL-Transformer for semantic embeddings and Qdrant as the vector store
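A minimal sketch of that CLIP-plus-Qdrant pattern, assuming qdrant-client 1.x and 512-dimensional embeddings; the random vectors are stand-ins for real CLIP outputs:

```python
import numpy as np
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(":memory:")  # in-process instance; fine for experiments

client.recreate_collection(
    collection_name="clip_images",
    vectors_config=VectorParams(size=512, distance=Distance.COSINE),
)

# Stand-ins for real CLIP embeddings; in practice these come from the CLIP encoders.
image_vectors = [(np.random.rand(512).tolist(), f"img_{i}.jpg") for i in range(10)]
text_vector = np.random.rand(512).tolist()

client.upsert(
    collection_name="clip_images",
    points=[PointStruct(id=i, vector=vec, payload={"path": path})
            for i, (vec, path) in enumerate(image_vectors)],
)

# Nearest neighbours to the (stand-in) text query embedding.
for hit in client.search(collection_name="clip_images",
                         query_vector=text_vector, limit=5):
    print(hit.payload["path"], hit.score)
```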
Feed-forward VQGAN-CLIP model, where the goal is to eliminate the need to optimize VQGAN's latent space separately for each input prompt.
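A toy sketch of that amortized objective, with a small convolutional decoder standing in for VQGAN (the architecture and prompts are illustrative assumptions, not the repo's actual setup): the generator is trained across prompts so generation becomes a single forward pass, and the loss pulls the CLIP embedding of the generated image toward the prompt's CLIP embedding.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import CLIPModel, CLIPTokenizer

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
for p in clip.parameters():               # CLIP stays frozen; only the generator trains
    p.requires_grad_(False)

# Toy stand-in for VQGAN's decoder: text embedding -> latent grid -> image.
generator = nn.Sequential(
    nn.Linear(512, 64 * 7 * 7), nn.Unflatten(1, (64, 7, 7)),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 14x14
    nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),    # 28x28
    nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Sigmoid(),  # 56x56, values in [0, 1]
)
opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

prompts = ["a red apple", "a blue car"]   # placeholder training prompts
tokens = tok(prompts, return_tensors="pt", padding=True)
text_emb = F.normalize(clip.get_text_features(**tokens), dim=-1)

for step in range(100):
    imgs = generator(text_emb)                        # one forward pass per prompt
    imgs = F.interpolate(imgs, size=224)              # CLIP's input resolution
    # (Skipping CLIP's pixel normalization for brevity; a real setup applies it.)
    img_emb = F.normalize(clip.get_image_features(pixel_values=imgs), dim=-1)
    loss = (1 - (img_emb * text_emb).sum(-1)).mean()  # 1 - cosine similarity
    opt.zero_grad(); loss.backward(); opt.step()
```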
A list of projects that use OpenAI's CLIP model.