🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
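The snippet below is a minimal, generic sketch of that idea, embedding an image and a set of sentences in the same space and ranking the sentences by similarity. It uses the Hugging Face `transformers` CLIP API rather than this repository's own interface, and the image path is a hypothetical placeholder.

```python
# Minimal sketch (not this repo's API): embed an image and candidate
# sentences with CLIP, then rank the sentences by similarity to the image.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

texts = ["a photo of a dog", "a photo of a cat"]
image = Image.open("example.jpg")  # hypothetical local file

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into a ranking over the candidate sentences.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```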
X-modaler is a versatile and high-performance codebase for cross-modal analytics (e.g., image captioning, video captioning, vision-language pre-training, visual question answering, visual commonsense reasoning, and cross-modal retrieval).
A paper list covering large multi-modality models, parameter-efficient finetuning, vision-language pretraining, and conventional image-text matching, intended as a preliminary overview.
Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey
TOMM2020 Dual-Path Convolutional Image-Text Embedding 🐾 https://arxiv.org/abs/1711.05535
Offline semantic Text-to-Image and Image-to-Image search on Android powered by quantized state-of-the-art vision-language pretrained CLIP model and ONNX Runtime inference engine
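As a rough sketch of that retrieval pattern (not the app's actual Android code), the following assumes a CLIP text encoder already exported and quantized to ONNX, with a hypothetical input name `input_ids` and a single embedding output, and searches precomputed, L2-normalized image embeddings by cosine similarity.

```python
# Sketch: quantized CLIP text encoder served by ONNX Runtime, used to
# score a text query against precomputed image embeddings.
import numpy as np
import onnxruntime as ort

# Hypothetical file names and tensor names; they depend on how the
# text encoder was exported and quantized.
session = ort.InferenceSession("clip_text_encoder.quant.onnx")
image_embeddings = np.load("image_embeddings.npy")  # (N, D), L2-normalized

def embed_text(token_ids: np.ndarray) -> np.ndarray:
    # Assumes a single embedding output from the exported graph.
    (text_embedding,) = session.run(None, {"input_ids": token_ids})
    return text_embedding / np.linalg.norm(text_embedding, axis=-1, keepdims=True)

def search(token_ids: np.ndarray, top_k: int = 5) -> np.ndarray:
    query = embed_text(token_ids)               # (1, D)
    scores = image_embeddings @ query.T         # cosine similarity
    return np.argsort(-scores.ravel())[:top_k]  # indices of best-matching images
```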
[AAAI 2021] Code for "Similarity Reasoning and Filtration for Image-Text Matching"
Code for "Learning the Best Pooling Strategy for Visual Semantic Embedding", CVPR 2021 (Oral)
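For context, the toy example below shows what a "pooling strategy" means in visual-semantic embedding: collapsing a variable-size set of region features into a single vector before matching. It is a generic illustration, not the pooling operator proposed in that paper.

```python
# Generic pooling illustration for visual-semantic embedding
# (not the paper's proposed operator).
import torch
import torch.nn.functional as F

features = torch.randn(36, 1024)          # e.g. 36 region features of one image

mean_pooled = features.mean(dim=0)        # average pooling
max_pooled = features.max(dim=0).values   # max pooling

# After pooling, image and caption embeddings are compared directly,
# e.g. with cosine similarity.
caption_embedding = torch.randn(1024)
score = F.cosine_similarity(max_pooled, caption_embedding, dim=0)
print(score.item())
```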
Deep Supervised Cross-modal Retrieval (CVPR 2019, PyTorch Code)
Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval (CVPR 2019)
Official PyTorch implementation of "Probabilistic Cross-Modal Embedding" (CVPR 2021)
[NeurIPS 2022 Spotlight] Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations
[ICCV 2023] DiffusionRet: Generative Text-Video Retrieval with Diffusion Model
PyTorch code for BagFormer: Better Cross-Modal Retrieval via bag-wise interaction
[CVPR 2023 Highlight] Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning
Official implementation of "Contrastive Audio-Language Learning for Music" (ISMIR 2022)
[CVPR 2020, Oral] "Sketch Less for More: On-the-Fly Fine-Grained Sketch Based Image Retrieval", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
Extended COCO Validation (ECCV) Caption dataset (ECCV 2022)
Official PyTorch implementation of "Improved Probabilistic Image-Text Representations" (ICLR 2024)
Learning Cross-Modal Retrieval with Noisy Labels (CVPR 2021, PyTorch Code)