Add support for multimodal embeddings from Google Vertex AI #13400
Labels
Ɑ: embeddings
🤖:enhancement
Ɑ: vector store
LangChain currently has no support for multimodal embeddings from Vertex AI. However, I did stumble upon an experimental implementation of GoogleVertexAIMultimodalEmbeddings in LangChain for JavaScript, so I think this would also be a very nice feature to implement in the Python version of LangChain.
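To make the request concrete, here is a rough sketch of what a Python counterpart could look like. This is purely illustrative: the class name mirrors the JavaScript implementation, but `predict_fn`, the request/response keys (`"text"`, `"image"`, `"textEmbedding"`, `"imageEmbedding"`), and the method names `embed_documents`/`embed_image` are all assumptions, not the real LangChain or Vertex AI API.

```python
from typing import Callable, Dict, List


class GoogleVertexAIMultimodalEmbeddings:
    """Hypothetical sketch of a multimodal embeddings wrapper.

    `predict_fn` stands in for a call to the Vertex AI multimodal
    embedding endpoint; a real implementation would build and send
    the request via the Vertex AI SDK instead.
    """

    def __init__(self, predict_fn: Callable[[Dict], Dict]):
        self.predict_fn = predict_fn

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # One call per text for simplicity; real code would batch requests.
        return [self.predict_fn({"text": t})["textEmbedding"] for t in texts]

    def embed_image(self, image_bytes: bytes) -> List[float]:
        # A real implementation would base64-encode the image payload.
        return self.predict_fn({"image": image_bytes})["imageEmbedding"]
```

With `predict_fn` injected, text and image embeddings share one endpoint, which keeps both modalities in the same vector space for downstream vector-store lookups.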
Motivation
Using multimodal embeddings could benefit applications that rely on information from different modalities; one example is product search in a web catalogue, where a text query should match product images. Since more cloud providers are making multimodal embedding endpoints available, it makes sense to incorporate these into LangChain as well. The embeddings from these endpoints could be stored in vector stores and hence be used in downstream applications built with LangChain.
Your contribution
I can contribute to this feature.