Problems using BridgeTower Local in Multimodal RAG without PredictionGuard #16

@ixn3rd3mxn

Description

I need help. I am currently studying the short course Multimodal RAG: Chat with Videos:
https://www.deeplearning.ai/short-courses/multimodal-rag-chat-with-videos/

The course uses bridgetower-large-itm-mlm-itc through PredictionGuard. I am trying to follow along on my local laptop and am currently on chapter L4, Multimodal Retrieval from Vector Stores, but I am stuck because I do not have a PredictionGuard API key. I found the model on Hugging Face (https://huggingface.co/BridgeTower/bridgetower-large-itm-mlm-itc), but I am not sure how to rewrite the course's embedding helper to use it. This is the original function:

# helper function to compute the joint embedding of a prompt and a base64-encoded image through PredictionGuard
def bt_embedding_from_prediction_guard(prompt, base64_image):
    # get PredictionGuard client
    client = _getPredictionGuardClient()
    message = {"text": prompt,}
    if base64_image is not None and base64_image != "":
        if not isBase64(base64_image): 
            raise TypeError("image input must be in base64 encoding!")
        message['image'] = base64_image
    response = client.embeddings.create(
        model="bridgetower-large-itm-mlm-itc",
        input=[message]
    )
    return response['data'][0]['embedding']

Can you suggest how I should modify this function so that it computes the bridgetower-large-itm-mlm-itc embedding locally?
