This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Conversation

@tikikun (Contributor) commented Dec 14, 2023

No description provided.

@tikikun tikikun self-assigned this Dec 14, 2023
@tikikun tikikun linked an issue Dec 14, 2023 that may be closed by this pull request
@tikikun tikikun merged commit 0cd4abe into main Dec 15, 2023
@tikikun (Contributor, Author) commented Dec 15, 2023

Example of loading LLaVA:

curl -X POST 'http://localhost:3928/inferences/llamacpp/loadModel' \
  -H 'Content-Type: application/json' \
  -d '{
    "llama_model_path": "/Users/alandao/Downloads/ggml-model-q4_k.gguf",
    "mmproj": "/Users/alandao/Downloads/mmproj-model-f16.gguf",
    "ctx_len": 4096,
    "ngl": 100,
    "cont_batching": false,
    "embedding": false
  }'
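The same load request can be issued from Python. A minimal sketch, assuming a Nitro server is running on localhost:3928; it uses only the stdlib, and the model paths are the ones from the curl example and must be adapted:

```python
import json
import urllib.request

# Payload mirroring the curl example above.
load_payload = {
    "llama_model_path": "/Users/alandao/Downloads/ggml-model-q4_k.gguf",
    "mmproj": "/Users/alandao/Downloads/mmproj-model-f16.gguf",
    "ctx_len": 4096,
    "ngl": 100,
    "cont_batching": False,
    "embedding": False,
}

def load_model(base_url="http://localhost:3928"):
    # POST the payload to the loadModel endpoint; requires a running server.
    req = urllib.request.Request(
        f"{base_url}/inferences/llamacpp/loadModel",
        data=json.dumps(load_payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```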

Example of running inference with an image:

import base64
import requests

# API key placeholder; the local Nitro server does not require a real key
api_key = "YOUR_OPENAI_API_KEY"

# Function to encode the image
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')


headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {api_key}',
}

# Define the image_b64 variable here
image_b64 = encode_image('/Users/alandao/Downloads/download.jpeg')

json_data = {
    "model": "text-davinci-003",
    "messages": [
        {"role": "assistant", "content": "Hello there 👋"},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{image_b64}"
                    },
                },
            ],
        },
    ],
    "stream": False,
}


def print_streaming_response(response):
    # With "stream": False the server returns a single JSON body, so this
    # loop prints it in one pass; with streaming enabled it would print
    # each chunk as it arrives.
    for chunk in response.iter_lines():
        if chunk:
            print(chunk.decode('utf-8'))


response = requests.post('http://localhost:3928/v1/chat/completions', headers=headers, json=json_data)
print_streaming_response(response)
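The data-URL wrapping used in the image_url field above can be checked offline. A small sketch; the to_data_url helper is illustrative, not part of any API:

```python
import base64

def to_data_url(raw: bytes, mime: str = "image/jpeg") -> str:
    # Hypothetical helper mirroring the inline f-string in the request body.
    b64 = base64.b64encode(raw).decode("utf-8")
    return f"data:{mime};base64,{b64}"

# JPEG magic bytes as a stand-in for a real image file.
url = to_data_url(b"\xff\xd8\xff\xe0")
# The base64 payload after the comma decodes back to the original bytes.
payload = url.split(",", 1)[1]
assert base64.b64decode(payload) == b"\xff\xd8\xff\xe0"
```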

@tikikun (Contributor, Author) commented Dec 15, 2023

@vuonghoainam

@tikikun (Contributor, Author) commented Dec 15, 2023

@hiro-v hiro-v deleted the 103-feat-enable-llava-feature-in-nitro-1 branch December 15, 2023 04:05


Development

Successfully merging this pull request may close these issues.

feat: Enable LLaVA feature in nitro

3 participants