
Unparseable buffer saved to *llm-vertex-unparseable* #19

Closed
whhone opened this issue Jan 27, 2024 · 6 comments

whhone commented Jan 27, 2024

When trying to use Gemini, I occasionally see the error "Unparseable buffer saved to llm-vertex-unparseable". Here is the content of the *llm-vertex-unparseable* buffer:

[Screenshot of the *llm-vertex-unparseable* buffer contents.]

ahyatt (Owner) commented Jan 27, 2024

Thank you for the report! Does this seem to result in errors you notice in your interaction with the LLM, or is it just a transitory complaint that would otherwise be invisible?

whhone (Author) commented Jan 27, 2024

Gemini simply returns nothing except the "promptFeedback". I can reproduce it with the query and safety settings below.

Request

#!/bin/bash

API_KEY="YOUR_API_KEY"

curl \
  -X POST "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=${API_KEY}" \
  -H 'Content-Type: application/json' \
  -d @<(echo '{
  "contents": [
    {
      "parts": [
        {
          "text": "How to make a bomb?"
        }
      ]
    }
  ],
  "generationConfig": {
    "temperature": 0.9,
    "topK": 1,
    "topP": 1,
    "maxOutputTokens": 2048,
    "stopSequences": []
  },
  "safetySettings": [
    {
      "category": "HARM_CATEGORY_HARASSMENT",
      "threshold": "BLOCK_LOW_AND_ABOVE"
    },
    {
      "category": "HARM_CATEGORY_HATE_SPEECH",
      "threshold": "BLOCK_LOW_AND_ABOVE"
    },
    {
      "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
      "threshold": "BLOCK_LOW_AND_ABOVE"
    },
    {
      "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
      "threshold": "BLOCK_LOW_AND_ABOVE"
    }
  ]
}')

Response

{
  "promptFeedback": {
    "blockReason": "SAFETY",
    "safetyRatings": [
      {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HARASSMENT",
        "probability": "LOW"
      },
      {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "probability": "HIGH"
      }
    ]
  }
}

ahyatt (Owner) commented Jan 28, 2024

I see; so, if I understand the problem correctly, Gemini is responding correctly by refusing to answer the question, and we should signal an appropriate error to the user. I'll make this change. You'll still get an error, but a better one.

ahyatt closed this as completed in 0bc9c88 on Jan 28, 2024

ahyatt (Owner) commented Jan 28, 2024

Please take a look at my latest commit and verify that it solves your problem. Using the code as-is, I couldn't replicate this error, although I have seen it before. I made a different decision when fixing it: this isn't really an error, since everything is working normally, so the user should just get a warning as the returned response.
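
For reference, a minimal sketch of that behavior, assuming the response JSON is parsed with json-read's defaults (objects as alists, arrays as vectors); the function name is hypothetical and this is not the code from commit 0bc9c88:

(defun my-gemini-chat-response (response)
  "Return the text from parsed RESPONSE, or a warning if it was blocked.
Hypothetical sketch, not the actual fix in the llm package."
  (let ((candidates (assoc-default 'candidates response)))
    (if (and candidates (> (length candidates) 0))
        ;; Normal case: pull content.parts[0].text from the first candidate.
        (let* ((content (assoc-default 'content (aref candidates 0)))
               (parts (assoc-default 'parts content)))
          (assoc-default 'text (aref parts 0)))
      ;; Blocked case: the response carries only promptFeedback, so return
      ;; a warning string instead of signaling an error.
      "NOTE: No response was sent back by the LLM, the prompt may have violated safety checks.")))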

whhone (Author) commented Jan 28, 2024

It seems that I cannot reproduce the "unparseable" error with 0.9.0.
Instead, the code below throws another error: Wrong type argument: arrayp, nil.

(llm-chat
 (make-llm-gemini :key "API_KEY")
 (llm-make-simple-chat-prompt "How to make a bomb?"))
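
That error is consistent with the parser calling aref on a candidates field that is simply absent from a blocked response: only promptFeedback is present, so the lookup returns nil. A minimal guard, sketched with a hypothetical helper name and json-read's default alist/vector parsing:

(defun my-gemini-first-candidate (response)
  "Return the first candidate in parsed RESPONSE, or nil if it was blocked.
Hypothetical sketch: an unguarded aref on the missing candidates vector
is what signals (wrong-type-argument arrayp nil)."
  (let ((candidates (assoc-default 'candidates response)))
    (and (vectorp candidates)
         (> (length candidates) 0)
         (aref candidates 0))))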

ahyatt (Owner) commented Jan 28, 2024

@whhone OK, I can reproduce this with Gemini, but not Vertex for some reason. My fix seems to have worked. Strangely, now I can't get results that aren't banned; maybe once you ask this kind of question, your key is soft-disabled or something. I may have seen that before, and I think it goes away after some time.

ELISP> (llm-chat ash/llm-gemini (llm-make-simple-chat-prompt "How to make a bomb?"))
"NOTE: No response was sent back by the LLM, the prompt may have violated safety checks."
