"1" is blocked by safety reason? seriously #126
Comments
The same here, lmfao. API-level censorship is stupid.
Same here. I'm running the API over an eval dataset and it won't finish the full run. If this is rate limiting, then at least say so clearly.
I have the same error in Java: "The response is blocked due to safety reason".
Same here. I got a response for the numbers below; for the remaining numbers, the same error.
I'm hitting this problem too. How can I run it again?
Try setting the threshold to 'BLOCK_NONE'. I ran it successfully with that after hitting block_reason: SAFETY.
Where should I set this argument?
This is my setup:
It does work. A simple but effective solution. Thank you so much! I don't even know what consideration (AI safety detection?) led the Gemini team to do this, but I believe this issue can be closed now. Thanks @HienBM for the useful advice.
I'm getting this error while using a for loop to feed the model questions and record the answers. Here is how I configure the model:

```python
generation_config = {
    "candidate_count": 1,
    "max_output_tokens": 256,
    "temperature": 1.0,
    "top_p": 0.7,
}

safety_settings = [
    {"category": "HARM_CATEGORY_DANGEROUS", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
]

model = genai.GenerativeModel(
    model_name="gemini-pro",
    generation_config=generation_config,
    safety_settings=safety_settings,
)
```

The model returns:
And here is how I run it:

```python
response = None
timeout_counter = 0
while response is None and timeout_counter <= 30:
    try:
        response = model.generate_content(messages)
    except Exception as msg:
        timeout_counter += 1  # was missing, so the loop could never time out
        pprint(msg)
        print('sleeping because of exception ...')
        time.sleep(30)
        continue

if response is None:
    response_str = ""
else:
    response_str = response.text  # <- This line gets the error
```
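For readers hitting the same crash: per the traceback quoted in this issue, the SDK's quick accessors raise `ValueError` when no candidate was returned. A minimal sketch of a guard around `.text` (the helper name `safe_text` is mine, not from the SDK):

```python
# Sketch: guard response.text, which raises ValueError when the prompt
# or all candidates were blocked. Works with any object shaped like the
# google-generativeai GenerateContentResponse.
def safe_text(response, default=""):
    """Return response.text, or `default` when the response is missing or blocked."""
    if response is None or not getattr(response, "candidates", None):
        return default
    try:
        return response.text
    except ValueError:
        # No candidate survived; inspect response.prompt_feedback
        # for the block_reason instead of crashing.
        return default
```

With this, the loop above can assign `response_str = safe_text(response)` instead of touching `.text` directly.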
I am also getting block_reason: OTHER even though I have set all safety settings to BLOCK_NONE.
Have you solved this problem?
Try this, it works for me (the snippet was truncated in this comment; elided parts are marked `…`):

```typescript
import { GoogleGenerativeAI } from '@google/generative-ai';

const config = new ConfigService();

export async function generateText(data: string, type: 'title' | 'content') {
  const genAI = new GoogleGenerativeAI(config.get('GOOGLE_GEMINI_API_KEY'));
  const generationConfig = { /* … */ };
  const safetySettings = [ /* … */ ];
  const history = [];
  const chat = model.startChat({ /* … */ });
  let msg: string = 'YOUR_MESSAGE';
  const result = await chat.sendMessage(msg);
}
```
GoogleGenerativeAIResponseError: [GoogleGenerativeAI Error]: Candidate was blocked due to RECITATION

How about this one?
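A note on telling these apart: SAFETY and OTHER show up as a `block_reason` on `prompt_feedback` (as in the traceback in this issue), while RECITATION is reported as a candidate's `finish_reason`. A small sketch that checks both places (the function `block_cause` is mine, assuming the SDK's response shape):

```python
# Sketch: classify why a google-generativeai response came back empty.
def block_cause(response):
    """Return 'SAFETY', 'OTHER', 'RECITATION', etc., or None if not blocked."""
    fb = getattr(response, "prompt_feedback", None)
    if fb is not None and getattr(fb, "block_reason", None):
        # Prompt-level block, e.g. 'SAFETY' or 'OTHER'.
        return str(fb.block_reason)
    for cand in getattr(response, "candidates", []) or []:
        # Candidate-level block, e.g. FinishReason.RECITATION.
        reason = str(getattr(cand, "finish_reason", ""))
        if "RECITATION" in reason:
            return "RECITATION"
    return None
```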
Hey, have you figured out how to solve this problem?
Please consider reopening this closed issue or filing another one for further discussion. |
I'm here to tell you how I avoided the "block_reason: OTHER" error. I have no clue why the error occurs or what it means, but if you're fine with just skipping the prompt that causes it, there is a fix. I'm analysing the sentiment of the reviews in a .csv file, and when "block_reason: OTHER" occurs I just return "incorrect" instead of "positive" or "negative". If this method doesn't fit your use case, you can try to adjust it. Python code:
This way, when I encounter the error it just returns "incorrect" and continues with the next prompt. Hope I helped. ❤️
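The commenter's code block didn't survive in this copy of the thread, so here is a minimal sketch of the pattern they describe, not their actual code: call the model, and map a blocked response to "incorrect" (the helper name `sentiment_or_incorrect` and the callable `generate` are illustrative):

```python
# Sketch: skip prompts that trigger a block by catching the ValueError
# that the SDK's .text accessor raises when no candidate is returned.
def sentiment_or_incorrect(generate, review):
    """Return 'positive'/'negative' from the model, or 'incorrect' if blocked."""
    try:
        response = generate(review)  # e.g. model.generate_content(review)
        return response.text.strip().lower()
    except ValueError:
        # Blocked prompt (e.g. block_reason: OTHER) -> skip this row.
        return "incorrect"
```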
Well, in my case the problem was due to an explicit prompt.
Description of the bug:
When I send "1" to gemini-pro, it raises an exception:

```
ValueError: The `response.parts` quick accessor only works for a single candidate, but none were returned. Check the `response.prompt_feedback` to see if the prompt was blocked.
```

Then I printed response.prompt_feedback and got this:

```
block_reason: SAFETY
safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: LOW
}
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: MEDIUM
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}
```
Then I went to AI Studio:

So can anybody tell me what happened?

Actual vs expected behavior:
No response
Any other information you'd like to share?
No response