
“1” is blocked by safety reason? seriously #126

Closed
Andy963 opened this issue Dec 15, 2023 · 18 comments
Labels: component:python sdk (Issue/PR related to Python SDK), type:bug (Something isn't working)

Comments

@Andy963
Contributor

Andy963 commented Dec 15, 2023

Description of the bug:

When I send "1" to Gemini Pro, it raises an exception:
ValueError: The response.parts quick accessor only works for a single candidate, but none were returned. Check the response.prompt_feedback to see if the prompt was blocked.

Then I print response.prompt_feedback and I get this:

block_reason: SAFETY
safety_ratings {
category: HARM_CATEGORY_SEXUALLY_EXPLICIT
probability: NEGLIGIBLE
}
safety_ratings {
category: HARM_CATEGORY_HATE_SPEECH
probability: LOW
}
safety_ratings {
category: HARM_CATEGORY_HARASSMENT
probability: MEDIUM
}
safety_ratings {
category: HARM_CATEGORY_DANGEROUS_CONTENT
probability: NEGLIGIBLE
}
Then I go to AI Studio:

[screenshot]

Actual vs expected behavior:

So can anybody tell me what happened?
No response

Any other information you'd like to share?

No response

@Andy963 Andy963 added component:python sdk Issue/PR related to Python SDK type:bug Something isn't working labels Dec 15, 2023
@DiamondGo

The same here. lmfao.

API level censorship is stupid.

@ymodak ymodak self-assigned this Dec 19, 2023
@ymodak ymodak added the status:triaged Issue/PR triaged to the corresponding sub-team label Dec 19, 2023
@alexmavr

alexmavr commented Dec 22, 2023

Same here. I'm running the API over an eval dataset and it won't finish the full dataset run. If this is rate limiting, then at least say so clearly.

@LaMerdaSeca

I have the same error in Java ("The response is blocked due to safety reason"), even though in my settings I have:

setSafetySettings(Collections.singletonList(
                    SafetySetting.newBuilder()
                            .setThreshold(SafetySetting
                                    .HarmBlockThreshold.BLOCK_NONE).build()));

@elavalasrinivasreddy

Same here. I got a response for the numbers below; for the remaining numbers, the same error.
0: got the definition of zero
6: got an explanation of a hexagon
10: got these subpoints

  1. Computer Programming
  2. Machine Learning
  3. Data Analysis
  4. Web Development
  5. Digital Marketing
  6. Graphic Design
  7. Video Editing
  8. 3D Modeling and Animation
  9. Photography
  10. Music Production

@Roviky

Roviky commented Jan 6, 2024

I ran into this problem too. How can I get it to run again?

@HienBM

HienBM commented Jan 13, 2024

Try setting the threshold to ‘BLOCK_NONE’. I ran it successfully with that.

block_reason: SAFETY
safety_ratings {
category: HARM_CATEGORY_SEXUALLY_EXPLICIT
probability: BLOCK_NONE
}
safety_ratings {
category: HARM_CATEGORY_HATE_SPEECH
probability: BLOCK_NONE
}
safety_ratings {
category: HARM_CATEGORY_HARASSMENT
probability: BLOCK_NONE
}
safety_ratings {
category: HARM_CATEGORY_DANGEROUS_CONTENT
probability: BLOCK_NONE
}

@jacklanda

> Try setting the threshold to ‘BLOCK_NONE’. I ran it successfully with that.

Where should I set up this argument?

@HienBM

HienBM commented Jan 14, 2024

> Where should I set up this argument?

This is my setup:

[screenshot of the setup]

@jacklanda

jacklanda commented Jan 14, 2024

> This is my setup
> [screenshot of the setup]

It does work. A simple but effective solution. Thank you so much!

I don't even know what consideration (AI safety detection?) led the Gemini team to do this, but I believe this issue can be closed now. Thanks @HienBM for the useful advice.

@Andy963 Andy963 closed this as completed Jan 14, 2024
@github-actions github-actions bot removed the status:triaged Issue/PR triaged to the corresponding sub-team label Jan 14, 2024
@teddythinh

teddythinh commented Feb 1, 2024

I'm getting this error while using a for loop to feed the model questions and collect the answers.

Here is how I configure the model:

generation_config = {
  "candidate_count": 1,
  "max_output_tokens": 256,
  "temperature": 1.0,
  "top_p": 0.7,
}

safety_settings=[
  {
    "category": "HARM_CATEGORY_DANGEROUS",
    "threshold": "BLOCK_NONE",
  },
  {
    "category": "HARM_CATEGORY_HARASSMENT",
    "threshold": "BLOCK_NONE",
  },
  {
    "category": "HARM_CATEGORY_HATE_SPEECH",
    "threshold": "BLOCK_NONE",
  },
  {
    "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "threshold": "BLOCK_NONE",
  },
  {
    "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
    "threshold": "BLOCK_NONE",
  },
]

model = genai.GenerativeModel(
    model_name="gemini-pro",
    generation_config=generation_config,
    safety_settings=safety_settings
)

The model returns:

 genai.GenerativeModel(
   model_name='models/gemini-pro',
   generation_config={'candidate_count': 1, 'max_output_tokens': 256, 'temperature': 1.0, 'top_p': 0.7},
   safety_settings={<HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: 10>: <HarmBlockThreshold.BLOCK_NONE: 4>, <HarmCategory.HARM_CATEGORY_HARASSMENT: 7>: <HarmBlockThreshold.BLOCK_NONE: 4>, <HarmCategory.HARM_CATEGORY_HATE_SPEECH: 8>: <HarmBlockThreshold.BLOCK_NONE: 4>, <HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: 9>: <HarmBlockThreshold.BLOCK_NONE: 4>}
)

And here is how I run it:

response = None
timeout_counter = 0
while response is None and timeout_counter <= 30:
    try:
        response = model.generate_content(messages)
    except Exception as msg:
        pprint(msg)
        print('sleeping because of exception ...')
        timeout_counter += 1  # count the retry so the loop can eventually give up
        time.sleep(30)
        continue

if response is None:
    response_str = ""
else:
    response_str = response.text  # <- This line gets the error

@Marwa-Essam81

I am also getting block_reason: OTHER even though I have set all safety settings to BLOCK_NONE.

@Yinhance

Yinhance commented Feb 7, 2024

> I am also getting block_reason: OTHER even though I have set all safety settings to BLOCK_NONE.

Have you solved this problem?

@deepak032002

Try this, it works for me:

import {
  GoogleGenerativeAI,
  HarmCategory,
  HarmBlockThreshold,
} from '@google/generative-ai';
import { ConfigService } from '@nestjs/config';

const config = new ConfigService();

export async function generateText(data: string, type: 'title' | 'content') {
  const genAI = new GoogleGenerativeAI(config.get('GOOGLE_GEMINI_API_KEY'));
  const model = genAI.getGenerativeModel({ model: 'gemini-pro' });

  const generationConfig = {
    temperature: 0.9,
    topK: 1,
    topP: 1,
    maxOutputTokens: 2048,
  };

  const safetySettings = [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_NONE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
      threshold: HarmBlockThreshold.BLOCK_NONE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
      threshold: HarmBlockThreshold.BLOCK_NONE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
      threshold: HarmBlockThreshold.BLOCK_NONE,
    },
  ];

  const history = [];

  const chat = model.startChat({
    generationConfig,
    safetySettings,
    history,
  });

  let msg: string = "YOUR_MESSAGE";

  const result = await chat.sendMessage(msg);
  const response = result.response;
  const text = response.text();
  // Note: the replacement string in the original comment was swallowed by the
  // page rendering; '<br>' is a guess at what was intended here.
  return text.replaceAll('\n', '<br>');
}

@gunsterpsp

GoogleGenerativeAIResponseError: [GoogleGenerativeAI Error]: Candidate was blocked due to RECITATION
at response.text (file:///F:/oliverbackup/flutter_app/backend/node_modules/@google/generative-ai/dist/index.mjs:265:23)
at getAIResponse (file:///F:/oliverbackup/flutter_app/backend/controllers/MessagesController.js:148:31)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
response: {
candidates: [ [Object] ],
promptFeedback: { safetyRatings: [Array] },
text: [Function (anonymous)]
}
}

How about this one?

@shreyash-99

> I am also getting block_reason: OTHER even though I have set all safety settings to BLOCK_NONE.
>
> Have you solved this problem?

Hey, have you figured out how to solve this problem?

@jacklanda

Please consider reopening this closed issue or filing another one for further discussion.

@miRx923

miRx923 commented Mar 26, 2024

I'm here to tell you how I avoided the "block_reason: OTHER" error.

I have no clue why the error occurs or what it means, but if you're fine with just skipping the prompt that causes it, there is a fix. I'm analysing the sentiment of reviews in a .csv file, and if "block_reason: OTHER" occurs I just return "incorrect" instead of "positive" or "negative". If this method doesn't fit your use case, you can try to adjust it.

Python code:

    safety_settings = [
        {
            "category": "HARM_CATEGORY_HARASSMENT",
            "threshold": "BLOCK_NONE",
        },
        {
            "category": "HARM_CATEGORY_HATE_SPEECH",
            "threshold": "BLOCK_NONE",
        },
        {
            "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
            "threshold": "BLOCK_NONE",
        },
        {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "threshold": "BLOCK_NONE",
        },
    ]

    # generation_config is defined elsewhere in the original script
    model = genai.GenerativeModel("gemini-pro", generation_config=generation_config, safety_settings=safety_settings)
    chat = model.start_chat(history=[])


    def classify_sentiment(prompt):  # wrapper added here; the original returns imply a function
        sentiment = "incorrect"  # default if no usable response comes back

        try:
            response = chat.send_message(prompt)

            if response:
                output = response.text
                sentiment = output.strip()

        # Handle the blocked prompt exception
        except genai.types.generation_types.BlockedPromptException as e:
            print(f"Prompt blocked due to: {e}")
            return "incorrect"

        time.sleep(0.25)  # you can skip this line

        return sentiment

This way, when I encounter the error it just returns "incorrect" and continues with the next prompt. Hope I helped. ❤️

@pranavkshirsagar1924

Well, in my case the problem was due to an explicit prompt.
