
ValueError: The response.text quick accessor only works for simple (single-Part) text responses. This response is not simple text.Use the result.parts accessor or the full result.candidates[index].content.parts lookup instead. #170

Closed
HienBM opened this issue Jan 13, 2024 · 22 comments
Labels
component:python sdk (Issue/PR related to Python SDK) · type:bug (Something isn't working)

Comments

@HienBM

HienBM commented Jan 13, 2024

Description of the bug:

Can someone help me check this error? The same code still ran successfully yesterday.

File ~/cluster-env/trident_env/lib/python3.10/site-packages/pandas/core/series.py:4630, in Series.apply(self, func, convert_dtype, args, **kwargs)
4520 def apply(
4521 self,
4522 func: AggFuncType,
(...)
4525 **kwargs,
4526 ) -> DataFrame | Series:
4527 """
4528 Invoke function on values of Series.
4529
(...)
4628 dtype: float64
4629 """
-> 4630 return SeriesApply(self, func, convert_dtype, args, kwargs).apply()

File ~/cluster-env/trident_env/lib/python3.10/site-packages/pandas/core/apply.py:1025, in SeriesApply.apply(self)
1022 return self.apply_str()
1024 # self.f is Callable
-> 1025 return self.apply_standard()

File ~/cluster-env/trident_env/lib/python3.10/site-packages/pandas/core/apply.py:1076, in SeriesApply.apply_standard(self)
1074 else:
1075 values = obj.astype(object)._values
-> 1076 mapped = lib.map_infer(
1077 values,
1078 f,
1079 convert=self.convert_dtype,
1080 )
1082 if len(mapped) and isinstance(mapped[0], ABCSeries):
1083 # GH#43986 Need to do list(mapped) in order to get treated as nested
1084 # See also GH#25959 regarding EA support
1085 return obj._constructor_expanddim(list(mapped), index=obj.index)

File ~/cluster-env/trident_env/lib/python3.10/site-packages/pandas/_libs/lib.pyx:2834, in pandas._libs.lib.map_infer()

Cell In[116], line 82, in extract_absa_with_few_shot_gemini(text)
80 response.resolve()
81 time.sleep(1)
---> 82 return list_of_dict_to_string(string_to_list_dict(response.text.lower()))

File ~/cluster-env/trident_env/lib/python3.10/site-packages/google/generativeai/types/generation_types.py:328, in BaseGenerateContentResponse.text(self)
326 parts = self.parts
327 if len(parts) != 1 or "text" not in parts[0]:
--> 328 raise ValueError(
329 "The response.text quick accessor only works for "
330 "simple (single-Part) text responses. This response is not simple text."
331 "Use the result.parts accessor or the full "
332 "result.candidates[index].content.parts lookup "
333 "instead."
334 )
335 return parts[0].text

ValueError: The response.text quick accessor only works for simple (single-Part) text responses. This response is not simple text.Use the result.parts accessor or the full result.candidates[index].content.parts lookup instead.

Actual vs expected behavior:

No response

Any other information you'd like to share?

No response

@HienBM HienBM added the component:python sdk (Issue/PR related to Python SDK) and type:bug (Something isn't working) labels on Jan 13, 2024
@falahgs

falahgs commented Jan 14, 2024

Me too, I have the same error, especially with Gemini Vision.

@Vital1162

Vital1162 commented Jan 14, 2024

You could try deleting max_output_tokens from the generation_config in the model, if you use that.

@HienBM
Author

HienBM commented Jan 15, 2024

You could try to delete the generation_config in the model if you use that

It works for me. Thanks @ydm20231608

@HienBM HienBM closed this as completed Jan 15, 2024
@Ki-Zhang

You could try to delete the generation_config in the model if you use that

I also encountered this problem. Where is the generation_config that needs to be deleted? @HienBM

@HienBM
Author

HienBM commented Jan 15, 2024

Hi @Ki-Zhang ,

When you set up your model, the generation_config is passed in there. Try leaving it out, like this:

[screenshot: model setup without generation_config]
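
For reference, a minimal sketch of that setup (the model name and prompt here are placeholders, not taken from the screenshot):

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Construct the model without passing a generation_config;
# the API then falls back to its default limits.
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content("Classify the sentiment of this review: ...")
print(response.text)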

@Ki-Zhang

Thank you for your answer @HienBM
But I just used model = genai.GenerativeModel('gemini-pro-vision') to simply set up the gemini-pro-vision model, and I encountered the same problem.

Cell In[31], line 23, in dectect_object_why(ori_img, head_pos, gaze_pos)
      8 response = model.generate_content(
      9     [
     10         "The person outlined in the blue frame is looking at what object is marked with the red circle in the picture?",
   (...)
     20     stream=True
     21 )
     22 response.resolve()
---> 23 to_markdown(response.text)
     24 return response.text

File ~/miniconda3/envs/gemini/lib/python3.9/site-packages/google/generativeai/types/generation_types.py:328, in BaseGenerateContentResponse.text(self)
    326 parts = self.parts
    327 if len(parts) != 1 or "text" not in parts[0]:
--> 328     raise ValueError(
    329         "The `response.text` quick accessor only works for "
    330         "simple (single-`Part`) text responses. This response is not simple text."
    331         "Use the `result.parts` accessor or the full "
    332         "`result.candidates[index].content.parts` lookup "
    333         "instead."
    334     )
    335 return parts[0].text

ValueError: The `response.text` quick accessor only works for simple (single-`Part`) text responses. This response is not simple text.Use the `result.parts` accessor or the full `result.candidates[index].content.parts` lookup instead.

I don't know how to solve this. The problem does not occur when I use other example images as input to the model.

@FareedKhan-dev

@Ki-Zhang As of January 2024, the entire list of Harm Categories can be found here. The implementation for gemini-pro or gemini-pro-vision can be carried out as follows in Python:

safety_settings = [
    {
        "category": "HARM_CATEGORY_DANGEROUS",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_HARASSMENT",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "threshold": "BLOCK_NONE",
    },
]

Values for each category can be found here:

Threshold (Google AI Studio) | Threshold (API)                  | Description
Block none                   | BLOCK_NONE                       | Always show regardless of probability of unsafe content
Block few                    | BLOCK_ONLY_HIGH                  | Block when high probability of unsafe content
Block some                   | BLOCK_MEDIUM_AND_ABOVE           | Block when medium or high probability of unsafe content
Block most                   | BLOCK_LOW_AND_ABOVE              | Block when low, medium, or high probability of unsafe content
—                            | HARM_BLOCK_THRESHOLD_UNSPECIFIED | Threshold is unspecified, block using default threshold

These settings can be applied as:

# For image model 
image_model.generate_content([your_image, prompt], safety_settings=safety_settings)

# For text model 
text_model.generate_content(prompt, safety_settings=safety_settings)

Additionally, make sure the image does not contain content related to OpenAI or ChatGPT; otherwise, it may result in an error. Screenshots taken with the default Snipping Tool on Windows might also lead to such errors.
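
If the string keys feel fragile, the same settings can also be expressed with the SDK's enum types (a sketch of an equivalent form; note that HARM_CATEGORY_DANGEROUS is omitted here, which is an assumption on my part):

from google.generativeai.types import HarmCategory, HarmBlockThreshold

safety_settings = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}

text_model.generate_content(prompt, safety_settings=safety_settings)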

@chrbsg

chrbsg commented Jan 19, 2024

So this is caused by content being blocked on the server side? If so, the thrown exception text is terrible.
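
For what it's worth, you can usually confirm whether blocking was the cause before touching response.text (a small sketch using fields that already exist on the response object):

response = model.generate_content(prompt)

# If the prompt itself was blocked, prompt_feedback carries the block reason
# and the candidates list may be empty.
print(response.prompt_feedback)

# Otherwise, check why each candidate stopped (STOP, SAFETY, RECITATION, MAX_TOKENS, ...).
for candidate in response.candidates:
    print(candidate.finish_reason, candidate.safety_ratings)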

@Immortalise

(Quoting @FareedKhan-dev's safety_settings suggestion above.)

Thanks for providing this! However, the safety settings do not work for me; instead, changing the temperature from 0 to 0.7 works. The generated content may have been blocked, since I found my input question is about Black people (from the MMLU dataset).

@zengsihang

(Quoting @FareedKhan-dev's safety_settings suggestion above.)

Thanks for the information. However, both the safety settings and the temperature don't work for me. The content is in the biomedical domain, and all other cases generate successfully except for one. I don't know why it fails for that one.

@HienBM
Author

HienBM commented Feb 9, 2024

Try setting up

for candidate in response.candidates:
    return [part.text for part in candidate.content.parts]

instead of

response.text

It worked for me.

@EMichaelC

EMichaelC commented Feb 12, 2024

This is probably happening because you are getting a finish_reason of Recitation for your chosen candidate:

finish_reason
// This field may be populated with recitation information for any text
// included in the content. These are passages that are "recited" from
// copyrighted material in the foundational LLM's training data.

So you may need to simply choose another candidate or run again for a new result that doesn't infringe on copyright.
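
A minimal retry sketch along those lines (the helper name, attempt count, and model name are placeholders, not from the SDK):

import google.generativeai as genai

model = genai.GenerativeModel("gemini-pro")

def generate_avoiding_recitation(prompt, max_attempts=3):
    for _ in range(max_attempts):
        response = model.generate_content(prompt)
        if not response.candidates:
            continue  # the prompt itself was blocked, try again
        candidate = response.candidates[0]
        # Skip candidates that were cut off for recitation or came back empty.
        if candidate.finish_reason.name == "RECITATION" or not candidate.content.parts:
            continue
        return "".join(part.text for part in candidate.content.parts)
    return None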

@shrijayan

You could try to delete the generation_config in the model if you use that

This also works for me, but what may be the reason for that?

@saramirabi

You could try to delete the generation_config in the model if you use that

This also works for me, but what may be the reason for that?

I have the same issue and I wonder what the reason is.

@Vital1162

Vital1162 commented Mar 5, 2024

This problem happened when max_output_tokens was too small for the response, so there is no need to delete generation_config. That is my experience.
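
In other words, keeping generation_config but leaving enough headroom also avoids the error in that case (a sketch; the 2048 value is only an example, not a recommendation):

import google.generativeai as genai

model = genai.GenerativeModel(
    "gemini-pro",
    generation_config={
        "temperature": 0.7,
        "max_output_tokens": 2048,  # large enough that the reply is not cut off before any text is returned
    },
)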

@HienBM
Author

HienBM commented Mar 5, 2024

I think the main reason is that the model sometimes doesn't return any text, based on the explanation by @MarkDaoust in #196 (comment).

I still get this error with my code even when I delete generation_config.

But when I set up

for candidate in response.candidates:
    return [part.text for part in candidate.content.parts]

instead of
response.text

This error does not appear anymore.

@Bill-A

Bill-A commented Mar 29, 2024

@HienBM, Thank you for this information. It resolved my errors.

@codewithdark-git

codewithdark-git commented Mar 31, 2024

ValueError: The response.text quick accessor only works for simple (single-Part) text responses. This response is not simple text.Use the result.parts accessor or the full result.candidates[index].content.parts lookup instead.

This fix worked for me; anyone can check it against their code 😎. When you work with text only:

model = genai.GenerativeModel('gemini-pro')
prompt = "What is the meaning of life?"
response = model.generate_content(prompt)
specific_answer = response.candidates[0].content.parts[0].text

print(specific_answer)

And when working with an image:


response = model.generate_content(img)

try:
    # Check if 'candidates' list is not empty
    if response.candidates:
        # Access the first candidate's content if available
        if response.candidates[0].content.parts:
            generated_text = response.candidates[0].content.parts[0].text
            print("Generated Text:", generated_text)
        else:
            print("No generated text found in the candidate.")
    else:
        print("No candidates found in the response.")
except (AttributeError, IndexError) as e:
    print("Error:", e)

@0706020994

You could try deleting max_output_tokens from the generation_config in the model, if you use that.

Thank you very much.

@genicsoft

genicsoft commented Apr 11, 2024

Maybe it's because multiple results are generated.
When there is only one result, there is no problem using

response.text

directly. But if there are multiple results, an error will be reported. Then you need to use the first result by default, that is:

response.candidates[0].content.parts[0].text

@chrbsg

chrbsg commented Apr 11, 2024

Maybe it's because multiple results are generated

res.candidates[0].content.parts was an empty list in my case, and res.candidates[0].finish_reason was MAX_TOKENS.

My current diagnosis of this issue is that max_output_tokens is treated oddly by this generative-ai-python SDK. It appears that max_output_tokens is an absolute upper limit for a reply, and when this limit is reached, the SDK returns an empty reply with finish_reason=MAX_TOKENS. This is confirmed by the doc string for the error code:

MAX_TOKENS (2): The maximum number of tokens as specified in the request was reached.

So a lower max_output_tokens value will result in no text and a MAX_TOKENS error. This is different from the behaviour most people expect, which would be to return some text when the upper limit is hit. For example, in Vertex AI, max_output_tokens is interpreted as a length modifier, where a lower value results in shorter text responses (not no text):

MAX_OUTPUT_TOKENS: Maximum number of tokens that can be generated in the response. A token is approximately four characters. 100 tokens correspond to roughly 60-80 words. Specify a lower value for shorter responses and a higher value for potentially longer responses.
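
A defensive pattern for that case (a sketch; retrying with a larger limit is my workaround, not documented SDK behaviour, and the 2048 value is just an example):

response = model.generate_content(prompt)
candidate = response.candidates[0]

if candidate.finish_reason.name == "MAX_TOKENS" and not candidate.content.parts:
    # The candidate hit the token cap before producing any text;
    # retry with a larger per-call max_output_tokens.
    response = model.generate_content(
        prompt,
        generation_config={"max_output_tokens": 2048},
    )

text = "".join(part.text for part in response.candidates[0].content.parts)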

@AdarshChintada

AdarshChintada commented Apr 16, 2024


(Quoting @Ki-Zhang's gemini-pro-vision traceback above.)

@Ki-Zhang

The problem occurs not only with the image but also with the prompt. I tried the same image with a different prompt and it works.
If you do not want to change the prompt, then set the safety thresholds to BLOCK_NONE like below:

def get_gemini_response(input, image):
    model = genai.GenerativeModel('gemini-pro-vision')
    safe = [
        {"category": "HARM_CATEGORY_DANGEROUS", "threshold": "BLOCK_NONE"},
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
        {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
        {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
    ]
    if input != "":
        response = model.generate_content([input, image], safety_settings=safe)
    else:
        response = model.generate_content(image)
    return response.text

I hope it works for you. Thank you!
