Request to litellm:
litellm.completion(model='groq/llama-3.1-70b-versatile', messages=[{'role': 'user', 'content': 'List 5 important events in the 21 century'}], response_format=<class '__main__.CalendarEvent'>)
19:28:18 - LiteLLM:WARNING: utils.py:290 - `litellm.set_verbose` is deprecated. Please set`os.environ['LITELLM_LOG'] = 'DEBUG'`for debug logs.
SYNC kwargs[caching]: False; litellm.cache: None; kwargs.get('cache')['no-cache']: False
Final returned optional params: {'response_format': <class '__main__.CalendarEvent'>, 'extra_body': {}}
POST Request Sent from LiteLLM:
curl -X POST \
https://api.groq.com/openai/v1/ \
-d '{'model': 'llama-3.1-70b-versatile', 'messages': [{'role': 'user', 'content': 'List 5 important events in the 21 century'}], 'response_format': <class '__main__.CalendarEvent'>, 'extra_body': {}}'
openai.py: Received openai error - You tried to pass a `BaseModel` class to `chat.completions.create()`; You must use `beta.chat.completions.parse()` instead

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.
Provider List: https://docs.litellm.ai/docs/providers

Traceback (most recent call last):
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 854, in completion
    raise e
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 790, in completion
    self.make_sync_openai_chat_completion_request(
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 651, in make_sync_openai_chat_completion_request
    raise e
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 633, in make_sync_openai_chat_completion_request
    raw_response = openai_client.chat.completions.with_raw_response.create(
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/openai/_legacy_response.py", line 356, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 828, in create
    validate_response_format(response_format)
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1744, in validate_response_format
    raise TypeError(
TypeError: You tried to pass a `BaseModel` class to `chat.completions.create()`; You must use `beta.chat.completions.parse()` instead

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/main.py", line 1483, in completion
    response = groq_chat_completions.completion(
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/llms/groq/chat/handler.py", line 41, in completion
    return super().completion(
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 864, in completion
    raise OpenAIError(
litellm.llms.OpenAI.openai.OpenAIError: You tried to pass a `BaseModel` class to `chat.completions.create()`; You must use `beta.chat.completions.parse()` instead

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/t/atest/use_computer/test.py", line 110, in <module>
    resp = litellm.completion(
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/utils.py", line 960, in wrapper
    raise e
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/utils.py", line 849, in wrapper
    result = original_function(*args, **kwargs)
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/main.py", line 3060, in completion
    raise exception_type(
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2136, in exception_type
    raise e
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 404, in exception_type
    raise APIError(
litellm.exceptions.APIError: litellm.APIError: APIError: GroqException - You tried to pass a `BaseModel` class to `chat.completions.create()`; You must use `beta.chat.completions.parse()` instead
Output: for groq/llama-3.2-11b-vision-preview -> error
Request to litellm:
litellm.completion(model='groq/llama-3.2-11b-vision-preview', messages=[{'role': 'user', 'content': 'List 5 important events in the 21 century'}], response_format=<class '__main__.CalendarEvent'>)
19:30:42 - LiteLLM:WARNING: utils.py:290 - `litellm.set_verbose` is deprecated. Please set`os.environ['LITELLM_LOG'] = 'DEBUG'`for debug logs.
SYNC kwargs[caching]: False; litellm.cache: None; kwargs.get('cache')['no-cache']: False
Final returned optional params: {'response_format': <class '__main__.CalendarEvent'>, 'extra_body': {}}
POST Request Sent from LiteLLM:
curl -X POST \
https://api.groq.com/openai/v1/ \
-d '{'model': 'llama-3.2-11b-vision-preview', 'messages': [{'role': 'user', 'content': 'List 5 important events in the 21 century'}], 'response_format': <class '__main__.CalendarEvent'>, 'extra_body': {}}'
openai.py: Received openai error - You tried to pass a `BaseModel` class to `chat.completions.create()`; You must use `beta.chat.completions.parse()` instead

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.
Provider List: https://docs.litellm.ai/docs/providers

Traceback (most recent call last):
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 854, in completion
    raise e
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 790, in completion
    self.make_sync_openai_chat_completion_request(
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 651, in make_sync_openai_chat_completion_request
    raise e
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 633, in make_sync_openai_chat_completion_request
    raw_response = openai_client.chat.completions.with_raw_response.create(
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/openai/_legacy_response.py", line 356, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 828, in create
    validate_response_format(response_format)
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1744, in validate_response_format
    raise TypeError(
TypeError: You tried to pass a `BaseModel` class to `chat.completions.create()`; You must use `beta.chat.completions.parse()` instead

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/main.py", line 1483, in completion
    response = groq_chat_completions.completion(
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/llms/groq/chat/handler.py", line 41, in completion
    return super().completion(
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 864, in completion
    raise OpenAIError(
litellm.llms.OpenAI.openai.OpenAIError: You tried to pass a `BaseModel` class to `chat.completions.create()`; You must use `beta.chat.completions.parse()` instead

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/t/atest/use_computer/test.py", line 110, in <module>
    resp = litellm.completion(
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/utils.py", line 960, in wrapper
    raise e
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/utils.py", line 849, in wrapper
    result = original_function(*args, **kwargs)
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/main.py", line 3060, in completion
    raise exception_type(
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2136, in exception_type
    raise e
  File "/home/t/atest/use_computer/.venv/lib/python3.11/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 404, in exception_type
    raise APIError(
litellm.exceptions.APIError: litellm.APIError: APIError: GroqException - You tried to pass a `BaseModel` class to `chat.completions.create()`; You must use `beta.chat.completions.parse()` instead
litellm might be failing to handle a `pydantic.BaseModel` class passed as `response_format`, using litellm = "1.52.10".

Code I used:
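The snippet itself didn't survive the copy into this report, but from the debug log above, a minimal reproduction looks roughly like this (the fields of CalendarEvent are a guess; any pydantic BaseModel subclass passed as `response_format` triggers the same error for `groq/*` models):

```python
from pydantic import BaseModel


# Hypothetical schema -- the actual fields don't matter; only the fact
# that a BaseModel *class* is passed as response_format is relevant.
class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]


def list_events(model: str):
    # Imported lazily so the sketch can be inspected without litellm installed.
    import litellm

    # This is the call that fails for groq/* models: litellm forwards the
    # class to the OpenAI SDK's chat.completions.create(), whose
    # validate_response_format() rejects BaseModel classes.
    return litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "List 5 important events in the 21 century"}],
        response_format=CalendarEvent,
    )
```

The same call succeeds for the `gemini/*` models listed below, so the failure is specific to the groq code path.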
Output: for gemini/gemini-1.5-pro -> ok
Output: for gemini/gemini-1.5-flash -> ok
Output: for groq/llama-3.1-70b-versatile -> error
Output: for groq/llama-3.2-11b-vision-preview -> error