What happened?
Starting with 1.69.1, I'm unable to include an image input in a request to Mistral models. This worked on previous versions without any issues, and I've confirmed the issue still exists in 1.72.2. It also doesn't seem to affect other model families. Would appreciate any help on this, thanks!
import litellm

# Minimal repro: one text part plus a tiny base64 image part, sent to a Mistral model.
req = {
    'messages': [
        {
            'content': [
                {
                    'text': 'Was an image passed in, yes or no?',
                    'type': 'text'
                },
                {
                    'image_url': {
                        'url': 'data:image/png;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7',
                        'detail': 'auto'
                    },
                    'type': 'image_url'
                }
            ],
            'role': 'user'
        }
    ],
    'model': 'mistral-medium-2505',
    'n': 1,
    'custom_llm_provider': 'mistral',
    'max_completion_tokens': 256,
    'api_key': 'XXXXXX'
}

await litellm.acompletion(**req)
Relevant log output
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File .../litellm/main.py:508, in acompletion(model, messages, functions, function_call, timeout, temperature, top_p, n, stream, stream_options, stop, max_tokens, max_completion_tokens, modalities, prediction, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, parallel_tool_calls, logprobs, top_logprobs, deployment_id, reasoning_effort, base_url, api_version, api_key, model_list, extra_headers, thinking, **kwargs)
507 elif asyncio.iscoroutine(init_response):
--> 508 response = await init_response
509 else:
File .../litellm/llms/openai/openai.py:765, in OpenAIChatCompletion.acompletion(self, messages, optional_params, litellm_params, provider_config, model, model_response, logging_obj, timeout, api_key, api_base, api_version, organization, client, max_retries, headers, drop_params, stream_options, fake_stream)
764 response = None
--> 765 data = await provider_config.async_transform_request(
766 model=model,
767 messages=messages,
768 optional_params=optional_params,
769 litellm_params=litellm_params,
770 headers=headers or {},
771 )
772 for _ in range(
773 2
774 ): # if call fails due to alternating messages, retry with reformatted message
File .../litellm/llms/openai/chat/gpt_transformation.py:400, in OpenAIGPTConfig.async_transform_request(self, model, messages, optional_params, litellm_params, headers)
392 async def async_transform_request(
393 self,
394 model: str,
(...) 398 headers: dict,
399 ) -> dict:
--> 400 transformed_messages = await self._transform_messages(
401 messages=messages, model=model, is_async=True
402 )
404 return {
405 "model": model,
406 "messages": transformed_messages,
407 **optional_params,
408 }
TypeError: object list can't be used in 'await' expression
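For what it's worth, the error looks like a plain list being awaited. A minimal sketch of that failure mode (hypothetical names, not LiteLLM's actual code):

import asyncio

def _transform_messages(messages, is_async=False):
    # Hypothetical stand-in: returns a plain list even when is_async=True,
    # which is what the traceback suggests is happening upstream.
    return list(messages)

async def async_transform_request(messages):
    # Awaiting the list reproduces the exact error:
    # TypeError: object list can't be used in 'await' expression
    return await _transform_messages(messages, is_async=True)

asyncio.run(async_transform_request([{"role": "user", "content": "hi"}]))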
Are you an ML Ops Team?
Yes
What LiteLLM version are you on?
v1.72.2
Twitter / LinkedIn details
No response