I am writing a program that uses AzureAIInferenceChatClient, and I ran into an issue.
Mistral-large-2407 works normally, but Cohere-command-r-plus-08-2024 throws an exception.
I prepared a short sample for verification.
When the sample executes CompleteAsync(), the service returns "400 (Bad Request)".
Sample Code:
using Azure;
using Azure.AI.Inference;
using Microsoft.Extensions.AI;

string? AZURE_COHERE_ENDPOINT = Environment.GetEnvironmentVariable("AZURE_COHERE_ENDPOINT");
string? AZURE_COHERE_API_KEY = Environment.GetEnvironmentVariable("AZURE_COHERE_API_KEY");
string? modelId = Environment.GetEnvironmentVariable("AZURE_COHERE_MODELID");

ArgumentNullException.ThrowIfNull(AZURE_COHERE_ENDPOINT);
ArgumentNullException.ThrowIfNull(AZURE_COHERE_API_KEY);
ArgumentNullException.ThrowIfNull(modelId);

Uri endpoint = new(AZURE_COHERE_ENDPOINT);
AzureKeyCredential credential = new(AZURE_COHERE_API_KEY);
ChatCompletionsClient chatCompletionsClient = new(endpoint, credential);

Response<ModelInfo> modelInfo = chatCompletionsClient.GetModelInfo();
Console.WriteLine($"Model name: {modelInfo.Value.ModelName}");
Console.WriteLine($"Model type: {modelInfo.Value.ModelType}");
Console.WriteLine($"Model provider name: {modelInfo.Value.ModelProviderName}");

try
{
    AzureAIInferenceChatClient client = new(chatCompletionsClient, modelId);
    ChatCompletion result = await client.CompleteAsync("What is AI?");
    Console.WriteLine(result);
}
catch (Exception ex)
{
    Console.WriteLine(ex);
}
Message printed to the console:
Model name: Cohere Command R+
Model type: chat-completion
Model provider name: Cohere
Azure.RequestFailedException: {"message":"invalid type: parameter messages.content is of type array but should be of type string"}
Status: 400 (Bad Request)
ErrorCode: Bad Request
Content:
{"error":{"code":"Bad Request","message":"{\"message\":\"invalid type: parameter messages.content is of type array but should be of type string\"}","status":400}}
Headers:
Cache-Control: no-cache, no-store, no-transform, must-revalidate, private, max-age=0
Pragma: no-cache
Vary: REDACTED
x-accel-expires: REDACTED
x-ms-rai-invoked: REDACTED
X-Request-ID: REDACTED
ms-azureml-model-error-reason: REDACTED
ms-azureml-model-error-statuscode: REDACTED
x-ms-client-request-id: 4053b640-fc4a-4be5-ab2d-bb183756abd2
Request-Context: REDACTED
azureml-model-session: REDACTED
Date: Tue, 26 Nov 2024 23:49:27 GMT
Content-Length: 162
Content-Type: application/json
Expires: Thu, 01 Jan 1970 00:00:00 UTC
at Azure.Core.HttpPipelineExtensions.ProcessMessageAsync(HttpPipeline pipeline, HttpMessage message, RequestContext requestContext, CancellationToken cancellationToken)
at Azure.AI.Inference.ChatCompletionsClient.CompleteAsync(RequestContent content, String extraParams, RequestContext context)
at Azure.AI.Inference.ChatCompletionsClient.CompleteAsync(ChatCompletionsOptions chatCompletionsOptions, CancellationToken cancellationToken)
at Microsoft.Extensions.AI.AzureAIInferenceChatClient.CompleteAsync(IList`1 chatMessages, ChatOptions options, CancellationToken cancellationToken)
at Program.<Main>$(String[] args) in C:\teacup\source\repos\AzureAIInferenceExample\Program.cs:line 27
Is Command R+ itself functioning correctly?
Yes: I confirmed that it works by constructing the request JSON manually, as shown below.
However, that approach gives up the benefits of Microsoft.Extensions.AI.
Sample Code used for validation:
using Newtonsoft.Json;
using System.Net.Http.Headers;
using System.Text;

string? AZURE_COHERE_ENDPOINT = Environment.GetEnvironmentVariable("AZURE_COHERE_ENDPOINT");
string? AZURE_COHERE_API_KEY = Environment.GetEnvironmentVariable("AZURE_COHERE_API_KEY");
string? modelId = Environment.GetEnvironmentVariable("AZURE_COHERE_MODELID");

ArgumentNullException.ThrowIfNull(AZURE_COHERE_ENDPOINT);
ArgumentNullException.ThrowIfNull(AZURE_COHERE_API_KEY);
ArgumentNullException.ThrowIfNull(modelId);

Uri endpoint = new(AZURE_COHERE_ENDPOINT + "/chat/completions");

using HttpClient client = new();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AZURE_COHERE_API_KEY);
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

var requestContent = new
{
    model = modelId,
    messages = new[] { new { role = "user", content = "What is AI?" } }
};
string jsonContent = JsonConvert.SerializeObject(requestContent);
StringContent content = new(jsonContent, Encoding.UTF8, "application/json");

try
{
    HttpResponseMessage response = await client.PostAsync(endpoint, content);
    response.EnsureSuccessStatusCode();
    string responseString = await response.Content.ReadAsStringAsync();
    Console.WriteLine(responseString);
}
catch (Exception ex)
{
    Console.WriteLine(ex);
}
Assumption
Unfortunately, I don't have an environment at hand to build and verify with, but reading the source code reveals the following issue.
Microsoft.Extensions.AI.AzureAIInferenceChatClient does not appear to account for the two different content representations used by Azure.AI.Inference.ChatRequestUserMessage.
Microsoft.Extensions.AI.AzureAIInferenceChatClient:
- Message content is always handled uniformly as a list (IList<AIContent>).
- When there are multiple texts: the provided IList<AIContent> contents are used as-is.
- When there is a single text: a List<AIContent>(1) is created containing new TextContent(content).

Azure.AI.Inference.ChatRequestUserMessage:
- It internally has two mutually exclusive members, string Content and IList<ChatMessageContentItem> MultimodalContentItems; only one of them is ever used.
- There are two constructors, and each initializes a different member.
- When there are multiple content items: it serializes MultimodalContentItems with Utf8JsonWriter.WriteStartArray() and Utf8JsonWriter.WriteEndArray().
- When there is a single text: MultimodalContentItems == null, so it serializes Content with Utf8JsonWriter.WriteStringValue().

Because AzureAIInferenceChatClient always supplies a list, the array-writing path is always taken, even for a single text.
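Concretely, the two paths should produce wire formats like the following (these payloads are my reconstruction from the serializer behavior described above, not captured traffic). The string form, which the Cohere endpoint accepts:

```json
{"messages": [{"role": "user", "content": "What is AI?"}]}
```

And the content-item-array form, which matches the rejected request (the exact item shape, i.e. the "type"/"text" fields, is my assumption):

```json
{"messages": [{"role": "user", "content": [{"type": "text", "text": "What is AI?"}]}]}
```

The error text, "parameter messages.content is of type array but should be of type string", is consistent with the second shape being sent.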
Reproduction Steps
1. Create a console application with any name.
2. Install Azure.AI.Inference, Microsoft.Extensions.AI, and Microsoft.Extensions.AI.AzureAIInference from NuGet. (The latest versions are 1.0.0-beta.2 and 9.0.1-preview.1.24570.5, respectively.)
3. Copy and paste the source code below, and appropriately set AZURE_COHERE_ENDPOINT, AZURE_COHERE_API_KEY, and modelId.
4. Run it.
5. Check the console output.
using Azure;
using Azure.AI.Inference;
using Microsoft.Extensions.AI;

string? AZURE_COHERE_ENDPOINT = Environment.GetEnvironmentVariable("AZURE_COHERE_ENDPOINT");
string? AZURE_COHERE_API_KEY = Environment.GetEnvironmentVariable("AZURE_COHERE_API_KEY");
string? modelId = Environment.GetEnvironmentVariable("AZURE_COHERE_MODELID");

ArgumentNullException.ThrowIfNull(AZURE_COHERE_ENDPOINT);
ArgumentNullException.ThrowIfNull(AZURE_COHERE_API_KEY);
ArgumentNullException.ThrowIfNull(modelId);

Uri endpoint = new(AZURE_COHERE_ENDPOINT);
AzureKeyCredential credential = new(AZURE_COHERE_API_KEY);
ChatCompletionsClient chatCompletionsClient = new(endpoint, credential);

Response<ModelInfo> modelInfo = chatCompletionsClient.GetModelInfo();
Console.WriteLine($"Model name: {modelInfo.Value.ModelName}");
Console.WriteLine($"Model type: {modelInfo.Value.ModelType}");
Console.WriteLine($"Model provider name: {modelInfo.Value.ModelProviderName}");

try
{
    AzureAIInferenceChatClient client = new(chatCompletionsClient, modelId);
    ChatCompletion result = await client.CompleteAsync("What is AI?");
    Console.WriteLine(result);
}
catch (Exception ex)
{
    Console.WriteLine(ex);
}
Expected behavior
Below is the normal output for Mistral Large.
Command R+ is expected to produce similar output.
Model name: mistral-large-2407
Model type: chat-completion
Model provider name: Mistral
AI, or Artificial Intelligence, refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (acquiring information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions), and self-correction. Here are some key aspects of AI:
1. **Machine Learning (ML)**: A subset of AI that involves training algorithms to learn from data, make predictions, or decisions without being explicitly programmed.
2. **Deep Learning (DL)**: A subset of machine learning that uses neural networks with many layers to learn and make decisions on data.
3. **Natural Language Processing (NLP)**: A branch of AI that deals with the interaction between computers and humans through natural language.
4. **Computer Vision**: A field of AI that focuses on enabling computers to interpret and understand the visual world, e.g., recognizing objects in images or videos.
5. **Robotics**: A domain that involves designing, building, and operating robots, often incorporating AI for control, perception, and cognition.
AI applications are vast and include search algorithms, recommendation systems, voice assistants, fraud detection, autonomous vehicles, and much more. The goal of AI is to create systems that can function intelligently and independently, mimicking human-like intelligence.
Actual behavior
Model name: Cohere Command R+
Model type: chat-completion
Model provider name: Cohere
Azure.RequestFailedException: {"message":"invalid type: parameter messages.content is of type array but should be of type string"}
Status: 400 (Bad Request)
ErrorCode: Bad Request
Content:
{"error":{"code":"Bad Request","message":"{\"message\":\"invalid type: parameter messages.content is of type array but should be of type string\"}","status":400}}
Headers:
Cache-Control: no-cache, no-store, no-transform, must-revalidate, private, max-age=0
Pragma: no-cache
Vary: REDACTED
x-accel-expires: REDACTED
x-ms-rai-invoked: REDACTED
X-Request-ID: REDACTED
ms-azureml-model-error-reason: REDACTED
ms-azureml-model-error-statuscode: REDACTED
x-ms-client-request-id: 4053b640-fc4a-4be5-ab2d-bb183756abd2
Request-Context: REDACTED
azureml-model-session: REDACTED
Date: Tue, 26 Nov 2024 23:49:27 GMT
Content-Length: 162
Content-Type: application/json
Expires: Thu, 01 Jan 1970 00:00:00 UTC
at Azure.Core.HttpPipelineExtensions.ProcessMessageAsync(HttpPipeline pipeline, HttpMessage message, RequestContext requestContext, CancellationToken cancellationToken)
at Azure.AI.Inference.ChatCompletionsClient.CompleteAsync(RequestContent content, String extraParams, RequestContext context)
at Azure.AI.Inference.ChatCompletionsClient.CompleteAsync(ChatCompletionsOptions chatCompletionsOptions, CancellationToken cancellationToken)
at Microsoft.Extensions.AI.AzureAIInferenceChatClient.CompleteAsync(IList`1 chatMessages, ChatOptions options, CancellationToken cancellationToken)
at Program.<Main>$(String[] args) in C:\teacup\source\repos\AzureAIInferenceExample\Program.cs:line 27
Regression?
No response
Known Workarounds
No response
Configuration
No response
Other information
No response