Describe the bug
Declaring the response_format in a yaml file does not work with AzureAIInference.
To Reproduce
Steps to reproduce the behavior:
Create a yaml function:
```yaml
name: DummyPrompt
description: hello
template_format: jinja2
execution_settings:
  azure_ai_inference:
    max_tokens: 100
    temperature: 0.0
    response_format:
      type: json_schema
      json_schema:
        name: dummy_format
        schema:
          type: object
          properties:
            message: {type: string}
template: |
  <message role="system">
    Your goal is to return the message of the user as JSON.
  </message>
  <message role="user">
    {{ message }}
  </message>
```
Try to invoke the function with AzureAIInferenceChatCompletion.
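For reference, this is roughly how I load and invoke the function; a minimal sketch that assumes the YAML above is saved as dummy_prompt.yaml, that the endpoint, API key, and model id are placeholders, and that the service_id matches the azure_ai_inference key under execution_settings:

```python
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.azure_ai_inference import AzureAIInferenceChatCompletion
from semantic_kernel.functions import KernelArguments, KernelFunctionFromPrompt


async def main() -> None:
    kernel = Kernel()
    # service_id matches the "azure_ai_inference" key in the YAML execution_settings.
    kernel.add_service(
        AzureAIInferenceChatCompletion(
            service_id="azure_ai_inference",
            ai_model_id="<deployment-name>",  # placeholder
            endpoint="https://<resource>.services.ai.azure.com/models",  # placeholder
            api_key="<api-key>",  # placeholder
        )
    )

    # Load the prompt function from the YAML shown above (assumed file name).
    with open("dummy_prompt.yaml") as f:
        function = KernelFunctionFromPrompt.from_yaml(f.read())

    # The failure happens while the execution settings / response_format are
    # being processed, before any request reaches the model.
    result = await kernel.invoke(function, KernelArguments(message="hello world"))
    print(result)


asyncio.run(main())
```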
Semantic Kernel, and more specifically AzureAIInferenceChatPromptExecutionSettings, expects response_format to be {"type": "json_schema", "json_schema": {}} (link).
The problem is that the response_format dict is passed directly, as-is, to AzureAIInferenceChatCompletion.
As a result, the JsonSchemaFormat cannot be created correctly, because it is built from the full {"type": "json_schema", "json_schema": {}} wrapper instead of from the content of json_schema.
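To make the mismatch concrete, here is a rough sketch (not the actual Semantic Kernel code) of what gets built today versus what the YAML author expects, assuming the JsonSchemaFormat model and its name/schema fields from azure-ai-inference 1.0.0b9:

```python
from azure.ai.inference.models import JsonSchemaFormat

# response_format exactly as it comes out of the YAML execution_settings.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "dummy_format",
        "schema": {"type": "object", "properties": {"message": {"type": "string"}}},
    },
}

# What effectively happens today: the whole wrapper dict is used, so the
# resulting format carries "type"/"json_schema" keys but no name or schema.
# JsonSchemaFormat(**response_format)  # -> invalid format

# What the YAML author expects: only the inner json_schema content is used.
expected = JsonSchemaFormat(**response_format["json_schema"])
print(expected.name)  # dummy_format
```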
Expected behavior
The content of json_schema is correctly extracted and passed along when response_format is read from a dict.
Platform
- Language: Python
- Source: semantic-kernel==1.32.0, azure-ai-inference==1.0.0b9
- AI model: N/A, the problem arises before the model is actually called
Additional context
In the code:
```python
# Case 4: response_format is a dictionary (legacy), create JsonSchemaFormat from dict
```
Does this mean that declaring response_format in YAML isn't intended anymore, or am I missing something?