Description
Assume there are two plugin functions: Load and Save.
```csharp
[KernelFunction]
[Description("Loads the content from the given file.")]
public async Task<string> LoadAsync([Description("The file to be loaded.")] string file);

[KernelFunction]
[Description("Saves the given text to a file.")]
public async Task<string> SaveAsync(
    [Description("The text to be saved.")] string text,
    [Description("Where to save the text.")] string? file = null);
```
Both functions are invoked correctly when given appropriate single-step prompts.
Now consider a prompt like the following:
"Load the text from file 'abc.txt' and then save it to file 'cde.txt'."
This should invoke LoadAsync first and then SaveAsync, but that is not what actually happens: LoadAsync is always invoked, while SaveAsync is sometimes invoked only after several minutes of waiting, if it is invoked at all.
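For context, the kernel, the plugin registration, and the execution settings enabling automatic function calling are wired up roughly like the sketch below (the `FilePlugin` class name, deployment name, endpoint, and key are placeholders, not the actual values):

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;

var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",                          // placeholder deployment name
    endpoint: "https://<resource>.openai.azure.com/",  // placeholder endpoint
    apiKey: "<api-key>");                              // placeholder key
builder.Plugins.AddFromType<FilePlugin>();             // FilePlugin: placeholder class holding LoadAsync/SaveAsync

var kernel = builder.Build();
var chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();

// Automatic function calling, so the model is expected to orchestrate LoadAsync and then SaveAsync.
var settings = new AzureOpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var chatHistory = new ChatHistory();
chatHistory.AddUserMessage("Load the text from file 'abc.txt' and then save it to file 'cde.txt'.");
```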
To investigate the issue, I implemented streaming as shown below:
```csharp
var stream = chatCompletionService.GetStreamingChatMessageContentsAsync(
    chatHistory,
    executionSettings: settings,
    kernel: kernel);

await foreach (var message in stream)
{
    if (message.Content?.Length > 0)
    {
        result.AppendLine(message.Content);
        Console.Write(message.Content);
    }
    else if (message.Items.Count > 0)
    {
        foreach (var item in message.Items)
        {
            Console.Write("."); // <------------ this line fires continuously
        }
    }
}
```
The `message.Items.Count > 0` branch is hit continuously, with each item carrying only a very short chunk of the function-call arguments, and this goes on and on.
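To make those items readable (instead of printing a dot per item), a sketch like the one below can classify each streamed item; it reuses the same `chatCompletionService`, `chatHistory`, `settings`, and `kernel` variables as above. With automatic function calling over streaming, the items are typically `StreamingFunctionCallUpdateContent` instances whose `Arguments` arrive as many tiny fragments of a single tool call:

```csharp
using Microsoft.SemanticKernel;

// Diagnostic sketch: print what each streamed item actually is.
await foreach (var message in chatCompletionService.GetStreamingChatMessageContentsAsync(
                   chatHistory, executionSettings: settings, kernel: kernel))
{
    foreach (var item in message.Items)
    {
        switch (item)
        {
            case StreamingTextContent text when !string.IsNullOrEmpty(text.Text):
                Console.Write(text.Text);
                break;
            case StreamingFunctionCallUpdateContent update:
                // A single function call is streamed as many small argument fragments.
                Console.WriteLine($"[function-call chunk] name='{update.Name}' args='{update.Arguments}'");
                break;
            default:
                Console.WriteLine($"[item] {item.GetType().Name}");
                break;
        }
    }
}
```

As a cross-check, switching to `FunctionChoiceBehavior.Auto(autoInvoke: false)` and accumulating the updates with `FunctionCallContentBuilder` should reveal whether the model ever actually requests SaveAsync, or whether the orchestration stalls while streaming the first call's arguments.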
Platform
- Language: C#
- Source:
```xml
<PackageReference Include="Microsoft.SemanticKernel.Connectors.AzureOpenAI" Version="1.47.0" />
<PackageReference Include="Microsoft.SemanticKernel.Connectors.OpenAI" Version="1.47.0" />
<PackageReference Include="Microsoft.SemanticKernel.Process.Core" Version="1.47.0-alpha" />
<PackageReference Include="Microsoft.SemanticKernel.Process.LocalRuntime" Version="1.47.0-alpha" />
```
- AI model: Azure OpenAI GPT-4o