Few-shot prompting not working for function calling - getting error #6801
-
Hi, we are using SK 1.13, gpt-4-turbo, and native plugins. Our idea is to use a plugin for contextual data, i.e. data that the LLM can deduce from what is already in the chat history.

As an example, we have Plugin1 with Function1, which calls an API to retrieve JSON data. Let's assume the function retrieves the weather data for a city, in this case Boston, and that the LLM does not have this data. So the user prompt is "Get me the weather for Boston." In the Plugin1/Function1 call we call an API to fetch the weather for Boston, which leaves the following entries in the kernel chat history: the system prompt, the user prompt, an Assistant role entry, and a Tool role entry.

I saw this example https://community.openai.com/t/few-shot-and-function-calling/265908/2 and tried to provide few-shot examples by adding the following to the chat history: a system prompt, a user prompt, an Assistant role entry, and a Tool role entry (the first weather-in-Boston response); a sketch of what we add follows at the end of this post. The intent is that these entries tell the LLM that, when the same prompt is issued for data that can already be deduced from the prior chat history, it should call another plugin and function with the already-deduced data.

We are getting the error below:

Content:

Please help us understand what we are doing wrong here. Thanks
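Roughly, the few-shot entries described above look like the following sketch. This assumes a recent SK version where the FunctionCallContent and FunctionResultContent content types are available; the function/plugin names, call id, arguments, and result text are illustrative placeholders, not the actual code.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Sketch of the few-shot entries added to the kernel chat history.
// The id, arguments, and result text are illustrative placeholders.
var chatHistory = new ChatHistory("You are a helpful weather assistant."); // system prompt
chatHistory.AddUserMessage("Get me the weather for Boston.");              // user prompt

// Assistant role entry: the example tool-call request.
var exampleCall = new FunctionCallContent(
    functionName: "Function1",
    pluginName: "Plugin1",
    id: "call_example_001",
    arguments: new KernelArguments { ["location"] = "Boston" });
chatHistory.Add(new ChatMessageContent(
    AuthorRole.Assistant,
    new ChatMessageContentItemCollection { exampleCall }));

// Tool role entry: the first weather-in-Boston response, tied to the same call id.
chatHistory.Add(new ChatMessageContent(
    AuthorRole.Tool,
    new ChatMessageContentItemCollection { new FunctionResultContent(exampleCall, "12°C, mostly cloudy") }));
```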
-
Hi @EVENFLOW212, thanks for the question. I put together a sample to try out your scenario; here's the code:

```csharp
public sealed class OpenAI_RepeatedFunctionCalling(ITestOutputHelper output) : BaseTest(output)
{
    /// <summary>
    /// Sample shows how to reuse a function result from the chat history.
    /// </summary>
    [Fact]
    public async Task ReuseFunctionResultExecutionAsync()
    {
        // Create a kernel with OpenAI chat completion and WeatherPlugin
        Kernel kernel = CreateKernelWithPlugin<WeatherPlugin>();
        var service = kernel.GetRequiredService<IChatCompletionService>();

        // Invoke chat prompt with auto invocation of functions enabled
        var chatHistory = new ChatHistory
        {
            new ChatMessageContent(AuthorRole.User, "What is the weather like in Boston?")
        };
        var executionSettings = new OpenAIPromptExecutionSettings { ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions };
        var result1 = await service.GetChatMessageContentAsync(chatHistory, executionSettings, kernel);
        chatHistory.Add(result1);
        Console.WriteLine(result1);

        chatHistory.Add(new ChatMessageContent(AuthorRole.User, "What is the weather like in Paris?"));
        var result2 = await service.GetChatMessageContentAsync(chatHistory, executionSettings, kernel);
        chatHistory.Add(result2);
        Console.WriteLine(result2);

        chatHistory.Add(new ChatMessageContent(AuthorRole.User, "What is the weather like in Dublin?"));
        var result3 = await service.GetChatMessageContentAsync(chatHistory, executionSettings, kernel);
        chatHistory.Add(result3);
        Console.WriteLine(result3);

        chatHistory.Add(new ChatMessageContent(AuthorRole.User, "What is the weather like in Boston?"));
        var result4 = await service.GetChatMessageContentAsync(chatHistory, executionSettings, kernel);
        chatHistory.Add(result4);
        Console.WriteLine(result4);
    }

    private sealed class WeatherPlugin
    {
        [KernelFunction]
        [Description("Get the current weather in a given location.")]
        public string GetWeather(
            [Description("The city and department, e.g. Marseille, 13")] string location
        ) => $"12°C\nWind: 11 KMPH\nHumidity: 48%\nMostly cloudy\nLocation: {location}";
    }

    private Kernel CreateKernelWithPlugin<T>()
    {
        // Create a logging handler to output HTTP requests and responses
        var handler = new LoggingHandler(new HttpClientHandler(), this.Output);
        HttpClient httpClient = new(handler);

        // Create a kernel with OpenAI chat completion and WeatherPlugin
        IKernelBuilder kernelBuilder = Kernel.CreateBuilder();
        kernelBuilder.AddOpenAIChatCompletion(
            modelId: TestConfiguration.OpenAI.ChatModelId!,
            apiKey: TestConfiguration.OpenAI.ApiKey!,
            httpClient: httpClient);
        kernelBuilder.Plugins.AddFromType<T>();
        Kernel kernel = kernelBuilder.Build();
        return kernel;
    }
}
```

The request that gets sent the second time "What is the weather like in Boston?" is asked looks like this (note: there are always matching pairs of tool-call requests and responses):

```json
{
"messages": [
{
"content": "What is the weather like in Boston?",
"role": "user"
},
{
"content": null,
"tool_calls": [
{
"function": {
"name": "WeatherPlugin-GetWeather",
"arguments": "{\u0022location\u0022:\u0022Boston, MA\u0022}"
},
"type": "function",
"id": "call_WJIWldzAxcZIRoODZ6kPv3VJ"
}
],
"role": "assistant"
},
{
"content": "12\u00B0C\nWind: 11 KMPH\nHumidity: 48%\nMostly cloudy\nLocation: Boston, MA",
"tool_call_id": "call_WJIWldzAxcZIRoODZ6kPv3VJ",
"role": "tool"
},
{
"content": "The current weather in Boston, MA is mostly cloudy with a temperature of 12\u00B0C. The wind is blowing at 11 KMPH, and the humidity level is 48%.",
"role": "assistant"
},
{
"content": "What is the weather like in Paris?",
"role": "user"
},
{
"content": null,
"tool_calls": [
{
"function": {
"name": "WeatherPlugin-GetWeather",
"arguments": "{\u0022location\u0022:\u0022Paris, France\u0022}"
},
"type": "function",
"id": "call_KyVQ1UPhZmNPTytYzvA4TsOV"
}
],
"role": "assistant"
},
{
"content": "12\u00B0C\nWind: 11 KMPH\nHumidity: 48%\nMostly cloudy\nLocation: Paris, France",
"tool_call_id": "call_KyVQ1UPhZmNPTytYzvA4TsOV",
"role": "tool"
},
{
"content": "The current weather in Paris, France is mostly cloudy with a temperature of 12\u00B0C. The wind is blowing at 11 KMPH, and the humidity level is 48%.",
"role": "assistant"
},
{
"content": "What is the weather like in Dublin?",
"role": "user"
},
{
"content": null,
"tool_calls": [
{
"function": {
"name": "WeatherPlugin-GetWeather",
"arguments": "{\u0022location\u0022:\u0022Dublin, Ireland\u0022}"
},
"type": "function",
"id": "call_SSx7tbm3J7HqOmFfMPtrPUgX"
}
],
"role": "assistant"
},
{
"content": "12\u00B0C\nWind: 11 KMPH\nHumidity: 48%\nMostly cloudy\nLocation: Dublin, Ireland",
"tool_call_id": "call_SSx7tbm3J7HqOmFfMPtrPUgX",
"role": "tool"
},
{
"content": "The current weather in Dublin, Ireland is mostly cloudy with a temperature of 12\u00B0C. The wind is blowing at 11 KMPH, and the humidity level is 48%.",
"role": "assistant"
},
{
"content": "What is the weather like in Boston?",
"role": "user"
}
],
"temperature": 1,
"top_p": 1,
"n": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"model": "gpt-4o",
"tools": [
{
"function": {
"name": "WeatherPlugin-GetWeather",
"description": "Get the current weather in a given location.",
"parameters": {
"type": "object",
"required": [
"location"
],
"properties": {
"location": {
"type": "string",
"description": "The city and department, e.g. Marseille, 13"
}
}
}
},
"type": "function"
}
],
"tool_choice": "auto"
} The response from the LLM is shown below. In this case it doesn't request the function is called again.
For the failure you are seeing, it looks like you are not sending a result for the requested function call. You always need to do this, even if you are using a cached answer. I would suggest writing an
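To make "always send a result" concrete, here is a hedged sketch of serving a cached answer: every FunctionCallContent left pending in the history gets a matching FunctionResultContent, carrying the same call id, before the history goes back to the model. The cache dictionary and the fetch delegate are hypothetical names for illustration, not SK APIs.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Hedged sketch: complete every pending tool call before resending the history.
// "cache" and "fetchWeatherAsync" are hypothetical, not SK APIs.
static async Task AnswerPendingToolCallsAsync(
    ChatHistory chatHistory,
    IDictionary<string, string> cache,
    Func<string, Task<string>> fetchWeatherAsync)
{
    foreach (var call in chatHistory.Last().Items.OfType<FunctionCallContent>())
    {
        string location = call.Arguments?["location"]?.ToString() ?? string.Empty;

        // Use the cached/deduced value when available, otherwise fetch it;
        // either way, a tool message must answer this call id.
        string answer = cache.TryGetValue(location, out var cached)
            ? cached
            : await fetchWeatherAsync(location);

        chatHistory.Add(new ChatMessageContent(
            AuthorRole.Tool,
            new ChatMessageContentItemCollection { new FunctionResultContent(call, answer) }));
    }
}
```

This is essentially the invariant that ToolCallBehavior.AutoInvokeKernelFunctions maintains for you automatically: an assistant message containing tool_calls must never go back to the API without matching tool messages, and sending one back without them is the likely cause of the error here.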
-
Link to the sample: https://github.com/microsoft/semantic-kernel/pull/6917/files#diff-c4b62626e3de8cf4418ebfb2688b1c336bac87b7c9cf3d132126fcefd9243562